Feature request: UI doc review skill — review UI description docs from naive user perspective #851

@heartInsert

Description

Summary

Before generating production HTML with /design-html, there's a critical missing step: validating that the UI description document is clear enough for both AI and end users to understand.

I'd like to propose a new skill (/ui-doc-review) that reviews UI/interaction/product design documents from the perspective of a completely non-technical target user — someone who doesn't know what APIs, CSS, components, or breakpoints are. They only care about: what does this page look like? what can I do? what happens when I click?

Problem

When AI generates HTML from a UI description doc, garbage in = garbage out. If the doc is ambiguous, has missing interaction states, or is full of technical jargon, the AI will guess — and guess wrong. Currently there's no quality gate between "write a UI doc" and "generate HTML from it."

Proposed Workflow

  1. User writes a UI description doc (markdown)
  2. /ui-doc-review docs/my-page-design.md ← this new skill
  3. AI role-plays as the most naive target user, reads the doc section by section
  4. Outputs a structured review report with:
    • Scores across 6 dimensions (1-10 scale):
      • Visualizability — can I picture this page in my head?
      • Interaction Completeness — what happens when I click/tap/hover?
      • User Journey Coverage — are all my scenarios covered?
      • Ambiguity & Contradiction — does the doc contradict itself?
      • Information Density — too much jargon or filler?
      • AI Actionability — can an AI build this without guessing?
    • Prioritized problem list (P0/P1/P2)
    • Naive user questions — "what if I want to see yesterday's data?"
    • AI-consumable fix instructions — concrete find-and-replace or insert directives that another AI can execute to fix the doc automatically
  5. User (or AI) fixes the doc based on the report
  6. Then proceed to /design-html
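As a concrete sketch of the structured report from step 4 — the field names and overall shape here are hypothetical, not a committed schema:

```python
# Hypothetical shape of a /ui-doc-review report. Field names are
# illustrative only; the actual skill would define its own schema.
from dataclasses import dataclass


@dataclass
class Problem:
    priority: str   # "P0" | "P1" | "P2"
    dimension: str  # one of the six scored dimensions
    note: str       # the naive-user complaint, in their own words


@dataclass
class ReviewReport:
    scores: dict          # dimension name -> score on a 1-10 scale
    problems: list        # Problem entries, P0 first
    user_questions: list  # naive-user questions about missing scenarios
    fix_directives: list  # AI-consumable edit instructions for the doc


report = ReviewReport(
    scores={
        "Visualizability": 3,
        "Interaction Completeness": 2,
        "User Journey Coverage": 4,
        "Ambiguity & Contradiction": 5,
        "Information Density": 3,
        "AI Actionability": 2,
    },
    problems=[
        Problem("P0", "Visualizability",
                "I have no idea what this page looks like."),
    ],
    user_questions=["What if I want to see yesterday's data?"],
    fix_directives=[],
)

# One possible quality gate: block /design-html while any dimension
# scores below a threshold.
weakest = min(report.scores.values())
print(weakest)
```

A gate on the weakest dimension (rather than an average) matches the intent: a single ambiguous area is enough to make the generated HTML a guess.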

Why This Matters

This fits naturally into the gstack design pipeline:

  • /design-consultation → establish design system
  • /design-shotgun → explore visual variants
  • /ui-doc-review → validate the doc is clear ← NEW
  • /design-html → generate production HTML

The review report format is deliberately structured for AI consumption, so it can be piped into an automated doc-fix step.
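To make "piped into an automated doc-fix step" concrete, here is a minimal sketch of applying fix directives to a doc. The directive format (`replace` / `insert_after` operations) is an assumption for illustration, not the skill's actual output format:

```python
# Minimal sketch of an automated doc-fix step. The directive format
# ("replace" / "insert_after") is hypothetical.
def apply_fix_directives(doc: str, directives: list) -> str:
    """Apply find-and-replace / insert directives to a markdown doc."""
    for d in directives:
        if d["op"] == "replace":
            doc = doc.replace(d["find"], d["with"])
        elif d["op"] == "insert_after":
            # Insert a new line directly after the anchor text.
            doc = doc.replace(d["anchor"], d["anchor"] + "\n" + d["text"])
    return doc


doc = "## User Management\nComponent tree: UserTable (AG Grid)\n"
directives = [
    {"op": "replace",
     "find": "UserTable (AG Grid)",
     "with": "a table listing all users, one row per user"},
    {"op": "insert_after",
     "anchor": "## User Management",
     "text": "Clicking a row opens an edit panel that slides in from the right."},
]

fixed = apply_fix_directives(doc, directives)
print(fixed)
```

Because the directives are plain data, either a human or a second AI pass can execute them, and the review/fix loop can run unattended until the scores clear the gate.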

Example: What It Catches

Given a doc like:

```markdown
## User Management
Adopt React Hook Form + Zod validation with TanStack Query for data fetching.
Component tree: UserTable (AG Grid, server-side row model) → UserEditDrawer → RoleSelector (RBAC, hierarchical)
```

The skill would flag:

  • P0 [Visualizability]: "I have no idea what this page looks like. What's AG Grid? What does 'server-side row model' mean? I just want to see a list of users."
  • P0 [Interaction]: "What happens when I click on a user? Does a drawer open? From which side? How do I save changes?"
  • P1 [Ambiguity]: "Is 'RoleSelector (RBAC, hierarchical)' a dropdown? A tree? A modal?"

I've Built a Prototype

I have a working local implementation of this skill, including the full SKILL.md definition, test docs (among them an intentionally bad doc for exercising the checks), and an evaluation framework. Happy to contribute it as a PR if there's interest.


This would pair especially well with /design-html and /plan-design-review — catch doc quality issues before they become design implementation issues.
