
Conversation


@ryderstorm ryderstorm commented Nov 13, 2025

Why?

This PR improves the spec-driven development workflow by reorganizing workflow steps and improving output formatting. It addresses two usability issues: context was not being incorporated into scope evaluation or clarifying questions, and task lists were hard to scan because of their flat formatting.

Evidence:

  • Commit bd83253 reorganizes workflow steps to create directory structure first
  • Commit ebea517 improves task list readability with hierarchical markdown
  • Commit 3fa1931 removes redundant instruction for clarity
  • Commit 37b159e adds validation report output location specification

What Changed?

Refactored prompt files to improve workflow organization and output formatting. The changes ensure consistent file organization from the start and make task lists significantly easier to read and navigate.

Key Changes:

Workflow Reorganization (generate-spec.md):

  • Added dedicated Step 1 for spec directory creation before any other work
  • Moved Context Assessment to Step 2 (before scope assessment)
  • Renumbered all subsequent steps consistently throughout the document
  • Introduced Questions File Workflow with iterative rounds ([NN]-questions-[N]-[feature-name].md)
  • Updated all internal step references to maintain document coherence
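The round-based questions-file naming can be sketched with a small helper. This is purely illustrative: the prompts only define the `[NN]-questions-[N]-[feature-name].md` pattern, not any tooling, and the function name here is hypothetical.

```python
# Hypothetical helper mirroring the naming pattern from the prompt:
# [NN]-questions-[N]-[feature-name].md, where [NN] is the zero-padded
# spec sequence number and [N] is the question round (1, 2, ...).

def questions_filename(seq: int, round_num: int, feature: str) -> str:
    """Build a round-N questions filename, e.g. 03-questions-1-user-auth.md."""
    return f"{seq:02d}-questions-{round_num}-{feature}.md"

# First round of questions for the third spec in the folder:
print(questions_filename(3, 1, "user-authentication"))
# 03-questions-1-user-authentication.md
```

A follow-up round would simply increment the round number, yielding `03-questions-2-user-authentication.md`.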

Task List Format Improvements (generate-task-list-from-spec.md):

  • Converted flat bullet lists to hierarchical markdown structure using ### and #### headers
  • Separated Demo Criteria, Proof Artifacts, and Tasks into distinct sections
  • Added "TBD" placeholder for Tasks section in Phase 2 output format
  • Improved consistency between Phase 2 (parent tasks only) and Phase 3 (complete with sub-tasks) formats
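The hierarchical Phase 2 shape described above can be sketched as a tiny renderer. The heading wording is an assumption for illustration; the authoritative format lives in `generate-task-list-from-spec.md` itself.

```python
# Illustrative only: renders one parent task in the hierarchical Phase 2
# layout (### heading per task, #### subsections, "TBD" for deferred
# sub-tasks). Exact heading text is defined by the prompt file, not here.

def render_phase2_task(num, title, demo_criteria, proof_artifacts):
    lines = [f"### Task {num}.0: {title}", ""]
    lines += ["#### Demo Criteria", ""]
    lines += [f"- {c}" for c in demo_criteria] + [""]
    lines += ["#### Proof Artifacts", ""]
    lines += [f"- {p}" for p in proof_artifacts] + [""]
    # Phase 2 defers sub-tasks to Phase 3, hence the TBD placeholder.
    lines += ["#### Tasks", "", "- TBD"]
    return "\n".join(lines)

print(render_phase2_task(
    1, "Create spec directory",
    ["Running the workflow creates the directory before any other work"],
    ["CLI output showing the new directory under ./docs/specs/"],
))
```

In Phase 3 the only change would be replacing the `- TBD` line with enumerated sub-tasks (1.1, 1.2, ...), which keeps the two phases structurally parallel.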

Validation Output Location (validate-spec-implementation.md):

  • Added "Saving The Output" section specifying validation report file location
  • Aligns validation report output with other workflow outputs (specs, tasks, proofs)
  • Ensures consistent file organization: ./docs/specs/[NN]-spec-[feature-name]/[NN]-validation-[feature-name].md
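The path convention above can be expressed as a short sketch; the helper name is hypothetical, but the directory and filename pattern follow the convention quoted from the prompt.

```python
# Sketch of the validation-report path convention:
# ./docs/specs/[NN]-spec-[feature-name]/[NN]-validation-[feature-name].md
from pathlib import Path

def validation_report_path(seq: int, feature: str) -> Path:
    """Derive the validation report location from the spec directory."""
    spec_dir = Path("docs/specs") / f"{seq:02d}-spec-{feature}"
    return spec_dir / f"{seq:02d}-validation-{feature}.md"

print(validation_report_path(1, "user-authentication").as_posix())
# docs/specs/01-spec-user-authentication/01-validation-user-authentication.md
```

Deriving the path rather than asking for it keeps the validation report co-located with the spec, tasks, and proof artifacts it refers to.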

Documentation Cleanup:

  • Removed redundant instruction about handling clarification requests (already covered in workflow steps)

Files Modified:

  • prompts/generate-spec.md (+52 insertions, -20 deletions)
  • prompts/generate-task-list-from-spec.md (+113 insertions, -41 deletions)
  • prompts/validate-spec-implementation.md (+14 insertions)

Summary by CodeRabbit

  • Documentation
    • New initial "create spec directory" step, explicit directory/file naming and verification before proceeding.
    • Added Questions File workflow with STOP/WAIT semantics, iterative rounds and round naming conventions.
    • Expanded spec template into multi-section, unit-based layout emphasizing Purpose, Functional Requirements, and Proof Artifacts.
    • Reworked task-list outputs to hierarchical per-task Markdown with dedicated Proof Artifact(s) sections.
    • Standardized phrasing (Proof Artifacts vs demo criteria), clarified constraints, non-goals, and guidance for saving validation reports.



coderabbitai bot commented Nov 13, 2025

Warning

Rate limit exceeded

@ryderstorm has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 22 minutes and 32 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between c3ccfd7 and a112277.

📒 Files selected for processing (4)
  • prompts/generate-spec.md (9 hunks)
  • prompts/generate-task-list-from-spec.md (7 hunks)
  • prompts/manage-tasks.md (7 hunks)
  • prompts/validate-spec-implementation.md (9 hunks)

Walkthrough

Replaces “demo criteria” with “proof artifacts” across prompts, formalizes spec-directory creation and naming, restructures spec-generation steps (questions file process with STOP/WAIT and iterative rounds), and changes task-list and validation outputs to hierarchical, artifact-driven formats and saving conventions.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Spec Generation Workflow**<br>`prompts/generate-spec.md` | Rewrites spec generation: adds explicit Step 1 to create/verify a spec directory and naming conventions; reorders/renames steps (Context Assessment, Initial Scope, Clarifying Questions, Spec Generation, Review/Refine); formalizes Questions File format/process (STOP/WAIT semantics, per-round files); defines Units with Purpose/Functional Requirements/Proof Artifacts; standardizes ALWAYS/NEVER constraints and output file paths. |
| **Task List Formatting**<br>`prompts/generate-task-list-from-spec.md` | Converts Phase 2/3 outputs to hierarchical Markdown per parent task: titled headings, separate "Proof Artifact(s)" and "Tasks" sections, enumerated sub-tasks, and replaces inline demo criteria with structured proof-artifact entries and standardized headings/labels. |
| **Spec Validation Guidance**<br>`prompts/validate-spec-implementation.md` | Reframes validation to center Proof Artifacts (replaces demo-criteria language), updates verification checks to include artifact existence and evidence, revises rubrics/tables to reflect artifact-driven results, and adds explicit "Saving The Output" guidance and naming/location conventions for validation reports. |
| **Task Management Prompts**<br>`prompts/manage-tasks.md` | Swaps demo-centric verification language for proof-artifact terminology across task selection, execution, and completion steps; updates verification sections to require proof-artifact review and adjusts phase checklists and headings accordingly. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant User
    participant Agent
    participant FS as FileSystem

    rect rgb(230,245,230)
    Note over Agent,FS: Spec generation with questions-file rounds
    end

    User->>Agent: Request spec generation for feature
    Agent->>FS: Create & verify spec directory (./docs/specs/.../<NN>-<title>/)
    FS-->>Agent: Confirm directory exists
    Agent->>User: Emit Context Assessment & Initial Scope prompts
    User-->>Agent: Populate Questions File (round-01.md) — STOP/WAIT
    Agent->>User: Iterate clarifying rounds as needed (round-02.md...)
    Agent->>Agent: Build spec.md (Units, FRs, Proof Artifacts)
    Agent->>FS: Save `<NN>-spec-<feature>.md` in spec directory
    FS-->>Agent: Confirm saved
    Agent->>User: Present spec for Review/Refine (loop)
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Pay attention to:
    • Consistency of renumbered steps, cross-references, and ALWAYS/NEVER constraints in prompts/generate-spec.md.
    • Correct heading numbering and example formats in prompts/generate-task-list-from-spec.md.
    • Validation checks and the new “Saving The Output” naming/location details in prompts/validate-spec-implementation.md.


Suggested reviewers

  • RobertKelly

Poem

🐰 I hopped through docs and made a map,
Folders, questions, each proof in a wrap,
Files saved in rounds, specs neat and bright,
Tasks stacked and verifiable by light,
Hop—now approve, and share a carrot-slice delight! 🥕

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title accurately captures the main refactoring focus: improving spec generation workflow organization and task list output formatting through hierarchical markdown and step reorganization. |
| Description check | ✅ Passed | The PR description is comprehensive and well-structured, covering motivation, detailed key changes across all modified files, evidence via commits, and alignment with the provided template requirements. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (3)
prompts/generate-spec.md (3)

139-179: Questions File Workflow introduces needed structure but needs clarity on file discovery.

The iterative rounds workflow is well-designed: [NN]-questions-[N]-[feature-name].md with incrementing [N] for each round. However, the handoff at line 172 ("After the user indicates they have saved their answers, read the file and continue the conversation") assumes the AI will proactively read the file. Consider adding explicit guidance: e.g., "Ask the user to respond 'Answers saved' when ready, then use the Read tool to retrieve the file." This prevents ambiguity about when to proceed.

Additionally, line 177 mentions "use context from earlier responses"—clarify whether earlier responses are retained from in-memory conversation history or should be extracted from previous questions files (e.g., [NN]-questions-1-[feature-name].md). This is critical for multi-round clarity.


141-141: Minor grammar: Use hyphen for compound modifier.

The phrase "multiple choice answers" should be "multiple-choice answers" (line 141) to follow standard English hyphenation rules for compound modifiers preceding a noun.

Apply this diff:

-Ask clarifying questions to gather sufficient detail. **Always provide multiple choice answers** with numbered or lettered options to allow the user to make selections easily. Focus on understanding the "what" and "why" rather than the "how."
+Ask clarifying questions to gather sufficient detail. **Always provide multiple-choice answers** with numbered or lettered options to allow the user to make selections easily. Focus on understanding the "what" and "why" rather than the "how."

236-245: Repository Standards section in spec template appropriately reinforces workflow alignment.

The expanded guidance in the spec generation template (lines 236–245) now explicitly asks for "Coding standards and style guides," "Architectural patterns," "Testing conventions," and "Commit conventions." This mirrors the context assessment step and ensures generated specs are grounded in repository practices. However, the parenthetical note at line 245 ("If no specific standards are identified, state 'Follow established repository patterns and conventions.'") is a sensible fallback but slightly circular—consider requiring developers to cite at least one repository artifact (e.g., "See CONTRIBUTING.md section X") to enforce rigor.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b8875bf and 3fa1931.

📒 Files selected for processing (2)
  • prompts/generate-spec.md (8 hunks)
  • prompts/generate-task-list-from-spec.md (2 hunks)
🧰 Additional context used
🪛 LanguageTool
prompts/generate-spec.md

[grammar] ~141-~141: Use a hyphen to join words.
Context: ...icient detail. Always provide multiple choice answers with numbered or letter...

(QB_NEW_EN_HYPHEN)

🔇 Additional comments (7)
prompts/generate-task-list-from-spec.md (2)

137-191: Phase 2 output format is clear and actionable.

The hierarchical structure with Demo Criteria → Proof Artifacts → Tasks (TBD) provides a well-organized template. The "TBD" placeholder correctly signals that sub-tasks are deferred to Phase 3. The use of checkboxes ([ ]) and consistent numbering (1.0, 2.0, 3.0) makes the format easy to follow and parse programmatically.


193-266: Phase 3 format maintains consistency while expanding task detail.

The parallel structure between Phase 2 and Phase 3 is excellent—only the Tasks section changes from "TBD" to enumerated sub-tasks (1.1, 1.2, 2.1, 2.2, etc.). This maintains cognitive continuity while signaling clear progression. The "Relevant Files" section (lines 198-212) at the top of Phase 3 provides necessary context before task breakdown.

prompts/generate-spec.md (5)

58-64: Process overview clearly signals workflow expansion.

The updated sequence (Steps 1–6) makes the progression explicit: Create Directory → Assess Context → Evaluate Scope → Ask Questions → Generate Spec → Review. This is a logical progression that ensures foundational steps (directory creation, context gathering) precede specification work. The numbering is clear and easy to reference.


65-75: Step 1: Create Spec Directory addition ensures consistent project structure.

Formalizing directory creation as Step 1 is a solid best practice—it guarantees all downstream files (questions, spec, tasks, proofs) land in a predictable location. The naming convention ([NN]-spec-[feature-name]/) is clear, and the verification checkpoint ("Confirm the directory exists before proceeding to Step 2") prevents downstream path errors.


77-93: Context Assessment step appropriately balances guidance with optionality.

Line 79's clarification—"If working in a pre-existing project, begin by..."—sets realistic expectations. The expanded Repository Standards guidance (lines 85–91) is well-placed and actionable. Line 93's directive ("Use this context to inform scope validation and requirements, not to drive technical decisions") correctly prevents premature design decisions.


267-282: Review and Refinement + Output Requirements sections are well-placed.

Step 6 (lines 267–276) provides concrete review prompts to ensure spec completeness. The Output Requirements section (lines 278–282) now includes a concrete example ("For feature 'user authentication'...") that demonstrates the naming and path conventions, making it much easier for users to apply. This is a helpful addition.


309-325: Final Instructions sequence is now complete and actionable.

The updated final instructions enumerate the full 9-step workflow (Create Directory → Assessment → Scope → Questions → Spec Generation → Save → Review → Guide → Stop). This is comprehensive and leaves no ambiguity about the expected sequence. Line 323 ("Save file as...") is now explicit (previously implied). The emphasis on "Stop working once user confirms spec is complete" (line 325) appropriately caps the phase.

@ryderstorm ryderstorm self-assigned this Nov 13, 2025

@RobertKelly RobertKelly left a comment


Really just had questions on the directory/file structure and wanted to know if the files were committed or transient. Would love to discuss.


**Directory Structure:**

- **Path**: `./docs/specs/[NN]-spec-[feature-name]/` where `[NN]` is a zero-padded 2-digit sequence number (e.g., `01`, `02`, `03`)


Is this accurate? I would suggest the feature name or story id would be a folder level here rather than sequences stacking up in the specs folder. Also is the word "spec" redundant if it's in the specs directory?


I may just be missing the need for the leading sequence in the folder path. Would like to discuss.

**Questions File Workflow:**

If the feature involves existing systems, briefly review the codebase and existing docs to understand:

1. **Create Questions File**: Save questions to `[NN]-questions-[N]-[feature-name].md` where `[N]` is the round number (starting at 1, incrementing for each new round).


Should the questions also be in the respective specs folder for each feature? I'm just conceptualizing how this lays out so the folder doesn't just grow and grow with contents from feature to feature. (Assuming they get checked in)


**Format:** Markdown (`.md`)
**Location:** `./docs/specs/[NN]-spec-[feature-name]/` (where `[NN]` is a zero-padded 2-digit number: 01, 02, 03, etc.)
**Filename:** `[NN]-validation-[feature-name].md` (e.g., if the Spec is `01-spec-user-authentication.md`, save as `01-validation-user-authentication.md`)


I think I'm understanding the sequence ID but may contend that a story/task unique ID is what I imagined. Again, would just like to discuss so I can come along on the journey.

- Add dedicated Step 1 for spec directory creation
- Renumber all subsequent steps (1→2, 2→3, etc.)
- Add Questions File Workflow for iterative Q&A process
- Update all step references throughout document
- Fix typo: 'pre-existing'

Update generate-task-list-from-spec prompt to use hierarchical markdown structure for better readability.

- Change from flat bullet lists to hierarchical headers (### and ####)
- Convert Demo Criteria and Proof Artifacts to bulleted lists
- Add TBD placeholder for Tasks section in Phase 2 output format

Remove bullet point about handling clarification requests in the current
questions file, as this is already covered by the workflow steps above.

Add 'Saving The Output' section specifying where validation reports should be saved, aligning with other workflow output file locations in the spec directory structure.
@ryderstorm ryderstorm force-pushed the feat/improve-spec-and-task-generation branch from 37b159e to faa1992 Compare November 17, 2025 21:13

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (4)
prompts/generate-spec.md (4)

168-179: Clarify the file location for iterative questions files.

The Questions File Workflow introduces rounds of questions files ([NN]-questions-[N]-[feature-name].md), but it's unclear whether these files should be saved in the spec directory created in Step 1 or elsewhere. Given the [NN] prefix (which matches the spec directory numbering), explicitly state that these files belong in ./docs/specs/[NN]-spec-[feature-name]/.

This addresses the concern raised in past reviews about how files are organized and prevents ambiguity for workflow users.

Apply this diff to clarify the file location:

 **Questions File Workflow:**
 
-1. **Create Questions File**: Save questions to `[NN]-questions-[N]-[feature-name].md` where `[N]` is the round number (starting at 1, incrementing for each new round).
+1. **Create Questions File**: Save questions to `./docs/specs/[NN]-spec-[feature-name]/[NN]-questions-[N]-[feature-name].md` where `[N]` is the round number (starting at 1, incrementing for each new round).
 2. **Point User to File**: Direct the user to the questions file and instruct them to answer the questions directly in the file.
 3. **Wait for Confirmation**: After the user indicates they have saved their answers, read the file and continue the conversation.
-4. **Follow-Up Rounds**: If answers reveal new questions, create a new questions file with incremented round number (`[NN]-questions-[N+1]-[feature-name].md`) and repeat the process.
+4. **Follow-Up Rounds**: If answers reveal new questions, create a new questions file with incremented round number (`./docs/specs/[NN]-spec-[feature-name]/[NN]-questions-[N+1]-[feature-name].md`) and repeat the process.

141-141: Minor grammar: hyphenate compound adjective.

"Multiple choice" used as a compound adjective should be hyphenated as "multiple-choice."

Apply this diff:

-Ask clarifying questions to gather sufficient detail. **Always provide multiple choice answers** with numbered or lettered options to allow the user to make selections easily. Focus on understanding the "what" and "why" rather than the "how."
+Ask clarifying questions to gather sufficient detail. **Always provide multiple-choice answers** with numbered or lettered options to allow the user to make selections easily. Focus on understanding the "what" and "why" rather than the "how."

204-214: Verify spec template section clarity for implementers.

The Demoable Units section now uses hierarchical formatting with Purpose, Demo Criteria, and Proof Artifacts subsections. This is well-structured, but verify that junior developers understand the distinction:

  • Purpose: Why this slice exists and who it serves (motivation)
  • Demo Criteria: Observable behaviors that prove it works (acceptance test mindset)
  • Proof Artifacts: Concrete evidence (URLs, CLI output, test names)

The section headers are clear, but consider adding a brief inline comment if users report confusion between "Demo Criteria" (the what) and "Proof Artifacts" (the evidence).


309-325: Reconcile the process overview with final instructions: step count mismatch.

The Process Overview (lines 58–63) lists 6 steps ending with "Review and Refine," but the Final Instructions (lines 309–325) list 9 items, adding "Save," "Review and Refine," "Guide User," and "Stop" as separate steps. This inconsistency may confuse users about the true workflow structure.

Ensure both sections align: either consolidate to 6 steps or expand the Process Overview to match the 9-step final breakdown.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 37b159e and faa1992.

📒 Files selected for processing (3)
  • prompts/generate-spec.md (8 hunks)
  • prompts/generate-task-list-from-spec.md (2 hunks)
  • prompts/validate-spec-implementation.md (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • prompts/validate-spec-implementation.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • prompts/generate-task-list-from-spec.md
🧰 Additional context used
🪛 LanguageTool
prompts/generate-spec.md

[grammar] ~141-~141: Use a hyphen to join words.
Context: ...icient detail. Always provide multiple choice answers with numbered or letter...

(QB_NEW_EN_HYPHEN)

🔇 Additional comments (1)
prompts/generate-spec.md (1)

278-282: Output path and example are clear and consistent.

The specification file location and example correctly reflect the directory structure created in Step 1 and provide a helpful concrete example.

Add a generic example template showing the expected format for clarifying
questions with numbered questions, lettered options, and checkboxes. Remove
redundant instruction about providing multiple choice answers.

…units

Move functional requirements from separate section into each Demoable Unit
to make specs more self-contained and easier to read. Each unit now contains
all requirements needed for implementation, reducing cross-referencing and
making the spec structure more actionable.

…acts

Remove separate 'Demo Criteria' sections and consolidate verification into
self-documenting proof artifacts. Each proof artifact now describes what
it demonstrates (e.g., 'Screenshot: `--help` output demonstrates new
command exists').

Changes:
- Remove Demo Criteria from spec and task list templates
- Update proof artifacts format to include descriptive language
- Update all workflow references from demo criteria to proof artifacts
- Rename 'Evidence & Output' to 'Verification Result' in validation table
- Update all verification checklists to focus on proof artifacts

This simplifies the workflow by making proof artifacts the single source
of truth for what needs to be demonstrated.

Refactor validation process to consistently focus on proof artifact
verification rather than code implementation analysis. This improves
cohesion and clarity throughout the validation prompt.

Changes:
- Update Evaluation Rubric (R1, R5) to focus on proof artifacts
- Refactor Step 4 verification to test proof artifacts instead of
  searching code files
- Rename 'Requirement Implementation' to 'Requirement Coverage'
- Update Coverage Matrix examples to use proof artifact results
- Restructure Validation Issues section with clearer format and
  table examples
- Remove code analysis directives (line numbers, file content checks)
- Add commit log analysis to repository compliance checks

This aligns validation with its stated goal: verifying proof artifacts
demonstrate spec requirements, not analyzing code implementation.

Add explicit STOP AND WAIT instructions to prevent AI from jumping
straight from creating clarifying questions to generating the spec
without waiting for user responses. Reorganize workflow section for
better clarity and add critical enforcement points.

- Rename sections for better clarity (Process Overview → Spec Generation Overview)
- Add explicit requirement to report scope assessment results
- Make context assessment required instead of optional
- Standardize question format with uppercase option letters
- Remove redundant Final Instructions section
- Improve formatting consistency throughout
@ryderstorm

@coderabbitai full review


coderabbitai bot commented Nov 21, 2025

✅ Actions performed

Full review triggered.

