Add GitHub issue pipelines and prompts using gh CLI

gh-issue-impl, gh-issue-research, gh-issue-rewrite, gh-issue-update
pipelines with corresponding prompts for fetch-assess, plan,
implement, and create-pr steps.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 17:02:42 +01:00
parent fc24f9a8ab
commit 22370827ee
16 changed files with 1453 additions and 0 deletions


@@ -0,0 +1,121 @@
kind: WavePipeline
metadata:
  name: gh-issue-impl
  description: "Implement a GitHub issue end-to-end: fetch, assess, plan, implement, create PR"
input:
  source: cli
  schema:
    type: string
    description: "GitHub repository and issue number"
    example: "re-cinq/wave 42"
steps:
  - id: fetch-assess
    persona: implementer
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/github-issue-impl/fetch-assess.md
    output_artifacts:
      - name: assessment
        path: .wave/output/issue-assessment.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/issue-assessment.json
        schema_path: .wave/contracts/issue-assessment.schema.json
        must_pass: true
      on_failure: retry
      max_retries: 2
  - id: plan
    persona: implementer
    dependencies: [fetch-assess]
    memory:
      inject_artifacts:
        - step: fetch-assess
          artifact: assessment
          as: issue_assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
      base: main
    exec:
      type: prompt
      source_path: .wave/prompts/github-issue-impl/plan.md
    output_artifacts:
      - name: impl-plan
        path: .wave/output/impl-plan.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/impl-plan.json
        schema_path: .wave/contracts/issue-impl-plan.schema.json
        must_pass: true
      on_failure: retry
      max_retries: 2
  - id: implement
    persona: craftsman
    dependencies: [plan]
    memory:
      inject_artifacts:
        - step: fetch-assess
          artifact: assessment
          as: issue_assessment
        - step: plan
          artifact: impl-plan
          as: plan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/github-issue-impl/implement.md
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
      on_failure: retry
      max_retries: 3
    compaction:
      trigger: "token_limit_80%"
      persona: summarizer
  - id: create-pr
    persona: craftsman
    dependencies: [implement]
    memory:
      inject_artifacts:
        - step: fetch-assess
          artifact: assessment
          as: issue_assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/github-issue-impl/create-pr.md
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
      on_failure: retry
      max_retries: 2
    outcomes:
      - type: pr
        extract_from: .wave/output/pr-result.json
        json_path: .pr_url
        label: "Pull Request"
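The `outcomes` block above pulls `.pr_url` out of the final artifact. A rough sketch of that extraction against a hypothetical artifact file (the real runner presumably evaluates `json_path` itself; the `sed` expression here is only illustrative):

```shell
# Hypothetical artifact, matching the pr-result.json shape the pipeline expects
printf '%s\n' '{"pr_url":"https://github.com/re-cinq/wave/pull/7"}' > /tmp/pr-result.json

# Pull the .pr_url value out of the flat JSON object
PR_URL=$(sed -n 's/.*"pr_url" *: *"\([^"]*\)".*/\1/p' /tmp/pr-result.json)
echo "$PR_URL"
```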


@@ -0,0 +1,255 @@
kind: WavePipeline
metadata:
  name: gh-issue-research
  description: Research a GitHub issue and post findings as a comment
  release: true
input:
  source: cli
  example: "re-cinq/wave 42"
  schema:
    type: string
    description: "GitHub repository and issue number (e.g. 'owner/repo number')"
steps:
  - id: fetch-issue
    persona: github-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Fetch the GitHub issue specified in the input: {{ input }}
        The input format is "owner/repo issue_number" (e.g., "re-cinq/CFOAgent 112").
        Parse the input to extract the repository and issue number.
        Use the gh CLI to fetch the issue:
          gh issue view <number> --repo <owner/repo> --json number,title,body,labels,state,author,createdAt,url,comments
        Parse the output and produce structured JSON with the issue content.
        Include repository information in the output.
    output_artifacts:
      - name: issue-content
        path: .wave/output/issue-content.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/issue-content.json
        schema_path: .wave/contracts/issue-content.schema.json
      on_failure: retry
      max_retries: 3
  - id: analyze-topics
    persona: researcher
    dependencies: [fetch-issue]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Analyze the GitHub issue and extract research topics.
        Identify:
        1. Key technical questions that need external research
        2. Domain concepts that require clarification
        3. External dependencies, libraries, or tools to investigate
        4. Similar problems/solutions that might provide guidance
        For each topic, provide:
        - A unique ID (TOPIC-001, TOPIC-002, etc.)
        - A clear title
        - Specific questions to answer (1-5 questions per topic)
        - Search keywords for web research
        - Priority (critical/high/medium/low based on relevance to solving the issue)
        - Category (technical/documentation/best_practices/security/performance/compatibility/other)
        Focus on topics that will provide actionable insights for the issue author.
        Limit to 10 most important topics.
    output_artifacts:
      - name: topics
        path: .wave/output/research-topics.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-topics.json
        schema_path: .wave/contracts/research-topics.schema.json
      on_failure: retry
      max_retries: 2
  - id: research-topics
    persona: researcher
    dependencies: [analyze-topics]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
        - step: analyze-topics
          artifact: topics
          as: research_plan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Research the topics identified in the research plan.
        For each topic in the research plan:
        1. Execute web searches using the provided keywords
        2. Evaluate source credibility (official docs > authoritative > community)
        3. Extract relevant findings with key points
        4. Include direct quotes where helpful
        5. Rate your confidence in the answer (high/medium/low/inconclusive)
        For each finding:
        - Assign a unique ID (FINDING-001, FINDING-002, etc.)
        - Provide a summary (20-2000 characters)
        - List key points as bullet items
        - Include source URL, title, and type
        - Rate relevance to the topic (0-1)
        Always include source URLs for attribution.
        If a topic yields no useful results, mark confidence as "inconclusive".
        Document any gaps in the research.
    output_artifacts:
      - name: findings
        path: .wave/output/research-findings.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-findings.json
        schema_path: .wave/contracts/research-findings.schema.json
      on_failure: retry
      max_retries: 2
  - id: synthesize-report
    persona: summarizer
    dependencies: [research-topics]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: original_issue
        - step: research-topics
          artifact: findings
          as: research
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Synthesize the research findings into a coherent report for the GitHub issue.
        Create a well-structured research report that includes:
        1. Executive Summary:
           - Brief overview (50-1000 chars)
           - Key findings (1-7 bullet points)
           - Primary recommendation
           - Confidence assessment (high/medium/low)
        2. Detailed Findings:
           - Organize by topic/section
           - Include code examples where relevant
           - Reference sources using SRC-### IDs
        3. Recommendations:
           - Actionable items with IDs (REC-001, REC-002, etc.)
           - Priority and effort estimates
           - Maximum 10 recommendations
        4. Sources:
           - List all sources with IDs (SRC-001, SRC-002, etc.)
           - Include URL, title, type, and reliability
        5. Pre-rendered Markdown:
           - Generate complete markdown_content field ready for GitHub comment
           - Use proper headers, bullet points, and formatting
           - Include a header: "## Research Findings (Wave Pipeline)"
           - End with sources section
    output_artifacts:
      - name: report
        path: .wave/output/research-report.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-report.json
        schema_path: .wave/contracts/research-report.schema.json
      on_failure: retry
      max_retries: 2
  - id: post-comment
    persona: github-commenter
    dependencies: [synthesize-report]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
        - step: synthesize-report
          artifact: report
          as: report
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Post the research report as a comment on the GitHub issue.
        Steps:
        1. Read the issue details to get the repository and issue number
        2. Read the report to get the markdown_content
        3. Write the markdown content to a file, then use gh CLI to post the comment:
           # Write to file to avoid shell escaping issues with large markdown
           cat > /tmp/comment-body.md << 'COMMENT_EOF'
           <markdown_content>
           COMMENT_EOF
           gh issue comment <number> --repo <owner/repo> --body-file /tmp/comment-body.md
        4. Add a footer to the comment:
           ---
           *Generated by [Wave](https://github.com/re-cinq/wave) issue-research pipeline*
        5. Capture the result and verify success
        6. If successful, extract the comment URL from the output
        Record the result with:
        - success: true/false
        - issue_reference: issue number and repository
        - comment: id, url, body_length (if successful)
        - error: code, message, retryable (if failed)
        - timestamp: current time
    output_artifacts:
      - name: comment-result
        path: .wave/output/comment-result.json
        type: json
    outcomes:
      - type: url
        extract_from: .wave/output/comment-result.json
        json_path: .comment.url
        label: "Research Comment"
    handover:
      contract:
        type: json_schema
        source: .wave/output/comment-result.json
        schema_path: .wave/contracts/comment-result.schema.json
      on_failure: retry
      max_retries: 3
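The post-comment step writes the report body to a file before calling `gh issue comment --body-file`, avoiding shell-escaping problems with large markdown. A minimal sketch of assembling that body with the footer (the `MARKDOWN` content here is a placeholder for the report's `markdown_content` field):

```shell
# Placeholder standing in for the report's markdown_content
MARKDOWN='## Research Findings (Wave Pipeline)'

# Assemble body + footer into the file gh will read
{
  printf '%s\n\n' "$MARKDOWN"
  printf -- '---\n'
  printf '%s\n' '*Generated by [Wave](https://github.com/re-cinq/wave) issue-research pipeline*'
} > /tmp/comment-body.md

wc -l < /tmp/comment-body.md
```

The actual posting would then be `gh issue comment <number> --repo <owner/repo> --body-file /tmp/comment-body.md`, as the prompt shows.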


@@ -0,0 +1,187 @@
kind: WavePipeline
metadata:
  name: gh-issue-rewrite
  description: "Analyze and rewrite poorly documented GitHub issues"
  release: true
input:
  source: cli
  example: "re-cinq/wave 42"
  schema:
    type: string
    description: "GitHub repository, optionally with issue number (e.g. 'owner/repo' or 'owner/repo 42')"
steps:
  - id: scan-issues
    persona: github-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        MANDATORY: You MUST call the Bash tool. NEVER say "gh CLI not installed" without trying.
        Input: {{ input }}
        Parse the input to determine the mode:
        - If the input contains a number after the repo (e.g. "re-cinq/wave 42"), this is SINGLE ISSUE mode.
          Extract the repo (first token) and issue number (second token).
        - If the input is just a repo (e.g. "re-cinq/wave"), this is BATCH mode.
        Execute these commands using the Bash tool:
        1. gh --version
        2a. SINGLE ISSUE mode: Parse the repo and number from {{ input }}, then run:
            gh issue view <NUMBER> --repo <REPO> --json number,title,body,labels,url
        2b. BATCH mode: gh issue list --repo {{ input }} --limit 10 --json number,title,body,labels,url
        After getting REAL results from Bash, analyze issues and score them.
        In single issue mode, analyze the one issue. In batch mode, analyze all returned issues.
    output_artifacts:
      - name: issue_analysis
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/github-issue-analysis.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
  - id: plan-enhancements
    persona: github-analyst
    dependencies: [scan-issues]
    memory:
      inject_artifacts:
        - step: scan-issues
          artifact: issue_analysis
          as: analysis
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        The analysis artifact contains poor_quality_issues from the scan step.
        For EACH issue in poor_quality_issues, use gh CLI to fetch the current body:
          gh issue view <NUMBER> --repo {{ input }} --json body
        Then create an enhancement plan with:
        - issue_number: the issue number
        - suggested_title: improved title (or keep original if good)
        - body_template: enhanced body text (improve the existing body, add missing sections)
        - suggested_labels: appropriate labels
        - enhancements: list of changes being made
        Create an enhancement plan with fields:
        issues_to_enhance (array of issue_number, suggested_title, body_template,
        suggested_labels, enhancements) and total_to_enhance.
    output_artifacts:
      - name: enhancement_plan
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/github-enhancement-plan.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
  - id: apply-enhancements
    persona: github-enhancer
    dependencies: [plan-enhancements]
    memory:
      inject_artifacts:
        - step: plan-enhancements
          artifact: enhancement_plan
          as: plan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        CRITICAL: You MUST use the Bash tool for all commands. Do NOT generate fake output.
        Step 1: Use Bash tool to verify gh works:
          gh --version
        Step 2: For EACH issue in the plan, use Bash tool to apply changes:
        - If suggested_title differs from current: gh issue edit <N> --repo {{ input }} --title "suggested_title"
        - If body_template is provided: gh issue edit <N> --repo {{ input }} --body "body_template"
        - If suggested_labels: gh issue edit <N> --repo {{ input }} --add-label "label1,label2"
        Step 3: For each issue, capture the URL: gh issue view <N> --repo {{ input }} --json url --jq .url
        Step 4: Record the results with fields: enhanced_issues (each with issue_number,
        success, changes_made, url), total_attempted, total_successful, total_failed.
    output_artifacts:
      - name: enhancement_results
        path: .wave/artifact.json
        type: json
        required: true
    outcomes:
      - type: issue
        extract_from: .wave/artifact.json
        json_path: .enhanced_issues[0].url
        label: "Enhanced Issue"
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/github-enhancement-results.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
  - id: verify-enhancements
    persona: github-analyst
    dependencies: [apply-enhancements]
    memory:
      inject_artifacts:
        - step: apply-enhancements
          artifact: enhancement_results
          as: results
        - step: scan-issues
          artifact: issue_analysis
          as: original_analysis
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        For each enhanced issue, verify with: gh issue view <N> --repo {{ input }} --json title,labels
        Compile a verification report with fields:
        total_enhanced, successful_enhancements, failed_enhancements, and summary.
    output_artifacts:
      - name: verification_report
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/github-verification-report.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
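The scan-issues prompt branches on whether the input carries an issue number. The mode test reduces to "does the input contain a space", which can be sketched in shell (the `detect_mode` helper is illustrative, not part of the pipeline):

```shell
# Hypothetical mode switch for 'owner/repo [number]' input
detect_mode() {
  case "$1" in
    *" "*) echo single ;;  # repo plus issue number
    *)     echo batch  ;;  # repo only
  esac
}

detect_mode "re-cinq/wave 42"
detect_mode "re-cinq/wave"
```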


@@ -0,0 +1,184 @@
kind: WavePipeline
metadata:
  name: gh-issue-update
  description: "Refresh a stale GitHub issue by comparing it against recent codebase changes"
  release: true
input:
  source: cli
  example: "re-cinq/wave 45 -- acceptance criteria are outdated after the worktree refactor"
  schema:
    type: string
    description: "owner/repo number [-- optional criticism or direction]"
steps:
  - id: gather-context
    persona: github-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        MANDATORY: You MUST call the Bash tool. NEVER say "gh CLI not installed" without trying.
        Input: {{ input }}
        Parse the input:
        - Split on " -- " to separate the repo+number from optional criticism.
        - The first part is "<owner/repo> <number>". Extract REPO (first token) and NUMBER (second token).
        - If there is text after " -- ", that is the user's CRITICISM about what's wrong with the issue.
        - If there is no " -- ", criticism is empty.
        Execute these commands using the Bash tool:
        1. gh --version
        2. Fetch the full issue:
           gh issue view NUMBER --repo REPO --json number,title,body,labels,url,createdAt,comments
        3. Get commits since the issue was created (cap at 100):
           git log --since="<createdAt>" --oneline -100
        4. Get releases since the issue was created:
           gh release list --repo REPO --limit 20
           Then filter to only releases after the issue's createdAt date.
        5. Scan the issue body for file path references (anything matching patterns like
           `internal/...`, `cmd/...`, `.wave/...`, or backtick-quoted paths).
           For each referenced file, check if it still exists using `ls -la <path>`.
        6. Read CLAUDE.md for current project context:
           Read the file CLAUDE.md from the repository root.
        After gathering ALL data, produce a JSON result matching the contract schema.
    output_artifacts:
      - name: issue_context
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/issue-update-context.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
  - id: draft-update
    persona: github-analyst
    dependencies: [gather-context]
    memory:
      inject_artifacts:
        - step: gather-context
          artifact: issue_context
          as: context
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        MANDATORY: You MUST call the Bash tool for any commands. NEVER generate fake output.
        The context artifact contains the gathered issue context.
        Your task: Compare the original issue against the codebase changes and draft an updated version.
        Step 1: Analyze each section of the issue body. Classify each as:
        - STILL_VALID: Content is accurate and up-to-date
        - OUTDATED: Content references old behavior, removed files, or superseded patterns
        - INCOMPLETE: Content is partially correct but missing recent developments
        - WRONG: Content is factually incorrect given current codebase state
        Step 2: If there is user criticism (non-empty "criticism" field), address EVERY point raised.
        The criticism takes priority — it represents what the issue author thinks is wrong.
        Step 3: Draft the updated issue:
        - Preserve sections classified as STILL_VALID (do not rewrite what works)
        - Rewrite OUTDATED and WRONG sections to reflect current reality
        - Expand INCOMPLETE sections with missing information
        - If the title needs updating, draft a new title
        - Append a "---\n**Changes since original**" section at the bottom listing what changed and why
        Step 4: If file paths in the issue body are now missing (from referenced_files.missing),
        update or remove those references.
        Produce a JSON result matching the contract schema.
    output_artifacts:
      - name: update_draft
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/issue-update-draft.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
  - id: apply-update
    persona: github-enhancer
    dependencies: [draft-update]
    memory:
      inject_artifacts:
        - step: draft-update
          artifact: update_draft
          as: draft
        - step: gather-context
          artifact: issue_context
          as: context
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        CRITICAL: You MUST use the Bash tool for all commands. Do NOT generate fake output.
        Step 1: Use Bash tool to verify gh works:
          gh --version
        Step 2: Extract the repo as "<owner>/<name>" and the issue number from the available artifacts.
        Step 3: Apply the update:
        - If title_changed is true:
            gh issue edit <NUMBER> --repo <REPO> --title "<updated_title>"
        - Write the updated_body to a temp file, then apply it:
            Write updated_body to /tmp/issue-body.md
            gh issue edit <NUMBER> --repo <REPO> --body-file /tmp/issue-body.md
        - Clean up /tmp/issue-body.md after applying.
        Step 4: Verify the update was applied:
          gh issue view <NUMBER> --repo <REPO> --json number,title,body,url
        Compare the returned title and body against what was intended. Flag any discrepancies.
        Step 5: Record the results as a JSON object matching the contract schema.
    output_artifacts:
      - name: update_result
        path: .wave/artifact.json
        type: json
        required: true
    outcomes:
      - type: issue
        extract_from: .wave/artifact.json
        json_path: .url
        label: "Updated Issue"
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/issue-update-result.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
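The gather-context step's input parse — split on `" -- "`, then tokenize the first part — can be sketched with POSIX parameter expansion (variable names are illustrative; the agent performs this parse itself):

```shell
INPUT='re-cinq/wave 45 -- acceptance criteria are outdated after the worktree refactor'

# Split off optional criticism after " -- "; empty when the separator is absent
case "$INPUT" in
  *" -- "*) HEAD="${INPUT%% -- *}"; CRITICISM="${INPUT#* -- }" ;;
  *)        HEAD="$INPUT";          CRITICISM="" ;;
esac

REPO="${HEAD%% *}"    # first token: owner/repo
NUMBER="${HEAD##* }"  # second token: issue number
echo "$REPO $NUMBER :: $CRITICISM"
```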


@@ -0,0 +1,76 @@
You are creating a pull request for the implemented GitHub issue.
Input: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by the
plan step and is already checked out. All git operations here are isolated from
the main working tree.
Read the issue assessment artifact to find the issue number, repository, branch name, and issue URL.
## SAFETY: Do NOT Modify the Working Tree
This step MUST NOT run `git checkout`, `git stash`, or any command that changes
the current branch or working tree state. The branch already exists and carries the
implement step's commits; just push it and create the PR.
## Instructions
### Step 1: Load Context
From the issue assessment artifact, extract:
- Issue number and title
- Repository (`owner/repo`)
- Branch name
- Issue URL
### Step 2: Push the Branch
Push the feature branch without checking it out:
```bash
git push -u origin <BRANCH_NAME>
```
### Step 3: Create Pull Request
Create the PR using `gh pr create` with `--head` to target the branch. The PR body MUST include `Closes #<NUMBER>` to auto-close the issue on merge.
```bash
gh pr create --repo <OWNER/REPO> --head <BRANCH_NAME> --title "<concise title>" --body "$(cat <<'EOF'
## Summary
<3-5 bullet points describing the changes>
Closes #<ISSUE_NUMBER>
## Changes
<list of key files changed and why>
## Test Plan
<how the changes were validated>
EOF
)"
```
### Step 4: Request Copilot Review (Best-Effort)
After the PR is created, attempt to add Copilot as a reviewer:
```bash
gh pr edit --add-reviewer "copilot"
```
This is a best-effort command. If Copilot isn't available in the repository, the command will fail; treat that failure as non-fatal, since the PR has already been created.
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT run `git checkout`, `git stash`, or any branch-switching commands
- The PR body MUST contain `Closes #<NUMBER>` to link to the issue
- Do NOT include Co-Authored-By or AI attribution in commits
## Output
Produce a JSON status report matching the injected output schema.


@@ -0,0 +1,79 @@
You are fetching a GitHub issue and assessing whether it has enough detail to implement.
Input: {{ input }}
The input format is `owner/repo number` (e.g. `re-cinq/wave 42`).
## Working Directory
You are running in an isolated Wave workspace. The `gh` CLI works from any
directory when using the `--repo` flag, so no directory change is needed.
## Instructions
### Step 1: Parse Input
Extract the repository (`owner/repo`) and issue number from the input string.
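A minimal shell sketch of this parse (the variable names are illustrative, not part of the pipeline contract):

```shell
# Split "owner/repo number" into its two tokens
INPUT="re-cinq/wave 42"
REPO="${INPUT%% *}"    # everything before the first space
NUMBER="${INPUT##* }"  # everything after the last space
echo "repo=$REPO number=$NUMBER"
```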
### Step 2: Fetch Issue
Use the `gh` CLI to fetch the issue with full details:
```bash
gh issue view <NUMBER> --repo <OWNER/REPO> --json number,title,body,url,labels,state,author,comments
```
### Step 3: Assess Implementability
Evaluate the issue against these criteria:
1. **Clear description**: Does the issue describe what needs to change? (not just "X is broken")
2. **Sufficient context**: Can you identify which code/files are affected?
3. **Testable outcome**: Are there acceptance criteria, or can you infer them from the description?
Score the issue 0-100:
- **80-100**: Well-specified, clear requirements, acceptance criteria present
- **60-79**: Adequate detail, some inference needed but feasible
- **40-59**: Marginal — missing key details but core intent is clear
- **0-39**: Too vague to implement — set `implementable` to `false`
### Step 4: Determine Skip Steps
Based on the issue quality, decide which speckit steps can be skipped:
- Issues with detailed specs can skip `specify`, `clarify`, `checklist`, `analyze`
- Issues with moderate detail might skip `specify` and `clarify` only
- Vague issues should skip nothing (but those should fail the assessment)
### Step 5: Generate Branch Name
Create a branch name using the pattern `<NNN>-<short-name>` where:
- `<NNN>` is the issue number zero-padded to 3 digits
- `<short-name>` is 2-3 words from the issue title, kebab-case
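The pattern above can be sketched with standard tools; the title and the "first three words" heuristic here are assumptions for illustration:

```shell
NUMBER=42
TITLE="Fix worktree cleanup on abort"  # hypothetical issue title

# Lowercase, replace non-alphanumerics with '-', keep the first three words
SHORT=$(printf '%s' "$TITLE" | tr '[:upper:]' '[:lower:]' | tr -cs 'a-z0-9' '-' | cut -d- -f1-3)
BRANCH=$(printf '%03d-%s' "$NUMBER" "$SHORT")
echo "$BRANCH"
```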
### Step 6: Assess Complexity
Estimate implementation complexity:
- **trivial**: Single file change, obvious fix (typo, config tweak)
- **simple**: 1-3 files, straightforward logic change
- **medium**: 3-10 files, new feature with tests
- **complex**: 10+ files, architectural changes, cross-cutting concerns
## CRITICAL: Implementability Gate
If the issue does NOT have enough detail to implement:
- Set `"implementable": false` in the output
- This will cause the contract validation to fail, aborting the pipeline
- Include `missing_info` listing what specific information is needed
- Include a `summary` explaining why the issue cannot be implemented as-is
If the issue IS implementable:
- Set `"implementable": true`
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT modify the issue — this is read-only assessment
## Output
Produce a JSON assessment matching the injected output schema.


@@ -0,0 +1,87 @@
You are implementing a GitHub issue according to the plan and task breakdown.
Input: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by the
plan step and is already checked out. All git operations here are isolated from
the main working tree.
## Instructions
### Step 1: Load Context
1. Get the issue details and branch name from the issue assessment artifact
2. Get the task breakdown, file changes, and feature directory from the plan artifact
### Step 2: Read Plan Files
Navigate to the feature directory and read:
- `spec.md` — the full specification
- `plan.md` — the implementation plan
- `tasks.md` — the phased task breakdown
### Step 3: Execute Implementation
Follow the task breakdown phase by phase:
**Setup first**: Initialize project structure, dependencies, configuration
**Tests before code (TDD)**:
- Write tests that define expected behavior
- Run tests to confirm they fail for the right reason
- Implement the code to make tests pass
**Core development**: Implement the changes specified in the plan
**Integration**: Wire components together, update imports, middleware
**Polish**: Edge cases, error handling, documentation updates
### Step 4: Validate Between Phases
After each phase, run:
```bash
go test -race ./...
```
If tests fail, fix the issue before proceeding to the next phase.
### Step 5: Mark Completed Tasks
As you complete each task, mark it as `[X]` in `tasks.md`.
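Flipping a checkbox is a simple text substitution; a sketch against a throwaway file (the task line is a hypothetical example):

```shell
# Demo tasks file with one open task
printf -- '- [ ] Task 1.1: Setup project structure\n' > /tmp/tasks-demo.md

# Mark Task 1.1 as complete
sed 's/^- \[ \] Task 1.1:/- [X] Task 1.1:/' /tmp/tasks-demo.md > /tmp/tasks-demo.out
cat /tmp/tasks-demo.out
```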
### Step 6: Final Validation
After all tasks are complete:
1. Run `go test -race ./...` one final time
2. Verify all tasks in `tasks.md` are marked complete
3. Stage and commit all changes:
```bash
git add -A
git reset HEAD -- .wave/artifacts .wave/output .claude CLAUDE.md 2>/dev/null || true
git commit -m "feat: implement #<ISSUE_NUMBER> — <short description>"
```
Commit changes to the worktree branch.
## Agent Usage — USE UP TO 6 AGENTS
Maximize parallelism with up to 6 Task agents for independent work:
- Agents 1-2: Setup and foundational tasks (Phase 1-2)
- Agents 3-4: Core implementation tasks (parallelizable [P] tasks)
- Agent 5: Test writing and validation
- Agent 6: Integration and polish tasks
Coordinate agents to respect task dependencies:
- Sequential tasks (no [P] marker) must complete before dependents start
- Parallel tasks [P] affecting different files can run simultaneously
- Run test validation between phases
## Error Handling
- If a task fails, halt dependent tasks but continue independent ones
- Provide clear error context for debugging
- If tests fail, fix the issue before proceeding to the next phase


@@ -0,0 +1,90 @@
You are creating an implementation plan for a GitHub issue.
Input: {{ input }}
## Working Directory
You are running in an **isolated git worktree** checked out at `main` (detached HEAD).
Your working directory IS the project root. All git operations here are isolated
from the main working tree and will not affect it.
Use `create-new-feature.sh` to create the feature branch from this clean starting point.
## Instructions
### Step 1: Read Assessment
From the issue assessment artifact, extract:
- Issue number, title, body, and repository
- Branch name from the assessment
- Complexity estimate
- Which speckit steps were skipped
### Step 2: Create Feature Branch
Use the `create-new-feature.sh` script to create a properly numbered branch:
```bash
.specify/scripts/bash/create-new-feature.sh --json --number <ISSUE_NUMBER> --short-name "<SHORT_NAME>" "<ISSUE_TITLE>"
```
If the branch already exists (e.g. from a resume), check it out instead:
```bash
git checkout <BRANCH_NAME>
```
### Step 3: Write Spec from Issue
In the feature directory (e.g. `specs/<BRANCH_NAME>/`), create `spec.md` with:
- Issue title as heading
- Full issue body
- Labels and metadata
- Any acceptance criteria extracted from the issue
- Link back to the original issue URL
### Step 4: Create Implementation Plan
Write `plan.md` in the feature directory with:
1. **Objective**: What the issue asks for (1-2 sentences)
2. **Approach**: High-level strategy
3. **File Mapping**: Which files need to be created/modified/deleted
4. **Architecture Decisions**: Any design choices made
5. **Risks**: Potential issues and mitigations
6. **Testing Strategy**: What tests are needed
### Step 5: Create Task Breakdown
Write `tasks.md` in the feature directory with a phased breakdown:
```markdown
# Tasks
## Phase 1: Setup
- [ ] Task 1.1: Description
- [ ] Task 1.2: Description
## Phase 2: Core Implementation
- [ ] Task 2.1: Description [P] (parallelizable)
- [ ] Task 2.2: Description [P]
## Phase 3: Testing
- [ ] Task 3.1: Write unit tests
- [ ] Task 3.2: Write integration tests
## Phase 4: Polish
- [ ] Task 4.1: Documentation updates
- [ ] Task 4.2: Final validation
```
Mark parallelizable tasks with `[P]`.
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT start implementation — only planning in this step
- Do NOT use WebSearch — all information is in the issue and codebase
## Output
Produce a JSON status report matching the injected output schema.


@@ -0,0 +1,47 @@
You are performing a cross-artifact consistency and quality analysis across the
specification, plan, and tasks before implementation begins.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.analyze` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks`
to find FEATURE_DIR and locate spec.md, plan.md, tasks.md
3. Load all three artifacts and build semantic models:
- Requirements inventory from spec.md
- User story/action inventory with acceptance criteria
- Task coverage mapping from tasks.md
- Constitution rule set from `.specify/memory/constitution.md`
4. Run detection passes (limit to 50 findings total):
- **Duplication**: Near-duplicate requirements across artifacts
- **Ambiguity**: Vague adjectives, unresolved placeholders
- **Underspecification**: Requirements missing outcomes, tasks missing file paths
- **Constitution alignment**: Conflicts with MUST principles
- **Coverage gaps**: Requirements with no tasks, tasks with no requirements
- **Inconsistency**: Terminology drift, data entity mismatches, ordering contradictions
5. Assign severity: CRITICAL / HIGH / MEDIUM / LOW
6. Produce a compact analysis report (do NOT modify files — read-only analysis)
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT use WebSearch — all information is in the spec artifacts
- This is a READ-ONLY analysis — do NOT modify any files
## Output
Produce a JSON analysis report matching the injected output schema.
IMPORTANT: If CRITICAL issues are found, document them clearly but do NOT block
the pipeline. The implement step will handle resolution.


@@ -0,0 +1,40 @@
You are generating quality checklists to validate requirement completeness before
implementation.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.checklist` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/check-prerequisites.sh --json` to get FEATURE_DIR
3. Load feature context: spec.md, plan.md, tasks.md
4. Generate focused checklists as "unit tests for requirements":
- Each item tests the QUALITY of requirements, not the implementation
- Use format: `- [ ] CHK### - Question about requirement quality [Dimension]`
- Group by quality dimensions: Completeness, Clarity, Consistency, Coverage
5. Create the following checklist files in `FEATURE_DIR/checklists/`:
- `review.md` — overall requirements quality validation
- Additional domain-specific checklists as warranted by the feature
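Two entries in `review.md` might look like (wording hypothetical):

```
- [ ] CHK001 - Are success criteria defined measurably for every user story? [Completeness]
- [ ] CHK002 - Is "session" used with a single consistent meaning across the spec? [Consistency]
```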
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT use WebSearch — all information is in the spec artifacts
## Checklist Anti-Patterns (AVOID)
- WRONG: "Verify the button clicks correctly" (tests implementation)
- RIGHT: "Are interaction requirements defined for all clickable elements?" (tests requirements)
## Output
Produce a JSON status report matching the injected output schema.


@@ -0,0 +1,42 @@
You are refining a feature specification by identifying and resolving ambiguities.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.clarify` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` to confirm paths
3. Load the current spec and perform a focused ambiguity scan across:
- Functional scope and domain model
- Integration points and edge cases
- Terminology consistency
4. Generate up to 5 clarification questions (prioritized)
5. For each question, select the best option based on codebase context
6. Integrate each resolution directly into the spec file
7. Save the updated spec
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT use WebSearch — all clarifications should be resolved from codebase
context and the existing spec. The specify step already did the research.
- Keep the scope tight: only fix genuine ambiguities, don't redesign the spec
## Non-Interactive Mode
Since this runs in a pipeline, resolve all clarifications autonomously:
- Select the recommended option based on codebase patterns and existing architecture
- Document the rationale for each choice in the Clarifications section
- Err on the side of commonly accepted industry standards
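A resolved entry in the Clarifications section might read (question and answer are hypothetical):

```
- Q: Should record deletion be soft or hard? → A: Soft delete, matching the
  existing archive pattern in the codebase (rationale: consistency with current models).
```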
## Output
Produce a JSON status report matching the injected output schema.


@@ -0,0 +1,53 @@
You are creating a pull request for the implemented feature and requesting a review.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
1. Find the branch name and feature directory from the spec info artifact
2. **Verify implementation**: Run `go test -race ./...` one final time to confirm
all tests pass. If tests fail, fix them before proceeding.
3. **Stage changes**: Review all modified and new files with `git status` and `git diff`.
Stage relevant files — exclude any sensitive files (.env, credentials).
4. **Commit**: Create a well-structured commit (or multiple commits if logical):
- Use conventional commit prefixes: `feat:`, `fix:`, `refactor:`, `test:`, `docs:`
- Write concise commit messages focused on the "why"
- Do NOT include Co-Authored-By or AI attribution lines
5. **Push**: Push the branch to the remote repository:
```bash
git push -u origin HEAD
```
6. **Create Pull Request**: Use `gh pr create` with a descriptive summary:
```bash
gh pr create --title "<concise title>" --body "<PR body with summary and test plan>"
```
The PR body should include:
- Summary of changes (3-5 bullet points)
- Link to the spec file in the specs/ directory
- Test plan describing how changes were validated
- Any known limitations or follow-up work needed
7. **Request Copilot Review**: After the PR is created, request a review from Copilot:
```bash
gh pr edit --add-reviewer "copilot"
```
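One way to keep the multi-line PR body intact through shell quoting is to write it to a file first and pass `--body-file`. A sketch (title and spec path are hypothetical placeholders):

```shell
# Assemble the PR body in a file so multi-line Markdown survives quoting.
cat > pr-body.md <<'EOF'
## Summary
- Implement feature per specs/NNN-short-name/spec.md
- Add unit and integration coverage

## Test Plan
- `go test -race ./...` passes locally
EOF
# then: gh pr create --title "feat: <concise title>" --body-file pr-body.md
```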
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
## Output
Produce a JSON status report matching the injected output schema.


@@ -0,0 +1,49 @@
You are implementing a feature according to the specification, plan, and task breakdown.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.implement` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks`
to find FEATURE_DIR, load tasks.md, plan.md, and all available artifacts
3. Check checklist status — if any are incomplete, note them but proceed
4. Parse tasks.md and extract phase structure, dependencies, and execution order
5. Execute implementation phase-by-phase:
**Setup first**: Initialize project structure, dependencies, configuration
**Tests before code**: Write tests for contracts and entities (TDD approach)
**Core development**: Implement models, services, CLI commands, endpoints
**Integration**: Database connections, middleware, logging, external services
**Polish**: Unit tests, performance optimization, documentation
6. For each completed task, mark it as `[X]` in tasks.md
7. Run `go test -race ./...` after each phase to catch regressions early
8. Final validation: verify all tasks complete, tests pass, spec requirements met
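Step 6 above can be scripted. A hypothetical helper that flips one checklist entry from `[ ]` to `[X]` by task ID (assumes lines shaped like `- [ ] T003 ...`; adjust the pattern if tasks.md uses a different shape):

```shell
# Mark a single task done in a tasks.md-style checklist file.
mark_done() {
  local id="$1" file="${2:-tasks.md}"
  # \b keeps T003 from also matching T0030; assumes GNU sed.
  sed -i "s/^- \[ \] ${id}\b/- [X] ${id}/" "$file"
}
```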
## Agent Usage — USE UP TO 6 AGENTS
Maximize parallelism with up to 6 Task agents for independent work:
- Agents 1-2: Setup and foundational tasks (Phase 1-2)
- Agents 3-4: Core implementation tasks (parallelizable [P] tasks)
- Agent 5: Test writing and validation
- Agent 6: Integration and polish tasks
Coordinate agents to respect task dependencies:
- Sequential tasks (no [P] marker) must complete before dependents start
- Parallel tasks [P] affecting different files can run simultaneously
- Run test validation between phases
## Error Handling
- If a task fails, halt dependent tasks but continue independent ones
- Provide clear error context for debugging
- If tests fail, fix the issue before proceeding to the next phase


@@ -0,0 +1,41 @@
You are creating an implementation plan for a feature specification.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.plan` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/setup-plan.sh --json` to get FEATURE_SPEC, IMPL_PLAN,
SPECS_DIR, and BRANCH paths
3. Load the feature spec and `.specify/memory/constitution.md`
4. Follow the plan template phases:
**Phase 0 — Outline & Research**:
- Extract unknowns from the spec (NEEDS CLARIFICATION markers, tech decisions)
- Research best practices for each technology choice
- Consolidate findings into `research.md` with Decision/Rationale/Alternatives
**Phase 1 — Design & Contracts**:
- Extract entities from spec → write `data-model.md`
- Generate API contracts from functional requirements → `/contracts/`
- Run `.specify/scripts/bash/update-agent-context.sh claude`
5. Evaluate constitution compliance at each phase gate
6. Stop after Phase 1 — report branch, plan path, and generated artifacts
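A single `research.md` entry under the Decision/Rationale/Alternatives shape might look like (content hypothetical):

```
## Decision: Embedded SQLite for local state
Rationale: zero-dependency store that matches the single-binary CLI model.
Alternatives considered: BoltDB (no SQL querying), Postgres (operational overhead).
```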
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT use WebSearch — all information is in the spec and codebase
## Output
Produce a JSON status report matching the injected output schema.


@@ -0,0 +1,50 @@
You are creating a feature specification for the following request:
{{ input }}
## Working Directory
You are running in an **isolated git worktree** checked out at `main` (detached HEAD).
Your working directory IS the project root. All git operations here are isolated
from the main working tree and will not affect it.
Use `create-new-feature.sh` to create the feature branch from this clean starting point.
## Instructions
Follow the `/speckit.specify` workflow to generate a complete feature specification:
1. Generate a concise short name (2-4 words) for the feature branch
2. Check existing branches to determine the next available number:
```bash
git fetch --all --prune
git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-'
git branch | grep -E '^[* ]*[0-9]+-'
```
3. Run the feature creation script:
```bash
.specify/scripts/bash/create-new-feature.sh --json --number <N> --short-name "<name>" "{{ input }}"
```
4. Load `.specify/templates/spec-template.md` for the required structure
5. Write the specification to the SPEC_FILE returned by the script
6. Create the quality checklist at `FEATURE_DIR/checklists/requirements.md`
7. Run self-validation against the checklist (up to 3 iterations)
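The branch-numbering check in step 2 can be folded into a small helper that takes branch names on stdin and prints the next zero-padded number (a sketch; feed it the combined local and remote branch list):

```shell
# Given branch names on stdin, print the next "NNN" feature number.
# Prints nothing if no numbered branches exist yet.
next_number() {
  grep -E '^[0-9]+-' | cut -d- -f1 | sort -n | tail -n1 \
    | awk '{printf "%03d\n", $1 + 1}'
}
```

For example, `git branch --format='%(refname:short)' | next_number` covers the local side.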
## Agent Usage
Use 1-3 Task agents to parallelize independent work:
- Agent 1: Analyze the codebase to understand existing patterns and architecture
- Agent 2: Research domain-specific best practices for the feature
- Agent 3: Draft specification sections in parallel
## Quality Standards
- Focus on WHAT and WHY, not HOW (no implementation details)
- Every requirement must be testable and unambiguous
- Maximum 3 `[NEEDS CLARIFICATION]` markers — make informed guesses for the rest
- Include user stories with acceptance criteria, data model, edge cases
- Success criteria must be measurable and technology-agnostic
## Output
Produce a JSON status report matching the injected output schema.


@@ -0,0 +1,52 @@
You are generating an actionable, dependency-ordered task breakdown for implementation.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.tasks` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/check-prerequisites.sh --json` to get FEATURE_DIR
and AVAILABLE_DOCS
3. Load from FEATURE_DIR:
- **Required**: plan.md (tech stack, structure), spec.md (user stories, priorities)
- **Optional**: data-model.md, contracts/, research.md, quickstart.md
4. Execute task generation:
- Extract user stories with priorities (P1, P2, P3) from spec.md
- Map entities and endpoints to user stories
- Generate tasks organized by user story
5. Write `tasks.md` following the strict checklist format:
```
- [ ] [TaskID] [P?] [Story?] Description with file path
```
6. Organize into phases:
- Phase 1: Setup (project initialization)
- Phase 2: Foundational (blocking prerequisites)
- Phase 3+: One phase per user story (priority order)
- Final: Polish & cross-cutting concerns
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT use WebSearch — all information is in the spec artifacts
- Keep the scope tight: generate tasks from existing artifacts only
## Quality Requirements
- Every task must have a unique ID (T001, T002...), description, and file path
- Mark parallelizable tasks with [P]
- Each user story phase must be independently testable
- Tasks must be specific enough for an LLM to complete without additional context
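A task line meeting these requirements might look like (ID, story tag, and path are hypothetical):

```
- [ ] T012 [P] [US1] Add input validation to internal/api/handler.go
```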
## Output
Produce a JSON status report matching the injected output schema.