Add Gitea issue pipelines and prompts using tea CLI

gt-issue-impl, gt-issue-research, gt-issue-rewrite, gt-issue-update
pipelines with corresponding prompts. Mirrors the gh-issue-* variants
but uses tea CLI with --login librete for Gitea authentication.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 17:02:48 +01:00
parent 22370827ee
commit 1230c6a538
8 changed files with 1079 additions and 0 deletions


@@ -0,0 +1,121 @@
kind: WavePipeline
metadata:
name: gt-issue-impl
description: "Implement a Gitea issue end-to-end: fetch, assess, plan, implement, create PR"
input:
source: cli
schema:
type: string
description: "Gitea repository and issue number"
example: "public/librenotes 42"
steps:
- id: fetch-assess
persona: implementer
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source_path: .wave/prompts/gitea-issue-impl/fetch-assess.md
output_artifacts:
- name: assessment
path: .wave/output/issue-assessment.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/issue-assessment.json
schema_path: .wave/contracts/issue-assessment.schema.json
must_pass: true
on_failure: retry
max_retries: 2
- id: plan
persona: implementer
dependencies: [fetch-assess]
memory:
inject_artifacts:
- step: fetch-assess
artifact: assessment
as: issue_assessment
workspace:
type: worktree
branch: "{{ pipeline_id }}"
base: main
exec:
type: prompt
source_path: .wave/prompts/gitea-issue-impl/plan.md
output_artifacts:
- name: impl-plan
path: .wave/output/impl-plan.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/impl-plan.json
schema_path: .wave/contracts/issue-impl-plan.schema.json
must_pass: true
on_failure: retry
max_retries: 2
- id: implement
persona: craftsman
dependencies: [plan]
memory:
inject_artifacts:
- step: fetch-assess
artifact: assessment
as: issue_assessment
- step: plan
artifact: impl-plan
as: plan
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source_path: .wave/prompts/gitea-issue-impl/implement.md
handover:
contract:
type: test_suite
command: "{{ project.test_command }}"
must_pass: true
on_failure: retry
max_retries: 3
compaction:
trigger: "token_limit_80%"
persona: summarizer
- id: create-pr
persona: craftsman
dependencies: [implement]
memory:
inject_artifacts:
- step: fetch-assess
artifact: assessment
as: issue_assessment
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source_path: .wave/prompts/gitea-issue-impl/create-pr.md
output_artifacts:
- name: pr-result
path: .wave/output/pr-result.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/pr-result.json
schema_path: .wave/contracts/pr-result.schema.json
must_pass: true
on_failure: retry
max_retries: 2
outcomes:
- type: pr
extract_from: .wave/output/pr-result.json
json_path: .pr_url
label: "Pull Request"
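The `outcomes` block above pulls `.pr_url` out of the result artifact. A rough shell sketch of that extraction (the artifact path and field follow the pipeline above; the URL is a placeholder, and using `sed` in place of a real JSON parser is a simplifying assumption):

```shell
# Write a sample artifact like the create-pr step would (URL is a placeholder)
printf '%s' '{"pr_url":"https://gitea.example/public/librenotes/pulls/7"}' > /tmp/pr-result.json

# Extract the pr_url field, roughly what json_path ".pr_url" resolves to
pr_url=$(sed -n 's/.*"pr_url":"\([^"]*\)".*/\1/p' /tmp/pr-result.json)
echo "$pr_url"
```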


@@ -0,0 +1,257 @@
kind: WavePipeline
metadata:
name: gt-issue-research
description: Research a Gitea issue and post findings as a comment
release: true
input:
source: cli
example: "public/librenotes 42"
schema:
type: string
description: "Gitea repository and issue number (e.g. 'owner/repo number')"
steps:
- id: fetch-issue
persona: gitea-analyst
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
Fetch the Gitea issue specified in the input: {{ input }}
The input format is "owner/repo issue_number" (e.g., "public/librenotes 42").
Parse the input to extract the repository and issue number.
Use the tea CLI to fetch the issue:
tea issues list --repo <owner/repo> --login librete -o json -f index,title,body,labels,state,url --limit 50
Then filter the JSON output for the issue matching the requested number.
Parse the output and produce structured JSON with the issue content.
Include repository information in the output.
output_artifacts:
- name: issue-content
path: .wave/output/issue-content.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/issue-content.json
schema_path: .wave/contracts/issue-content.schema.json
on_failure: retry
max_retries: 3
- id: analyze-topics
persona: researcher
dependencies: [fetch-issue]
memory:
inject_artifacts:
- step: fetch-issue
artifact: issue-content
as: issue
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
Analyze the Gitea issue and extract research topics.
Identify:
1. Key technical questions that need external research
2. Domain concepts that require clarification
3. External dependencies, libraries, or tools to investigate
4. Similar problems/solutions that might provide guidance
For each topic, provide:
- A unique ID (TOPIC-001, TOPIC-002, etc.)
- A clear title
- Specific questions to answer (1-5 questions per topic)
- Search keywords for web research
- Priority (critical/high/medium/low based on relevance to solving the issue)
- Category (technical/documentation/best_practices/security/performance/compatibility/other)
Focus on topics that will provide actionable insights for the issue author.
Limit to 10 most important topics.
output_artifacts:
- name: topics
path: .wave/output/research-topics.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/research-topics.json
schema_path: .wave/contracts/research-topics.schema.json
on_failure: retry
max_retries: 2
- id: research-topics
persona: researcher
dependencies: [analyze-topics]
memory:
inject_artifacts:
- step: fetch-issue
artifact: issue-content
as: issue
- step: analyze-topics
artifact: topics
as: research_plan
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
Research the topics identified in the research plan.
For each topic in the research plan:
1. Execute web searches using the provided keywords
2. Evaluate source credibility (official docs > authoritative > community)
3. Extract relevant findings with key points
4. Include direct quotes where helpful
5. Rate your confidence in the answer (high/medium/low/inconclusive)
For each finding:
- Assign a unique ID (FINDING-001, FINDING-002, etc.)
- Provide a summary (20-2000 characters)
- List key points as bullet items
- Include source URL, title, and type
- Rate relevance to the topic (0-1)
Always include source URLs for attribution.
If a topic yields no useful results, mark confidence as "inconclusive".
Document any gaps in the research.
output_artifacts:
- name: findings
path: .wave/output/research-findings.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/research-findings.json
schema_path: .wave/contracts/research-findings.schema.json
on_failure: retry
max_retries: 2
- id: synthesize-report
persona: summarizer
dependencies: [research-topics]
memory:
inject_artifacts:
- step: fetch-issue
artifact: issue-content
as: original_issue
- step: research-topics
artifact: findings
as: research
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
Synthesize the research findings into a coherent report for the Gitea issue.
Create a well-structured research report that includes:
1. Executive Summary:
- Brief overview (50-1000 chars)
- Key findings (1-7 bullet points)
- Primary recommendation
- Confidence assessment (high/medium/low)
2. Detailed Findings:
- Organize by topic/section
- Include code examples where relevant
- Reference sources using SRC-### IDs
3. Recommendations:
- Actionable items with IDs (REC-001, REC-002, etc.)
- Priority and effort estimates
- Maximum 10 recommendations
4. Sources:
- List all sources with IDs (SRC-001, SRC-002, etc.)
- Include URL, title, type, and reliability
5. Pre-rendered Markdown:
- Generate complete markdown_content field ready for Gitea comment
- Use proper headers, bullet points, and formatting
- Include a header: "## Research Findings (Wave Pipeline)"
- End with sources section
output_artifacts:
- name: report
path: .wave/output/research-report.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/research-report.json
schema_path: .wave/contracts/research-report.schema.json
on_failure: retry
max_retries: 2
- id: post-comment
persona: gitea-commenter
dependencies: [synthesize-report]
memory:
inject_artifacts:
- step: fetch-issue
artifact: issue-content
as: issue
- step: synthesize-report
artifact: report
as: report
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
Post the research report as a comment on the Gitea issue.
Steps:
1. Read the issue details to get the repository and issue number
2. Read the report to get the markdown_content
3. Write the markdown content to a file, then use tea CLI to post the comment:
# Write to file to avoid shell escaping issues with large markdown
cat > /tmp/comment-body.md << 'COMMENT_EOF'
<markdown_content>
COMMENT_EOF
tea comment <number> "$(cat /tmp/comment-body.md)" --repo <owner/repo> --login librete
4. Add a footer to the comment:
---
*Generated by [Wave](https://github.com/re-cinq/wave) issue-research pipeline*
5. Capture the result and verify success
6. Clean up temp files
Record the result with:
- success: true/false
- issue_reference: issue number and repository
- comment: id, url, body_length (if successful)
- error: code, message, retryable (if failed)
- timestamp: current time
output_artifacts:
- name: comment-result
path: .wave/output/comment-result.json
type: json
outcomes:
- type: url
extract_from: .wave/output/comment-result.json
json_path: .comment.url
label: "Research Comment"
handover:
contract:
type: json_schema
source: .wave/output/comment-result.json
schema_path: .wave/contracts/comment-result.schema.json
on_failure: retry
max_retries: 3
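The comment-body handling in the post-comment step can be sketched as follows (the `<markdown_content>` placeholder stands in for the report field; the footer text matches the prompt above):

```shell
# Build the comment body in a temp file to avoid shell-escaping issues
cat > /tmp/comment-body.md <<'COMMENT_EOF'
## Research Findings (Wave Pipeline)
<markdown_content>
COMMENT_EOF

# Append the attribution footer
printf '%s\n' '---' '*Generated by [Wave](https://github.com/re-cinq/wave) issue-research pipeline*' >> /tmp/comment-body.md
wc -l < /tmp/comment-body.md
```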


@@ -0,0 +1,194 @@
kind: WavePipeline
metadata:
name: gt-issue-rewrite
description: "Analyze and rewrite poorly documented Gitea issues"
release: true
input:
source: cli
example: "public/librenotes 42"
schema:
type: string
description: "Gitea repository, optionally with issue number (e.g. 'owner/repo' or 'owner/repo 42')"
steps:
- id: scan-issues
persona: gitea-analyst
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
MANDATORY: You MUST call the Bash tool. NEVER say "tea CLI not installed" without trying.
Input: {{ input }}
Parse the input to determine the mode:
- If the input contains a number after the repo (e.g. "public/librenotes 42"), this is SINGLE ISSUE mode.
Extract the repo (first token) and issue number (second token).
- If the input is just a repo (e.g. "public/librenotes"), this is BATCH mode.
Execute these commands using the Bash tool:
1. tea --version
2a. SINGLE ISSUE mode: Parse the repo and number from {{ input }}, then run:
tea issues list --repo <REPO> --login librete -o json -f index,title,body,labels,url --limit 50
Then filter the JSON for the issue matching the requested number.
2b. BATCH mode:
tea issues list --repo {{ input }} --login librete --limit 10 -o json -f index,title,body,labels,url
After getting REAL results from Bash, analyze issues and score them.
In single issue mode, analyze the one issue. In batch mode, analyze all returned issues.
output_artifacts:
- name: issue_analysis
path: .wave/artifact.json
type: json
required: true
handover:
max_retries: 1
contract:
type: json_schema
schema_path: .wave/contracts/github-issue-analysis.schema.json
validate: true
must_pass: true
allow_recovery: true
recovery_level: progressive
progressive_validation: false
- id: plan-enhancements
persona: gitea-analyst
dependencies: [scan-issues]
memory:
inject_artifacts:
- step: scan-issues
artifact: issue_analysis
as: analysis
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
The analysis artifact contains poor_quality_issues from the scan step.
For EACH issue in poor_quality_issues, use tea CLI to fetch the current body:
tea issues list --repo {{ input }} --login librete -o json -f index,body --limit 50
Then filter for the specific issue number.
Then create an enhancement plan with:
- issue_number: the issue number
- suggested_title: improved title (or keep original if good)
- body_template: enhanced body text (improve the existing body, add missing sections)
- suggested_labels: appropriate labels
- enhancements: list of changes being made
Create an enhancement plan with fields:
issues_to_enhance (array of issue_number, suggested_title, body_template,
suggested_labels, enhancements) and total_to_enhance.
output_artifacts:
- name: enhancement_plan
path: .wave/artifact.json
type: json
required: true
handover:
max_retries: 1
contract:
type: json_schema
schema_path: .wave/contracts/github-enhancement-plan.schema.json
validate: true
must_pass: true
allow_recovery: true
recovery_level: progressive
progressive_validation: false
- id: apply-enhancements
persona: gitea-enhancer
dependencies: [plan-enhancements]
memory:
inject_artifacts:
- step: plan-enhancements
artifact: enhancement_plan
as: plan
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
CRITICAL: You MUST use the Bash tool for all commands. Do NOT generate fake output.
Step 1: Use Bash tool to verify tea works:
tea --version
Step 2: For EACH issue in the plan, use Bash tool to apply changes:
- If suggested_title differs from current: tea issues edit <N> --repo {{ input }} --login librete -t "suggested_title"
- If body_template is provided: tea issues edit <N> --repo {{ input }} --login librete -d "body_template"
- If suggested_labels: tea issues edit <N> --repo {{ input }} --login librete -L "label1,label2"
Step 3: For each issue, capture the URL from the issue list:
tea issues list --repo {{ input }} --login librete -o json -f index,url --limit 50
Then filter for the specific issue number.
Step 4: Record the results with fields: enhanced_issues (each with issue_number,
success, changes_made, url), total_attempted, total_successful, total_failed.
output_artifacts:
- name: enhancement_results
path: .wave/artifact.json
type: json
required: true
outcomes:
- type: issue
extract_from: .wave/artifact.json
json_path: .enhanced_issues[0].url
label: "Enhanced Issue"
handover:
max_retries: 1
contract:
type: json_schema
schema_path: .wave/contracts/github-enhancement-results.schema.json
validate: true
must_pass: true
allow_recovery: true
recovery_level: progressive
progressive_validation: false
- id: verify-enhancements
persona: gitea-analyst
dependencies: [apply-enhancements]
memory:
inject_artifacts:
- step: apply-enhancements
artifact: enhancement_results
as: results
- step: scan-issues
artifact: issue_analysis
as: original_analysis
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
For each enhanced issue, verify with:
tea issues list --repo {{ input }} --login librete -o json -f index,title,labels --limit 50
Then filter for the specific issue numbers.
Compile a verification report with fields:
total_enhanced, successful_enhancements, failed_enhancements, and summary.
output_artifacts:
- name: verification_report
path: .wave/artifact.json
type: json
required: true
handover:
max_retries: 1
contract:
type: json_schema
schema_path: .wave/contracts/github-verification-report.schema.json
validate: true
must_pass: true
allow_recovery: true
recovery_level: progressive
progressive_validation: false


@@ -0,0 +1,182 @@
kind: WavePipeline
metadata:
name: gt-issue-update
description: "Refresh a stale Gitea issue by comparing it against recent codebase changes"
release: true
input:
source: cli
example: "public/librenotes 45 -- acceptance criteria are outdated after the worktree refactor"
schema:
type: string
description: "owner/repo number [-- optional criticism or direction]"
steps:
- id: gather-context
persona: gitea-analyst
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
MANDATORY: You MUST call the Bash tool. NEVER say "tea CLI not installed" without trying.
Input: {{ input }}
Parse the input:
- Split on " -- " to separate the repo+number from optional criticism.
- The first part is "<owner/repo> <number>". Extract REPO (first token) and NUMBER (second token).
- If there is text after " -- ", that is the user's CRITICISM about what's wrong with the issue.
- If there is no " -- ", criticism is empty.
Execute these commands using the Bash tool:
1. tea --version
2. Fetch the full issue:
tea issues list --repo REPO --login librete -o json -f index,title,body,labels,state,url --limit 50
Then filter the JSON for the issue matching NUMBER.
3. Get commits since the issue was created (cap at 100):
git log --since="<created_at>" --oneline -100
4. Scan the issue body for file path references (anything matching patterns like
`internal/...`, `cmd/...`, `.wave/...`, or backtick-quoted paths).
For each referenced file, check if it still exists using `ls -la <path>`.
5. Read CLAUDE.md for current project context:
Read the file CLAUDE.md from the repository root.
After gathering ALL data, produce a JSON result matching the contract schema.
output_artifacts:
- name: issue_context
path: .wave/artifact.json
type: json
required: true
handover:
max_retries: 1
contract:
type: json_schema
schema_path: .wave/contracts/issue-update-context.schema.json
validate: true
must_pass: true
allow_recovery: true
recovery_level: progressive
progressive_validation: false
- id: draft-update
persona: gitea-analyst
dependencies: [gather-context]
memory:
inject_artifacts:
- step: gather-context
artifact: issue_context
as: context
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
MANDATORY: You MUST call the Bash tool for any commands. NEVER generate fake output.
The context artifact contains the gathered issue context.
Your task: Compare the original issue against the codebase changes and draft an updated version.
Step 1: Analyze each section of the issue body. Classify each as:
- STILL_VALID: Content is accurate and up-to-date
- OUTDATED: Content references old behavior, removed files, or superseded patterns
- INCOMPLETE: Content is partially correct but missing recent developments
- WRONG: Content is factually incorrect given current codebase state
Step 2: If there is user criticism (non-empty "criticism" field), address EVERY point raised.
The criticism takes priority — it represents what the issue author thinks is wrong.
Step 3: Draft the updated issue:
- Preserve sections classified as STILL_VALID (do not rewrite what works)
- Rewrite OUTDATED and WRONG sections to reflect current reality
- Expand INCOMPLETE sections with missing information
- If the title needs updating, draft a new title
- Append a "---\n**Changes since original**" section at the bottom listing what changed and why
Step 4: If file paths in the issue body are now missing (from referenced_files.missing),
update or remove those references.
Produce a JSON result matching the contract schema.
output_artifacts:
- name: update_draft
path: .wave/artifact.json
type: json
required: true
handover:
max_retries: 1
contract:
type: json_schema
schema_path: .wave/contracts/issue-update-draft.schema.json
validate: true
must_pass: true
allow_recovery: true
recovery_level: progressive
progressive_validation: false
- id: apply-update
persona: gitea-enhancer
dependencies: [draft-update]
memory:
inject_artifacts:
- step: draft-update
artifact: update_draft
as: draft
- step: gather-context
artifact: issue_context
as: context
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
CRITICAL: You MUST use the Bash tool for all commands. Do NOT generate fake output.
Step 1: Use Bash tool to verify tea works:
tea --version
Step 2: Extract the repo and issue number from the available artifacts.
Step 3: Apply the update:
- If title_changed is true:
tea issues edit <NUMBER> --repo <REPO> --login librete -t "<updated_title>"
- Write the updated_body to a temp file, then apply it:
Write updated_body to /tmp/issue-body.md
tea issues edit <NUMBER> --repo <REPO> --login librete -d "$(cat /tmp/issue-body.md)"
- Clean up /tmp/issue-body.md after applying.
Step 4: Verify the update was applied:
tea issues list --repo <REPO> --login librete -o json -f index,title,body,url --limit 50
Then filter for the specific issue number.
Compare the returned title and body against what was intended. Flag any discrepancies.
Step 5: Record the results as a JSON object matching the contract schema.
output_artifacts:
- name: update_result
path: .wave/artifact.json
type: json
required: true
outcomes:
- type: issue
extract_from: .wave/artifact.json
json_path: .url
label: "Updated Issue"
handover:
max_retries: 1
contract:
type: json_schema
schema_path: .wave/contracts/issue-update-result.schema.json
validate: true
must_pass: true
allow_recovery: true
recovery_level: progressive
progressive_validation: false
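The input split described in the gather-context step can be sketched in plain POSIX shell (the sample input mirrors the pipeline's `example` field):

```shell
input="public/librenotes 45 -- acceptance criteria are outdated after the worktree refactor"

# Split off the optional criticism after " -- "
case "$input" in
  *" -- "*) head="${input%% -- *}"; criticism="${input#* -- }" ;;
  *)        head="$input"; criticism="" ;;
esac

# First token is the repo, second is the issue number
repo="${head%% *}"
number="${head##* }"
echo "$repo $number"
echo "$criticism"
```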


@@ -0,0 +1,67 @@
You are creating a pull request for the implemented Gitea issue.
Input: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by the
plan step and is already checked out. All git operations here are isolated from
the main working tree.
Read the issue assessment artifact to find the issue number, repository, branch name, and issue URL.
## SAFETY: Do NOT Modify the Working Tree
This step MUST NOT run `git checkout`, `git stash`, or any command that changes
the current branch or working tree state. The branch was created by the plan step
and already contains the commits from the implement step — just push it and create the PR.
## Instructions
### Step 1: Load Context
From the issue assessment artifact, extract:
- Issue number and title
- Repository (`owner/repo`)
- Branch name
- Issue URL
### Step 2: Push the Branch
Push the feature branch without checking it out:
```bash
git push -u origin <BRANCH_NAME>
```
### Step 3: Create Pull Request
Create the PR using `tea pulls create` with `--head` to target the branch. The PR description MUST include `Closes #<NUMBER>` to auto-close the issue on merge.
```bash
tea pulls create --repo <OWNER/REPO> --login librete --head <BRANCH_NAME> -t "<concise title>" -d "$(cat <<'PRBODY'
## Summary
<3-5 bullet points describing the changes>
Closes #<ISSUE_NUMBER>
## Changes
<list of key files changed and why>
## Test Plan
<how the changes were validated>
PRBODY
)"
```
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT run `git checkout`, `git stash`, or any branch-switching commands
- The PR description MUST contain `Closes #<NUMBER>` to link to the issue
- Do NOT include Co-Authored-By or AI attribution in commits
## Output
Produce a JSON status report matching the injected output schema.


@@ -0,0 +1,81 @@
You are fetching a Gitea issue and assessing whether it has enough detail to implement.
Input: {{ input }}
The input format is `owner/repo number` (e.g. `public/librenotes 42`).
## Working Directory
You are running in an isolated Wave workspace. The `tea` CLI works from any
directory when using the `--repo` and `--login` flags, so no directory change is needed.
## Instructions
### Step 1: Parse Input
Extract the repository (`owner/repo`) and issue number from the input string.
### Step 2: Fetch Issue
Use the `tea` CLI to fetch issues and filter for the target:
```bash
tea issues list --repo <OWNER/REPO> --login librete -o json -f index,title,body,labels,state,url --limit 50
```
Then filter the JSON output for the issue matching the requested number.
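The parse in Step 1 can be sketched with POSIX parameter expansion (the sample input mirrors the documented format):

```shell
input="public/librenotes 42"
repo="${input%% *}"     # everything before the first space -> "public/librenotes"
number="${input##* }"   # everything after the last space  -> "42"
echo "$repo $number"
```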
### Step 3: Assess Implementability
Evaluate the issue against these criteria:
1. **Clear description**: Does the issue describe what needs to change? (not just "X is broken")
2. **Sufficient context**: Can you identify which code/files are affected?
3. **Testable outcome**: Are there acceptance criteria, or can you infer them from the description?
Score the issue 0-100:
- **80-100**: Well-specified, clear requirements, acceptance criteria present
- **60-79**: Adequate detail, some inference needed but feasible
- **40-59**: Marginal — missing key details but core intent is clear
- **0-39**: Too vague to implement — set `implementable` to `false`
### Step 4: Determine Skip Steps
Based on the issue quality, decide which speckit steps can be skipped:
- Issues with detailed specs can skip `specify`, `clarify`, `checklist`, `analyze`
- Issues with moderate detail might skip `specify` and `clarify` only
- Vague issues should skip nothing (but those should fail the assessment)
### Step 5: Generate Branch Name
Create a branch name using the pattern `<NNN>-<short-name>` where:
- `<NNN>` is the issue number zero-padded to 3 digits
- `<short-name>` is 2-3 words from the issue title, kebab-case
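One way to sketch this naming pattern in shell (the sample title is hypothetical):

```shell
issue=42
title="Fix stale worktree cleanup"

# Lowercase, replace non-alphanumerics with "-", keep the first three words
short=$(printf '%s' "$title" \
  | tr '[:upper:]' '[:lower:]' \
  | tr -cs 'a-z0-9' '-' \
  | cut -d- -f1-3)

# Zero-pad the issue number to three digits
branch=$(printf '%03d-%s' "$issue" "$short")
echo "$branch"
```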
### Step 6: Assess Complexity
Estimate implementation complexity:
- **trivial**: Single file change, obvious fix (typo, config tweak)
- **simple**: 1-3 files, straightforward logic change
- **medium**: 3-10 files, new feature with tests
- **complex**: 10+ files, architectural changes, cross-cutting concerns
## CRITICAL: Implementability Gate
If the issue does NOT have enough detail to implement:
- Set `"implementable": false` in the output
- This will cause the contract validation to fail, aborting the pipeline
- Include `missing_info` listing what specific information is needed
- Include a `summary` explaining why the issue cannot be implemented as-is
If the issue IS implementable:
- Set `"implementable": true`
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT modify the issue — this is read-only assessment
## Output
Produce a JSON assessment matching the injected output schema.


@@ -0,0 +1,87 @@
You are implementing a Gitea issue according to the plan and task breakdown.
Input: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by the
plan step and is already checked out. All git operations here are isolated from
the main working tree.
## Instructions
### Step 1: Load Context
1. Get the issue details and branch name from the issue assessment artifact
2. Get the task breakdown, file changes, and feature directory from the plan artifact
### Step 2: Read Plan Files
Navigate to the feature directory and read:
- `spec.md` — the full specification
- `plan.md` — the implementation plan
- `tasks.md` — the phased task breakdown
### Step 3: Execute Implementation
Follow the task breakdown phase by phase:
**Setup first**: Initialize project structure, dependencies, configuration
**Tests before code (TDD)**:
- Write tests that define expected behavior
- Run tests to confirm they fail for the right reason
- Implement the code to make tests pass
**Core development**: Implement the changes specified in the plan
**Integration**: Wire components together, update imports, middleware
**Polish**: Edge cases, error handling, documentation updates
### Step 4: Validate Between Phases
After each phase, run:
```bash
go test -race ./...
```
If tests fail, fix the issue before proceeding to the next phase.
### Step 5: Mark Completed Tasks
As you complete each task, mark it as `[X]` in `tasks.md`.
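A sketch of that bookkeeping (the task label is a placeholder, and `sed -i` as written assumes GNU sed):

```shell
# Sample tasks file with one open task
printf '%s\n' '- [ ] Task 1.1: Description' > tasks.md

# Flip the checkbox for the completed task
sed -i 's/^- \[ \] Task 1.1:/- [X] Task 1.1:/' tasks.md
cat tasks.md
```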
### Step 6: Final Validation
After all tasks are complete:
1. Run `go test -race ./...` one final time
2. Verify all tasks in `tasks.md` are marked complete
3. Stage and commit all changes:
```bash
git add -A
git reset HEAD -- .wave/artifacts .wave/output .claude CLAUDE.md 2>/dev/null || true
git commit -m "feat: implement #<ISSUE_NUMBER> — <short description>"
```
Commit changes to the worktree branch.
## Agent Usage — USE UP TO 6 AGENTS
Maximize parallelism with up to 6 Task agents for independent work:
- Agents 1-2: Setup and foundational tasks (Phase 1-2)
- Agents 3-4: Core implementation tasks (parallelizable [P] tasks)
- Agent 5: Test writing and validation
- Agent 6: Integration and polish tasks
Coordinate agents to respect task dependencies:
- Sequential tasks (no [P] marker) must complete before dependents start
- Parallel tasks [P] affecting different files can run simultaneously
- Run test validation between phases
## Error Handling
- If a task fails, halt dependent tasks but continue independent ones
- Provide clear error context for debugging
- If tests fail, fix the issue before proceeding to the next phase


@@ -0,0 +1,90 @@
You are creating an implementation plan for a Gitea issue.
Input: {{ input }}
## Working Directory
You are running in an **isolated git worktree** checked out at `main` (detached HEAD).
Your working directory IS the project root. All git operations here are isolated
from the main working tree and will not affect it.
Use `create-new-feature.sh` to create the feature branch from this clean starting point.
## Instructions
### Step 1: Read Assessment
From the issue assessment artifact, extract:
- Issue number, title, body, and repository
- Branch name from the assessment
- Complexity estimate
- Which speckit steps were skipped
### Step 2: Create Feature Branch
Use the `create-new-feature.sh` script to create a properly numbered branch:
```bash
.specify/scripts/bash/create-new-feature.sh --json --number <ISSUE_NUMBER> --short-name "<SHORT_NAME>" "<ISSUE_TITLE>"
```
If the branch already exists (e.g. from a resume), check it out instead:
```bash
git checkout <BRANCH_NAME>
```
### Step 3: Write Spec from Issue
In the feature directory (e.g. `specs/<BRANCH_NAME>/`), create `spec.md` with:
- Issue title as heading
- Full issue body
- Labels and metadata
- Any acceptance criteria extracted from the issue
- Link back to the original issue URL
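A heredoc sketch of the scaffold (branch name, title, label, and URL are placeholders; `<full issue body>` stands in for the fetched text):

```shell
branch="042-fix-stale-worktree"
mkdir -p "specs/$branch"

# Scaffold spec.md from the issue fields
cat > "specs/$branch/spec.md" <<'EOF'
# Fix stale worktree cleanup

<full issue body>

**Labels:** bug
**Source:** https://gitea.example/public/librenotes/issues/42
EOF
ls "specs/$branch"
```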
### Step 4: Create Implementation Plan
Write `plan.md` in the feature directory with:
1. **Objective**: What the issue asks for (1-2 sentences)
2. **Approach**: High-level strategy
3. **File Mapping**: Which files need to be created/modified/deleted
4. **Architecture Decisions**: Any design choices made
5. **Risks**: Potential issues and mitigations
6. **Testing Strategy**: What tests are needed
### Step 5: Create Task Breakdown
Write `tasks.md` in the feature directory with a phased breakdown:
```markdown
# Tasks
## Phase 1: Setup
- [ ] Task 1.1: Description
- [ ] Task 1.2: Description
## Phase 2: Core Implementation
- [ ] Task 2.1: Description [P] (parallelizable)
- [ ] Task 2.2: Description [P]
## Phase 3: Testing
- [ ] Task 3.1: Write unit tests
- [ ] Task 3.2: Write integration tests
## Phase 4: Polish
- [ ] Task 4.1: Documentation updates
- [ ] Task 4.2: Final validation
```
Mark parallelizable tasks with `[P]`.
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT start implementation — only planning in this step
- Do NOT use WebSearch — all information is in the issue and codebase
## Output
Produce a JSON status report matching the injected output schema.