fix(ci): correct image digest separator
278
.wave/pipelines/audit-closed.yaml
Normal file
@@ -0,0 +1,278 @@
kind: WavePipeline
metadata:
  name: audit-closed
  description: "Audit closed GitHub issues and merged PRs for implementation fidelity"
  release: true

skills:
  - "{{ project.skill }}"

requires:
  tools:
    - gh

input:
  source: cli
  example: "last 30 days -- audit recent closed work"
  schema:
    type: string
    description: "Audit scope: empty for full audit, time range ('last 30 days', 'since 2026-01-01'), or label filter ('label:enhancement')"

steps:
  - id: inventory
    persona: "gitea-analyst"
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Fetch all closed issues and merged PRs for the audit inventory.

        ## Detect Repository

        Run: `{{ forge.cli_tool }} repo view --json nameWithOwner --jq .nameWithOwner`

        ## Parse Scope

        Input: {{ input }}

        Determine the scope mode from the input:

        - **Empty or blank input**: Full audit — fetch ALL closed issues and merged PRs
        - **Time range** (e.g., "last 30 days", "last 7 days", "since 2026-01-01"):
          - For "last N days": calculate the date N days ago, use `closed:>YYYY-MM-DD` / `merged:>YYYY-MM-DD`
          - For "since YYYY-MM-DD": use `closed:>YYYY-MM-DD` / `merged:>YYYY-MM-DD`
        - **Label filter** (e.g., "label:enhancement", "label:bug"):
          - Extract the label name after "label:"
          - Add `--label <name>` to both issue and PR queries
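        For example, the "last N days" cutoff can be computed like this (a sketch; the GNU-then-BSD `date` fallback is an assumption about the runner image):

        ```bash
        # Hypothetical helper: turn "last N days" into a search qualifier.
        days=30
        # Try GNU date first, fall back to BSD date syntax.
        cutoff=$(date -u -d "${days} days ago" +%Y-%m-%d 2>/dev/null \
          || date -u -v-"${days}"d +%Y-%m-%d)
        echo "closed:>${cutoff}"
        ```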

        ## Fetch Closed Issues

        ```bash
        {{ forge.cli_tool }} issue list --state closed --json number,title,body,labels,closedAt,stateReason,url \
          --limit 500 [--search "closed:>YYYY-MM-DD"] [--label <name>]
        ```

        Filter out issues where `stateReason` is `NOT_PLANNED` — these represent intentional non-implementation and should be excluded.

        If the result count equals the limit (500), make additional paginated calls to fetch the remaining items.

        ## Fetch Merged PRs

        ```bash
        {{ forge.cli_tool }} {{ forge.pr_command }} list --state merged --json number,title,body,files,mergeCommit,closedAt,url \
          --limit 500 [--search "merged:>YYYY-MM-DD"] [--label <name>]
        ```

        If the result count equals the limit, paginate for more.

        ## Build Inventory

        For each closed issue:
        - `number`, `type` ("issue"), `title`, `url`, `body`, `labels` (array)
        - `closed_at`: ISO 8601 timestamp
        - `linked_prs`: Search the body for "Fixes #N", "Closes #N", or PR cross-references
        - `acceptance_criteria`: Extract from the issue body by looking for checklist patterns (`- [ ]`, `- [x]`) or sections titled "Acceptance Criteria", "Requirements", or similar headers
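        The `linked_prs` extraction can be sketched as below (the closing-keyword list is an assumption, not the full set GitHub recognizes, and the body text is illustrative):

        ```bash
        # Hypothetical sketch: pull "#N" references after closing keywords.
        body='This change is done. Fixes #12 and Closes #34.'
        printf '%s\n' "$body" \
          | grep -oEi '(fixes|closes|resolves) #[0-9]+' \
          | grep -oE '[0-9]+'
        # → 12 and 34, one per line
        ```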

        For each merged PR:
        - `number`, `type` ("pr"), `title`, `url`, `body`, `labels` (array)
        - `merged_at`: ISO 8601 timestamp (from `closedAt`)
        - `merge_commit`: the `mergeCommit` SHA
        - `files_changed`: count of modified files from the PR

        ## Output

        Write the inventory as structured JSON to `.wave/artifacts/inventory.json`.

        The JSON must include:
        - `scope`: object with `mode` ("full", "time_range", or "label"), `filter` (the raw scope expression), `repository` (owner/repo)
        - `items`: array of inventory items (issues and PRs combined)
        - `summary`: object with `total_issues`, `total_prs`, `excluded_not_planned` counts
        - `timestamp`: current ISO 8601 timestamp
    output_artifacts:
      - name: inventory
        path: .wave/artifacts/inventory.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/artifacts/inventory.json
        schema_path: .wave/contracts/audit-inventory.schema.json
        on_failure: retry

  - id: audit
    persona: "gitea-analyst"
    model: claude-haiku
    dependencies: [inventory]
    memory:
      inject_artifacts:
        - step: inventory
          artifact: inventory
          as: inventory
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Audit each closed issue and merged PR against the current codebase to verify implementation fidelity.

        ## Read Inventory

        Read the injected inventory artifact to get the list of items to audit.

        ## Verification Methodology

        For each inventory item, perform static analysis verification:

        1. **Read the item description** — identify what should exist in the codebase: specific functions, types, handlers, configuration options, CLI flags, test files, documentation
        2. **Check file existence** — use Glob to verify that referenced files still exist at HEAD
        3. **Search for key artifacts** — use Grep to find function names, type definitions, handler registrations, and other code artifacts mentioned in the issue/PR
        4. **Read relevant code** — use Read to verify the logic matches the described behavior
        5. **Check test coverage** — verify related test files exist and contain assertions matching the acceptance criteria
        6. **Detect regressions** — run `git log --oneline --all -- <file>` to check whether key files were modified after implementation. Run `git log --grep="Revert" --oneline` to find revert commits that may have undone the work

        ## Classification Rules

        Assign exactly ONE fidelity category per item:

        - **fully_implemented**: All referenced files exist, key functions/types are present via Grep, logic reads match the described behavior, related tests exist
        - **partial**: Some but not all acceptance criteria have matching code evidence. For each partial finding, list WHICH criteria passed and WHICH did not
        - **regressed**: Was implemented but later broken or reverted. Include the revert commit SHAs and affected file paths as evidence
        - **obsolete**: Referenced files have been deleted at HEAD, or the codebase has diverged significantly enough that the item no longer applies
        - **not_implemented**: No evidence of implementation; the issue or PR describes work that does not appear in the codebase

        ## Evidence Requirements

        Every finding MUST include evidence:
        - For **fully_implemented**: file paths confirming existence, Grep matches for key code artifacts
        - For **partial**: which criteria passed (with file:line references) and which did not (what was searched for but not found)
        - For **regressed**: revert commit SHAs, `git log` output showing modification/deletion after the implementing PR
        - For **obsolete**: evidence that the files no longer exist or the architecture has changed
        - For **not_implemented**: a description of what was expected to exist but does not

        ## Edge Cases

        - **Issues with no traceable code changes**: Mark as "not_implemented" with a note explaining the lack of implementation evidence
        - **Issues referencing deleted files**: Mark as "obsolete" with evidence that the referenced code no longer exists at HEAD
        - **Large inventories**: Focus on the most impactful items first — non-trivial issues with acceptance criteria. If context limits approach, prioritize quality of analysis over quantity

        ## Output

        Write the findings as structured JSON to `.wave/artifacts/audit-report.json`.

        The JSON must include:
        - `findings`: array of finding objects, each with:
          - `item_number`: issue or PR number
          - `item_type`: "issue" or "pr"
          - `item_url`: GitHub URL
          - `title`: item title
          - `status`: one of (fully_implemented, partial, regressed, obsolete, not_implemented)
          - `evidence`: array of strings describing the evidence found
          - `unmet_criteria`: array of strings describing criteria not met (for partial/regressed)
          - `remediation`: string describing the remediation needed (empty for fully_implemented/obsolete)
        - `summary`: object with counts by status (fully_implemented, partial, regressed, obsolete, not_implemented)
        - `timestamp`: current ISO 8601 timestamp
    output_artifacts:
      - name: audit-report
        path: .wave/artifacts/audit-report.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/artifacts/audit-report.json
        schema_path: .wave/contracts/audit-findings.schema.json
        on_failure: retry

  - id: triage
    persona: navigator
    model: claude-haiku
    dependencies: [audit]
    memory:
      inject_artifacts:
        - step: inventory
          artifact: inventory
          as: inventory
        - step: audit
          artifact: audit-report
          as: audit_report
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Compose a triage summary from the audit findings with prioritized remediation recommendations.

        ## Read Inputs

        Read the injected inventory artifact to get scope and repository metadata.
        Read the injected audit report artifact to get the per-item verification results.

        ## Group Findings by Status

        Organize all findings into groups by implementation status, in this order:
        1. **regressed** — highest priority, was working but now broken
        2. **partial** — some criteria unmet
        3. **not_implemented** — no implementation found
        4. **obsolete** — no longer applicable
        5. **fully_implemented** — fully intact (included for reference)

        ## Summary Statistics

        Calculate counts for each status:
        - `fully_implemented`: number of fully verified items
        - `partial`: number with some criteria unmet
        - `regressed`: number that were broken or reverted
        - `obsolete`: number no longer applicable
        - `not_implemented`: number with no implementation evidence
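        Conceptually this is a single counting pass; it can be sketched with `awk` over one status per line (the input shape here is illustrative — the real artifact is JSON):

        ```bash
        # Hypothetical sketch: count findings per status.
        printf '%s\n' regressed partial partial fully_implemented \
          | awk '{ n[$1]++ } END { for (s in n) print s, n[s] }' \
          | sort
        # → fully_implemented 1
        # → partial 2
        # → regressed 1
        ```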

        ## Prioritized Remediation Actions

        Generate an ordered list of remediation actions for non-fully-implemented items. Priority ranking:

        1. **regressed** items (highest priority — was working, now broken)
        2. **partial** items with many unmet criteria (sort by unmet count descending)
        3. **partial** items with fewer unmet criteria
        4. **not_implemented** items (moderate priority — the work was never done)
        5. **obsolete** items are EXCLUDED from actions — they are intentionally non-applicable

        ## Output Format

        Write a markdown summary to `.wave/artifacts/triage-report.md` with:

        1. **Audit Scope** — description of what was audited (time range, labels, etc.)
        2. **Summary Statistics** — counts by status as a table or list
        3. **Regressed Items** (if any) — bulleted list with issue numbers, titles, revert commit SHAs, and remediation steps
        4. **Partial Implementation Items** (if any) — bulleted list with issue numbers, titles, which criteria failed, and remediation steps
        5. **Not Implemented Items** (if any) — bulleted list with issue numbers, titles, and what would need to be done
        6. **Obsolete Items** — count only, with an explanation that these are no longer applicable
        7. **Fully Implemented Items** — count only, confirming fidelity
        8. **Recommended Next Steps** — actionable recommendations for the team

        All issue/PR references should be clickable links to their GitHub URLs.
    output_artifacts:
      - name: triage-report
        path: .wave/artifacts/triage-report.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/artifacts/triage-report.md
74
.wave/pipelines/audit-consolidate.yaml
Normal file
@@ -0,0 +1,74 @@
kind: WavePipeline
metadata:
  name: audit-consolidate
  description: "Detect redundant implementations and architectural drift"
  release: true

skills:
  - software-design

input:
  source: cli
  example: "internal/pipeline — check for redundant patterns"
  schema:
    type: string
    description: "Package or directory scope to analyze, or empty for full codebase"

steps:
  - id: scan
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Scan for redundant implementations and architectural drift.

        Scope: {{ input }}

        ## What to Look For

        1. **Duplicate logic**: Functions doing the same thing in different packages.
           Search for similar function signatures and bodies.

        2. **Parallel abstractions**: Multiple interfaces or types representing
           the same concept (e.g., two different error types for the same domain).

        3. **Inconsistent patterns**: The same operation done differently in different
           places (e.g., file reading with os.ReadFile in one place and io.ReadAll in another).

        4. **Dead abstractions**: Interfaces with only one implementation,
           wrapper types that add no value.

        5. **Package boundary violations**: Packages reaching into each other's
           internals instead of using proper interfaces.

        6. **Naming inconsistencies**: The same concept with different names across
           packages (e.g., "workspace" vs "workdir" vs "cwd").

        ## Analysis

        For each finding:
        - List the affected file:line locations
        - Explain what's redundant or inconsistent
        - Propose a consolidation strategy
        - Assess effort (trivial/small/medium/large) and risk

        Produce a structured assessment.
    output_artifacts:
      - name: assessment
        path: .wave/output/assessment.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/assessment.json
        schema_path: .wave/contracts/improvement-assessment.schema.json
        on_failure: retry
200
.wave/pipelines/audit-dead-code-issue.yaml
Normal file
@@ -0,0 +1,200 @@
kind: WavePipeline
metadata:
  name: audit-dead-code-issue
  description: "Scan codebase for dead code and create a GitHub issue with findings"
  release: true

requires:
  tools:
    - go
    - gh

skills:
  - software-design

input:
  source: cli
  example: "scan all packages for dead code and report findings"

steps:
  - id: scan
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Scan for dead or redundant code: {{ input }}

        ## What to Look For

        1. **Unused exports**: Exported functions, types, constants, or variables
           that are never referenced outside their package.

        2. **Unreachable code**: Code after return/panic, impossible branches,
           dead switch cases.

        3. **Orphaned files**: Files not imported by any other file in the project.

        4. **Redundant code**: Duplicate functions, copy-paste blocks,
           wrappers that add no value.

        5. **Stale tests**: Tests for functions that no longer exist,
           or tests that test nothing meaningful.

        6. **Unused dependencies**: Imports that are no longer needed.

        7. **Commented-out code**: Large blocks of commented code that
           should be deleted (git has history).

        8. **Duplicate signatures**: Functions with identical signatures across
           packages that could be consolidated.

        ## Verification

        For each finding, verify it's truly dead:
        - Grep for all references across the entire codebase
        - Check for reflect-based or string-based usage
        - Check if it's part of an interface implementation
        - Check for build tag conditional compilation

        Produce a structured JSON result matching the contract schema.
        Only include findings with high or medium confidence. Skip low confidence.
    output_artifacts:
      - name: scan_results
        path: .wave/output/dead-code-scan.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/dead-code-scan.json
        schema_path: .wave/contracts/dead-code-scan.schema.json
        on_failure: retry

  - id: compose-report
    persona: navigator
    model: claude-haiku
    dependencies: [scan]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: findings
    exec:
      type: prompt
      source: |
        Compose a dead code report as a GitHub-ready markdown file.

        ## Check for Findings

        If the scan found zero findings:
        - Write a short "No dead code found" message as the report
        - Write the issue result with skipped=true and reason="clean"

        ## Compose the Report

        Write the report as markdown:

        ```
        ## Dead Code Report

        **Scan date**: <timestamp from findings>
        **Findings**: <total_count>

        ### Summary by Type
        | Type | Count |
        |------|-------|
        | unused_export | N |
        | unreachable | N |
        | ... | N |

        ### Summary by Suggested Action
        | Action | Count |
        |--------|-------|
        | remove | N |
        | consolidate | N |
        | investigate | N |

        ### Task List

        For each finding (sorted by confidence, high first):
        - [ ] **[DC-001]** (`type`, `confidence`) `location` — description
          Action: `suggested_action` | Safe to remove: `safe_to_remove`

        ---
        *Generated by [Wave](https://github.com/re-cinq/wave) dead-code-issue pipeline*
        ```
    output_artifacts:
      - name: report
        path: .wave/output/report.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/report.md

  - id: create-issue
    persona: craftsman
    dependencies: [compose-report]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: findings
        - step: compose-report
          artifact: report
          as: report
    exec:
      type: prompt
      source: |
        Create a GitHub issue from the dead code report.

        If the report says "No dead code found", skip issue creation and exit
        with skipped=true in the result JSON.

        ## Detect Repository

        Run: `{{ forge.cli_tool }} repo view --json nameWithOwner --jq .nameWithOwner`

        ## Create the Issue

        ```bash
        {{ forge.cli_tool }} issue create \
          --title "chore: dead code report" \
          --body-file .wave/output/report.md \
          --label "code-quality"
        ```

        If the `code-quality` label doesn't exist, create the issue without labels.
        If any `{{ forge.cli_tool }}` command fails, log the error and continue.

        ## Capture Result

        Write a JSON status report matching the contract schema.
        Include the issue URL, number, title, and finding count from the scan results.
    output_artifacts:
      - name: issue-result
        path: .wave/output/issue-result.json
        type: json
    retry:
      policy: aggressive
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/issue-result.json
        schema_path: .wave/contracts/dead-code-issue-result.schema.json
        must_pass: true
        on_failure: retry

outcomes:
  - type: issue
    extract_from: .wave/output/issue-result.json
    json_path: .issue.url
    label: "Dead Code Issue"
186
.wave/pipelines/audit-dead-code-review.yaml
Normal file
@@ -0,0 +1,186 @@
kind: WavePipeline
metadata:
  name: audit-dead-code-review
  description: "Scan PR-changed files for dead code and post a review comment"
  release: true

requires:
  tools:
    - go
    - gh

skills:
  - software-design

input:
  source: cli
  example: "https://github.com/re-cinq/wave/pull/42"

steps:
  - id: scan
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Scan for dead or redundant code in the files changed by: {{ input }}

        ## Scope

        Only scan files changed in the given pull request. Use `{{ forge.cli_tool }} {{ forge.pr_command }} diff` to
        identify the changed files, then analyze only those files for dead code.

        ## What to Look For

        1. **Unused exports**: Exported functions, types, constants, or variables
           that are never referenced outside their package.

        2. **Unreachable code**: Code after return/panic, impossible branches,
           dead switch cases.

        3. **Orphaned files**: Files not imported by any other file in the project.

        4. **Redundant code**: Duplicate functions, copy-paste blocks,
           wrappers that add no value.

        5. **Stale tests**: Tests for functions that no longer exist,
           or tests that test nothing meaningful.

        6. **Unused dependencies**: Imports that are no longer needed.

        7. **Commented-out code**: Large blocks of commented code that
           should be deleted (git has history).

        8. **Duplicate signatures**: Functions with identical signatures across
           packages that could be consolidated.

        ## Verification

        For each finding, verify it's truly dead:
        - Grep for all references across the entire codebase
        - Check for reflect-based or string-based usage
        - Check if it's part of an interface implementation
        - Check for build tag conditional compilation

        Produce a structured JSON result matching the contract schema.
        Only include findings with high or medium confidence. Skip low confidence.
    output_artifacts:
      - name: scan_results
        path: .wave/output/dead-code-scan.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/dead-code-scan.json
        schema_path: .wave/contracts/dead-code-scan.schema.json
        on_failure: retry

  - id: compose
    persona: summarizer
    model: claude-haiku
    dependencies: [scan]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: findings
    exec:
      type: prompt
      source: |
        Compose a dead code review comment from the scan findings.

        Read the injected findings and produce a markdown summary suitable for
        posting as a PR review comment.

        ## Format

        If zero findings:
        ```
        No dead code found in the changed files.
        ```

        If findings exist:
        ```
        ## Dead Code Review

        **Findings**: <total_count> items found in changed files

        ### Summary by Type
        | Type | Count |
        |------|-------|
        | ... | N |

        ### Findings

        For each finding (sorted by confidence, high first):
        - **[DC-001]** (`type`) `location` — description
          Suggested action: `suggested_action`

        ---
        *Generated by [Wave](https://github.com/re-cinq/wave) dead-code-review pipeline*
        ```

        Do NOT include a title/header line — the publish step adds one.
    output_artifacts:
      - name: review_comment
        path: .wave/output/review-comment.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/review-comment.md

  - id: publish
    persona: "gitea-commenter"
    dependencies: [compose]
    memory:
      inject_artifacts:
        - step: compose
          artifact: review_comment
          as: review_summary
    exec:
      type: prompt
      source: |
        Post the dead code review as a PR comment.

        The original input was: {{ input }}
        Extract the PR number or URL from the input.
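        A minimal sketch of that extraction, assuming the input is either a bare number or a GitHub-style PR URL ending in the number:

        ```bash
        # Hypothetical sketch: accept a bare PR number or a full PR URL.
        input="https://github.com/re-cinq/wave/pull/42"
        pr="${input##*/}"   # strips everything up to the last '/'; a bare number passes through
        echo "$pr"          # → 42
        ```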

        1. Write the review content to a temp file, then post it as a PR comment:

           cat > /tmp/dead-code-review-comment.md <<'REVIEW_EOF'
           ## Dead Code Review (Wave Pipeline)

           <review content>

           ---
           *Generated by [Wave](https://github.com/re-cinq/wave) dead-code-review pipeline*
           REVIEW_EOF
           {{ forge.cli_tool }} {{ forge.pr_command }} comment <PR_NUMBER_OR_URL> --body-file /tmp/dead-code-review-comment.md
    output_artifacts:
      - name: publish-result
        path: .wave/output/publish-result.json
        type: json
    retry:
      policy: aggressive
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/publish-result.json
        schema_path: .wave/contracts/gh-pr-comment-result.schema.json
        must_pass: true
        on_failure: retry

outcomes:
  - type: url
    extract_from: .wave/output/publish-result.json
    json_path: .comment_url
    label: "Dead Code Review Comment"
285
.wave/pipelines/audit-dead-code.yaml
Normal file
@@ -0,0 +1,285 @@
|
||||
kind: WavePipeline
|
||||
metadata:
|
||||
name: audit-dead-code
|
||||
description: "Find dead or redundant code, remove it, and commit to a feature branch"
|
||||
release: true
|
||||
|
||||
requires:
|
||||
tools:
|
||||
- go
|
||||
|
||||
skills:
|
||||
- software-design
|
||||
|
||||
input:
|
||||
source: cli
|
||||
example: "find and remove dead code in internal/pipeline"
|
||||
|
||||
steps:
|
||||
- id: scan
|
||||
persona: navigator
|
||||
model: claude-haiku
|
||||
workspace:
|
||||
mount:
|
||||
- source: ./
|
||||
target: /project
|
||||
mode: readonly
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Scan for dead or redundant code: {{ input }}
|
||||
|
||||
## Pre-Scan: Ensure Code is Up-to-Date
|
||||
|
||||
Before scanning, verify the local code matches the remote to avoid analyzing stale code:
|
||||
```bash
|
||||
git fetch origin
|
||||
```
|
||||
If the local HEAD is behind origin/main, warn in your output that findings may
|
||||
need re-verification against the latest main branch.
|
||||
|
||||
## What to Look For
|
||||
|
||||
1. **Unused exports**: Exported functions, types, constants, or variables
|
||||
that are never referenced outside their package.
|
||||
|
||||
2. **Unreachable code**: Code after return/panic, impossible branches,
|
||||
dead switch cases.
|
||||
|
||||
3. **Orphaned files**: Files not imported by any other file in the project.
|
||||
|
||||
4. **Redundant code**: Duplicate functions, copy-paste blocks,
|
||||
wrappers that add no value.
|
||||
|
||||
5. **Stale tests**: Tests for functions that no longer exist,
|
||||
or tests that test nothing meaningful.
|
||||
|
||||
6. **Unused dependencies**: Imports that are no longer needed.
|
||||
|
||||
7. **Commented-out code**: Large blocks of commented code that
|
||||
should be deleted (git has history).
|
||||
|
||||
## Verification
|
||||
|
||||
For each finding, verify it's truly dead:
|
||||
- Grep for all references across the entire codebase
|
||||
- Check for reflect-based or string-based usage
|
||||
- Check if it's part of an interface implementation
|
||||
- Check for build tag conditional compilation
|
||||
|
||||
Produce a structured JSON result matching the contract schema.
|
||||
Only include findings with high or medium confidence. Skip low confidence.
|
||||
output_artifacts:
|
||||
- name: scan_results
|
||||
path: .wave/output/dead-code-scan.json
|
||||
type: json
|
||||
retry:
|
||||
policy: patient
|
||||
max_attempts: 2
|
||||
handover:
|
||||
contract:
|
||||
type: json_schema
|
||||
source: .wave/output/dead-code-scan.json
|
||||
schema_path: .wave/contracts/dead-code-scan.schema.json
|
||||
on_failure: retry
|
||||
|
||||
- id: clean
|
||||
persona: craftsman
|
||||
dependencies: [scan]
|
||||
memory:
|
||||
inject_artifacts:
|
||||
- step: scan
|
||||
artifact: scan_results
|
||||
as: findings
|
||||
workspace:
|
||||
type: worktree
|
||||
branch: "chore/{{ pipeline_id }}"
|
||||
base: "origin/main"
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Remove the dead code on this isolated worktree branch.
|
||||
|
||||
The scan findings have been injected into your workspace. Read them first.
|
||||
|
||||
## Process
|
||||
|
||||
1. **Remove dead code** — ONLY high-confidence findings:
|
||||
- Start with unused imports (safest)
|
||||
- Then commented-out code blocks
|
||||
- Then unused exports
|
||||
- Then orphaned files
|
||||
- Skip anything with confidence=medium unless trivially safe
|
||||
- After each removal, verify the build still passes
|
||||
|
||||
2. Run the full test suite and fix any failures before committing.
|
||||
|
||||
3. **Commit**:
|
||||
```bash
|
||||
git add <specific-files>
|
||||
git commit -m "chore: remove dead code
|
||||
|
||||
Removed N items of dead code:
|
||||
- DC-001: <symbol> (unused export)
|
||||
- DC-002: <file> (orphaned file)
|
||||
..."
|
||||
```
|
||||
|
||||
If ANY test fails after a removal, revert that specific removal
|
||||
and continue with the next item.
    retry:
      policy: standard
      max_attempts: 3
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry

  - id: verify
    persona: reviewer
    dependencies: [clean]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: original_findings
    exec:
      type: prompt
      source: |
        Verify the dead code removal was safe.

        The original scan findings have been injected into your workspace. Read them first.

        Check:
        1. Were only high-confidence items removed?
        2. Are all tests still passing?
        3. Does the project still build cleanly?
        4. Were any false positives accidentally removed?
        5. Is the commit focused (no unrelated changes)?

        Produce a structured JSON verification report matching the contract schema.

        The `verdict` field MUST be either:
        - `"CLEAN"` — all removals are safe, tests pass, no false positives detected
        - `"NEEDS_REVIEW"` — potential issues found that require human review

        **Important**: The contract schema only accepts `"CLEAN"`. If you set verdict
        to `"NEEDS_REVIEW"`, contract validation will intentionally fail and the
        pipeline will halt before creating a PR. This is the desired safety behavior.
    output_artifacts:
      - name: verification
        path: .wave/output/verification.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/verification.json
        schema_path: .wave/contracts/dead-code-verification.schema.json
        on_failure: retry

  - id: create-pr
    persona: craftsman
    dependencies: [verify]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: findings
        - step: verify
          artifact: verification
          as: verification_report
    workspace:
      type: worktree
      branch: "chore/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Create a pull request for the dead code removal.

        ## Working Directory

        You are running in an **isolated git worktree** shared with previous pipeline steps.
        Your working directory IS the project root. The branch already exists from the
        clean step — just push it and create the PR.

        ## SAFETY: Do NOT Modify the Working Tree

        This step MUST NOT run `git checkout`, `git stash`, or any command that changes
        the current branch or working tree state.

        ## Instructions

        ### Step 1: Load Context

        The scan findings and verification report have been injected into your workspace.
        Read them both to understand what was found and the verification outcome.

        ### Step 2: Push the Branch

        ```bash
        git push -u origin HEAD
        ```

        ### Step 3: Create Pull Request

        ```bash
        {{ forge.cli_tool }} {{ forge.pr_command }} create --title "chore: remove dead code" --body "$(cat <<'PREOF'
        ## Summary

        Automated dead code removal based on static analysis scan.

        <summarize what was removed: N items, types, estimated lines saved>

        ## Verification

        <summarize verification report: CLEAN or NEEDS_REVIEW, test status>

        ## Removed Items

        <list each removed item with its ID, type, and location>

        ## Test Plan

        - Full test suite passed after each removal
        - Build verified clean after all removals
        - Auditor persona verified no false positives
        PREOF
        )"
        ```

        ### Step 4: Request Copilot Review (Best-Effort)

        ```bash
        {{ forge.cli_tool }} {{ forge.pr_command }} edit --add-reviewer "copilot" 2>/dev/null || true
        ```

        ## CONSTRAINTS

        - Do NOT spawn Task subagents — work directly in the main context
        - Do NOT run `git checkout`, `git stash`, or any branch-switching commands
        - Do NOT include Co-Authored-By or AI attribution in commits
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    retry:
      policy: aggressive
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
    outcomes:
      - type: pr
        extract_from: .wave/output/pr-result.json
        json_path: .pr_url
        label: "Pull Request"
.wave/pipelines/audit-doc.yaml (Normal file, 265 lines)
@@ -0,0 +1,265 @@
kind: WavePipeline
metadata:
  name: audit-doc
  description: "Pre-PR documentation consistency gate — scans changes, cross-references docs, and creates a GitHub issue with inconsistencies"
  release: true

skills:
  - software-design

input:
  source: cli
  example: "full -- scan all documentation for inconsistencies with the codebase"
  schema:
    type: string
    description: "Scan scope: empty for branch diff, 'full' for all files, or a git ref"

steps:
  - id: scan-changes
    persona: navigator
    model: claude-haiku
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Scan the repository to identify changed files and capture the current documentation state.

        ## Determine Scan Scope

        Input: {{ input }}

        - If the input is empty or blank: use `git log --name-status main...HEAD` to find files changed on the current branch vs main.
        - If the input is "full": skip the diff — treat ALL files as in-scope and scan all documentation.
        - Otherwise, treat the input as a git ref and use `git log --name-status <input>...HEAD`.

        Run `git log --oneline --name-status` with the appropriate range to get the list of changed files.
        If no commits are found (e.g. on main with no branch divergence), fall back to `git status --porcelain` for uncommitted changes.
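        The scope logic above can be sketched as (assumes `main` is the base branch,
        as stated in the scope rules):

        ```bash
        # Files changed on this branch relative to main; fall back to
        # uncommitted changes when the branch has not diverged.
        changed=$(git log --oneline --name-status main...HEAD)
        if [ -z "$changed" ]; then
          changed=$(git status --porcelain)
        fi
        printf '%s\n' "$changed"
        ```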
        ## Categorize Changed Files

        Sort each changed file into one of these categories:
        - **source_code**: source files matching the project language (excluding test files)
        - **tests**: test files (files with test/spec in name or in test directories)
        - **documentation**: markdown files, doc directories, README, CONTRIBUTING, CHANGELOG
        - **configuration**: config files, schema files, environment configs
        - **build**: build scripts, CI/CD configs, Makefiles, Dockerfiles
        - **other**: everything else

        ## Read Documentation Surface Area

        Discover and read key documentation files. Common locations include:
        - Project root: README.md, CONTRIBUTING.md, CHANGELOG.md
        - Documentation directories: docs/, doc/, wiki/
        - Configuration docs: any files documenting config options or environment variables
        - CLI/API docs: any files documenting commands, endpoints, or public interfaces

        Adapt your scan to the actual project structure — do not assume a fixed layout.

        ## Output

        Write your findings as structured JSON.
        Include:
        - scan_scope: mode ("diff" or "full"), range used, base_ref
        - changed_files: total_count + categories object with arrays of file paths
        - documentation_snapshot: array of {path, exists, summary} for each doc file
        - timestamp: current ISO 8601 timestamp
    output_artifacts:
      - name: scan-results
        path: .wave/output/scan-results.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/scan-results.json
        schema_path: .wave/contracts/doc-scan-results.schema.json
        on_failure: retry

  - id: analyze-consistency
    persona: reviewer
    dependencies: [scan-changes]
    memory:
      inject_artifacts:
        - step: scan-changes
          artifact: scan-results
          as: scan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Analyze documentation consistency by cross-referencing code changes with documentation.

        ## Cross-Reference Checks

        For each category of changed files, perform these checks:

        **CLI/API surface** (changed command or endpoint files):
        - Compare command definitions, endpoints, or public interfaces against documentation
        - Check for new, removed, or changed options/parameters
        - Verify documented examples still work

        **Configuration** (changed config schemas or parsers):
        - Compare documented options against actual config structure
        - Check for undocumented settings or environment variables

        **Source code** (changed source files):
        - Check for new exported functions/types that might need API docs
        - Look for stale code comments referencing removed features
        - Verify public API descriptions in docs match actual behavior

        **Environment variables**:
        - Scan source code for environment variable access patterns
        - Compare against documentation
        - Flag undocumented environment variables
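        One way to seed the environment-variable pass (the `os.Getenv` pattern assumes
        a Go codebase; adapt the regex to the project language):

        ```bash
        # Unique environment variables read in source; diff the list against
        # the documented ones by hand.
        grep -rhoE 'os\.Getenv\("[A-Z0-9_]+"\)' --include='*.go' . | sort -u
        ```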
        ## Severity Rating

        Rate each inconsistency:
        - **CRITICAL**: Feature exists in code but completely missing from docs, or docs describe non-existent feature
        - **HIGH**: Incorrect information in docs (wrong flag name, wrong description, wrong behavior)
        - **MEDIUM**: Outdated information (stale counts, missing new options, incomplete lists)
        - **LOW**: Minor style issues, slightly imprecise wording

        ## Output

        Write your analysis as structured JSON.
        Include:
        - summary: total_count, by_severity counts, clean (true if zero inconsistencies)
        - inconsistencies: array of {id (DOC-001 format), severity, category, title, description, source_location, doc_location, fix_description}
        - timestamp: current ISO 8601 timestamp

        If no inconsistencies are found, output an empty array with clean=true.
    output_artifacts:
      - name: consistency-report
        path: .wave/output/consistency-report.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/consistency-report.json
        schema_path: .wave/contracts/doc-consistency-report.schema.json
        on_failure: retry

  - id: compose-report
    persona: navigator
    dependencies: [analyze-consistency]
    memory:
      inject_artifacts:
        - step: analyze-consistency
          artifact: consistency-report
          as: report
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Compose a documentation consistency report as a GitHub-ready markdown file.

        ## Check for Inconsistencies

        If the consistency report has `summary.clean == true` (zero inconsistencies):
        - Write a short "No inconsistencies found" message as the report
        - Write the issue result with skipped=true and reason="clean"

        ## Compose the Report

        Write the report as markdown:

        ```
        ## Documentation Consistency Report

        **Scan date**: <timestamp from report>
        **Inconsistencies found**: <total_count>

        ### Summary by Severity

        | Severity | Count |
        |----------|-------|
        | Critical | N |
        | High | N |
        | Medium | N |
        | Low | N |

        ### Task List

        For each inconsistency (sorted by severity, critical first):
        - [ ] **[DOC-001]** (CRITICAL) Title here — `doc_location`
          Fix: fix_description

        ---
        *Generated by [Wave](https://github.com/re-cinq/wave) doc-audit pipeline*
        ```
    output_artifacts:
      - name: report
        path: .wave/output/report.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/report.md

  - id: publish
    persona: craftsman
    dependencies: [compose-report]
    memory:
      inject_artifacts:
        - step: compose-report
          artifact: report
          as: report
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        PUBLISH — create a GitHub issue from the documentation report.

        If the report says "No inconsistencies found", skip issue creation and exit.

        ## Detect Repository

        Run: `{{ forge.cli_tool }} repo view --json nameWithOwner --jq .nameWithOwner`

        ## Create the Issue

        ```bash
        {{ forge.cli_tool }} issue create \
          --title "docs: documentation consistency report" \
          --body-file .wave/artifacts/report \
          --label "documentation"
        ```

        If the `documentation` label doesn't exist, create without labels.
        If any `{{ forge.cli_tool }}` command fails, log the error and continue.

        ## Capture Result

        Write a JSON status report.
    output_artifacts:
      - name: issue-result
        path: .wave/output/issue-result.json
        type: json
    retry:
      policy: aggressive
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/issue-result.json
        schema_path: .wave/contracts/doc-issue-result.schema.json
        must_pass: true
        on_failure: retry
    outcomes:
      - type: issue
        extract_from: .wave/output/issue-result.json
        json_path: .issue_url
        label: "Documentation Issue"
.wave/pipelines/audit-dual.yaml (Normal file, 193 lines)
@@ -0,0 +1,193 @@
# Independent Parallel Tracks Pattern
#
# This pipeline demonstrates two fully independent analysis tracks
# running simultaneously and converging at a final merge step.
# Unlike the fan-out pattern (used in ops-pr-review.yaml), these tracks
# have NO shared upstream step — they start independently and converge
# only at the end.
#
# Execution flow:
#
#   quality-scan         security-scan     ← both start immediately (no deps)
#        │                    │
#   quality-detail       security-detail   ← each track continues independently
#        └─────────┬──────────┘
#                merge                     ← converges results from both tracks

kind: WavePipeline
metadata:
  name: audit-dual
  description: "Parallel code-quality and security analysis with independent tracks"
  release: true

skills:
  - software-design

input:
  source: cli
  example: "analyze the authentication module"

steps:
  # ── Track A: Code Quality ──────────────────────────────────────────
  - id: quality-scan
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Perform a code quality scan of: {{ input }}

        Identify:
        1. Code duplication and copy-paste patterns
        2. Functions exceeding 50 lines or high cyclomatic complexity
        3. Naming inconsistencies and style violations
        4. Missing or outdated documentation
        5. Unused exports, dead code, and unreachable branches

        Output a structured JSON report matching the contract schema.
    output_artifacts:
      - name: quality_scan
        path: .wave/output/quality-scan.json
        type: json

  - id: quality-detail
    persona: navigator
    model: claude-haiku
    dependencies: [quality-scan]
    memory:
      strategy: fresh
      inject_artifacts:
        - step: quality-scan
          artifact: quality_scan
          as: scan_results
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Deepen the code quality analysis from the scan results.

        For each finding in .wave/artifacts/scan_results:
        1. Verify the finding by reading the source code
        2. Assess severity and impact on maintainability
        3. Suggest specific refactoring with code examples
        4. Search for similar patterns elsewhere in the codebase

        Produce a markdown report with prioritized recommendations.
    output_artifacts:
      - name: quality_report
        path: .wave/output/quality-detail.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/quality-detail.md

  # ── Track B: Security ──────────────────────────────────────────────
  - id: security-scan
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Perform a security scan of: {{ input }}

        Check for:
        1. Injection vulnerabilities (SQL, command, path traversal)
        2. Authentication and authorization gaps
        3. Hardcoded secrets or credentials
        4. Insecure data handling (missing encryption, logging sensitive data)
        5. Input validation gaps at system boundaries

        Output a structured JSON report matching the contract schema.
    output_artifacts:
      - name: security_scan
        path: .wave/output/security-scan.json
        type: json

  - id: security-detail
    persona: navigator
    model: claude-haiku
    dependencies: [security-scan]
    memory:
      strategy: fresh
      inject_artifacts:
        - step: security-scan
          artifact: security_scan
          as: scan_results
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Deepen the security analysis from the scan results.

        For each finding in .wave/artifacts/scan_results:
        1. Verify by reading the actual source code
        2. Trace data flow from entry point to sink
        3. Assess exploitability and real-world impact
        4. Propose specific remediation with code examples

        Produce a markdown report with severity-ordered findings.
    output_artifacts:
      - name: security_report
        path: .wave/output/security-detail.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/security-detail.md

  # ── Merge: Converge both tracks ────────────────────────────────────
  - id: merge
    persona: summarizer
    model: claude-haiku
    dependencies: [quality-detail, security-detail]
    memory:
      strategy: fresh
      inject_artifacts:
        - step: quality-detail
          artifact: quality_report
          as: quality_findings
        - step: security-detail
          artifact: security_report
          as: security_findings
    exec:
      type: prompt
      source: |
        Synthesize the quality and security analysis reports into a
        unified assessment.

        Read both reports:
        - .wave/artifacts/quality_findings (code quality)
        - .wave/artifacts/security_findings (security)

        Produce a final report with:
        1. Executive summary with overall health rating
        2. Critical issues requiring immediate attention
        3. Top recommendations ordered by impact
        4. Positive observations and strengths
    output_artifacts:
      - name: report
        path: .wave/output/dual-analysis-report.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/dual-analysis-report.md
.wave/pipelines/audit-dx.yaml (Normal file, 66 lines)
@@ -0,0 +1,66 @@
kind: WavePipeline
metadata:
  name: audit-dx
  description: "Evaluate developer experience for contributors and integrators"
  release: true

skills:
  - software-design

input:
  source: cli
  example: "audit the contributor onboarding experience"
  schema:
    type: string
    description: "DX area to audit: onboarding, testing, ci, api, or empty for full audit"

steps:
  - id: audit
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Perform a developer experience audit of: {{ input }}

        ## Evaluation Areas

        1. **Setup experience**: Can a new contributor get running quickly?
           Check README, Makefile/scripts, dependency installation, IDE setup.

        2. **Code navigation**: Is the codebase easy to navigate?
           Check package organization, naming, documentation, godoc comments.

        3. **Testing**: Is it easy to write and run tests?
           Check test helpers, mocks, fixtures, CI integration.

        4. **Debugging**: Can developers debug issues efficiently?
           Check logging, debug flags, error messages, stack traces.

        5. **API surface**: Is the internal API surface clean?
           Check exported vs unexported, interface boundaries, type safety.

        6. **Extensibility**: Can developers add new pipelines, personas,
           and contracts without modifying core code?
           Check plugin points, configuration, documentation.

        7. **CI/CD**: Is the CI pipeline fast and reliable?
           Check test times, flaky tests, build reproducibility.

        ## Output

        Produce a markdown report with findings grouped by area,
        severity ratings, and specific improvement recommendations.
    output_artifacts:
      - name: dx-report
        path: .wave/output/dx-audit-report.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/dx-audit-report.md
.wave/pipelines/audit-junk-code.yaml (Normal file, 73 lines)
@@ -0,0 +1,73 @@
kind: WavePipeline
metadata:
  name: audit-junk-code
  description: "Identify accidental complexity, conceptual misalignment, and technical debt"
  release: true

skills:
  - software-design

input:
  source: cli
  example: "internal/ — find accidental complexity and dead weight"
  schema:
    type: string
    description: "Package or directory scope to analyze, or empty for full codebase"

steps:
  - id: scan
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Identify accidental complexity and technical debt.

        Scope: {{ input }}

        ## What to Look For

        1. **Over-engineering**: Abstractions with one consumer, generics where
           concrete types suffice, configuration for hypothetical use cases.

        2. **Copy-paste drift**: Nearly-identical code blocks that diverged
           slightly over time instead of being extracted.

        3. **Stale code**: TODO/FIXME comments older than 3 months, commented-out
           code blocks, unused imports or variables.

        4. **Conceptual misalignment**: Types or functions in wrong packages,
           misleading names, abstraction boundaries that don't match the domain.

        5. **Complexity hotspots**: Functions over 50 lines, deeply nested
           control flow (3+ levels), cyclomatic complexity > 10.

        6. **Test debt**: Tests that don't test anything meaningful, tests
           with `t.Skip()` without linked issues, flaky test patterns.
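        A quick seed for the stale-code pass (plain `git grep` over tracked Go files;
        checking how old a marker is still requires `git blame` on each hit):

        ```bash
        # TODO/FIXME markers across tracked Go files, with line numbers
        git grep -nE 'TODO|FIXME' -- '*.go'
        ```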
        ## Output

        For each finding, provide:
        - Location (file:line)
        - Category (over-engineering, copy-paste, stale, misaligned, complex, test-debt)
        - Description of the issue
        - Suggested remediation
        - Effort estimate (trivial/small/medium/large)
    output_artifacts:
      - name: assessment
        path: .wave/output/assessment.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/assessment.json
        schema_path: .wave/contracts/improvement-assessment.schema.json
        on_failure: retry
.wave/pipelines/audit-pedagogy.yaml (Normal file, 147 lines)
@@ -0,0 +1,147 @@
kind: WavePipeline
metadata:
  name: audit-pedagogy
  description: "Didactic quality audit: evaluate exercises for learning effectiveness, not code quality"
  release: false

skills:
  - software-design

input:
  source: cli
  example: "Audit all lesson modules for pedagogical quality"
  schema:
    type: string
    description: "Focus area or scope for the pedagogy audit"

steps:
  - id: scan-lessons
    persona: navigator
    workspace:
      type: basic
      root: ./
    exec:
      type: prompt
      source: |
        Scan ALL lesson JSON files in the lessons/ directory (English versions only, not translations).

        For EACH lesson file:
        1. Read the full JSON
        2. For each exercise in the lessons array, extract:
           - id, title, task, description, solution, validations, codePrefix, codeSuffix
        3. Analyze the relationship between task description and solution:
           - Is the solution literally stated in the task/description text?
           - Does solving it require understanding beyond what's written?
           - Are there multiple valid solutions or only one exact match?

        Output a structured inventory of all exercises with their metadata.
        Write to .wave/output/lesson-inventory.json
    output_artifacts:
      - name: inventory
        path: .wave/output/lesson-inventory.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/lesson-inventory.json
        schema_path: .wave/contracts/lesson-inventory.schema.json
        on_failure: skip

  - id: pedagogy-audit
    persona: pedagogy-auditor
    dependencies: [scan-lessons]
    memory:
      inject_artifacts:
        - step: scan-lessons
          artifact: inventory
          as: lessons
    workspace:
      type: basic
      root: ./
    exec:
      type: prompt
      source: |
        Perform a thorough pedagogical audit of all lesson modules.

        You have the full lesson inventory. For EACH module, evaluate:

        1. BLOOM'S TAXONOMY LEVEL
           - What cognitive level do most exercises target?
           - Level 1 (Remember): Type exact syntax from the description
           - Level 2 (Understand): Adapt a concept to a slightly different context
           - Level 3 (Apply): Solve a novel problem using learned concepts
           - Level 4 (Analyze): Debug, compare, or optimize code

        2. COPY-PASTE SCORE (0-100)
           - Compare each task description to its solution
           - If the solution text appears verbatim in the description → high copy-paste
           - If the student must transform/combine information → low copy-paste
           - Score 100 = pure copy-paste, 0 = fully original thinking required

        3. TRANSFER REQUIREMENT
           - Does the student need to apply concepts from earlier lessons?
           - Are there exercises that combine multiple skills?
           - Does difficulty progress within the module?

        4. VALIDATION QUALITY
           - Do validations accept multiple correct solutions?
           - Do error messages guide learning or just say "wrong"?
           - Are there partial-credit possibilities?

        5. SPECIFIC ISSUES per exercise
           For exercises scoring poorly, provide:
           - The exact problem (e.g., "solution 'display: flex;' is literally in the task text")
           - A concrete improvement suggestion
           - Expected impact on learning

        Be brutally honest. The goal is to identify WHERE students coast through
        without learning and WHERE they get stuck without support.

        Write the full audit to .wave/output/pedagogy-report.json
        Also write a human-readable markdown summary to .wave/output/pedagogy-report.md
    output_artifacts:
      - name: report
        path: .wave/output/pedagogy-report.md
        type: markdown
      - name: report-json
        path: .wave/output/pedagogy-report.json
        type: json

  - id: improvement-plan
    persona: planner
    dependencies: [pedagogy-audit]
    memory:
      inject_artifacts:
        - step: pedagogy-audit
          artifact: report-json
          as: audit
    workspace:
      type: basic
      root: ./
    exec:
      type: prompt
      source: |
        Based on the pedagogy audit, create a concrete improvement plan.

        For EACH module that scored below 60 on transfer or above 60 on copy-paste:
        1. Identify the 2-3 worst exercises
        2. Write improved task descriptions that require actual thinking
        3. Suggest additional validation types that accept multiple solutions
        4. Propose new exercises that test TRANSFER, not recall

        Group improvements by priority:
        - CRITICAL: Exercises where students learn nothing (pure copy-paste)
        - HIGH: Exercises that could be great with small changes
        - MEDIUM: Missing scaffolding or difficulty gaps

        Write the plan to .wave/output/improvement-plan.json with structure:
        { modules: [{ id, current_score, improvements: [{ exercise_id, problem, improved_task, improved_validations }] }] }

        Also write .wave/output/improvement-plan.md as human-readable markdown.
    output_artifacts:
      - name: plan
        path: .wave/output/improvement-plan.md
        type: markdown
      - name: plan-json
        path: .wave/output/improvement-plan.json
        type: json
.wave/pipelines/audit-quality-loop.yaml (Normal file, 31 lines)
@@ -0,0 +1,31 @@
kind: WavePipeline
metadata:
  name: audit-quality-loop
  description: "Supervise work, loop improvements until quality passes"
  category: composition
  release: true

skills:
  - software-design

input:
  source: cli
  example: "last pipeline run"
  schema:
    type: string
    description: "Work reference to evaluate"

steps:
  - id: quality-check
    pipeline: ops-supervise
    input: "{{input}}"

loop:
  max_iterations: 3
  until: "{{supervise.output.verdict}}"
  steps:
    - id: improve
      pipeline: impl-improve
      input: "{{input}}"
    - id: recheck
      pipeline: ops-supervise
      input: "{{input}}"
.wave/pipelines/audit-security.yaml (Normal file, 157 lines)
@@ -0,0 +1,157 @@
kind: WavePipeline
metadata:
  name: audit-security
  description: "Comprehensive security vulnerability audit"
  release: true

skills:
  - software-design

input:
  source: cli
  example: "audit the authentication module for vulnerabilities"

steps:
  - id: scan
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Perform a security scan of: {{ input }}

        ## Scan Process

        1. **Map attack surface**: Identify all entry points (HTTP handlers, CLI args,
           file parsers, IPC endpoints, database queries, external API calls)

        2. **Check OWASP Top 10**:
           - Injection (SQL, command, LDAP, XPath)
           - Broken authentication/authorization
           - Sensitive data exposure
           - XML external entities (XXE)
           - Broken access control
           - Security misconfiguration
           - Cross-site scripting (XSS)
           - Insecure deserialization
           - Using components with known vulnerabilities
           - Insufficient logging and monitoring

        3. **Scan for common Go vulnerabilities** (if Go project):
           - Unchecked errors on security-critical operations
           - Race conditions on shared state
           - Path traversal via unsanitized file paths
           - Template injection
           - Unsafe use of reflect or unsafe packages
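
        To make the path-traversal item concrete, here is a minimal sketch of
        the safe pattern to look for (names and paths are illustrative, not
        taken from this project):

        ```go
        package main

        import (
            "fmt"
            "path/filepath"
            "strings"
        )

        // safeJoin resolves userPath under base and rejects results that
        // escape base; joining untrusted input without such a check is the
        // vulnerable pattern this scan step flags.
        func safeJoin(base, userPath string) (string, error) {
            p := filepath.Join(base, userPath) // Join also Cleans, collapsing ".." segments
            if p != base && !strings.HasPrefix(p, base+string(filepath.Separator)) {
                return "", fmt.Errorf("path escapes base: %q", userPath)
            }
            return p, nil
        }

        func main() {
            if _, err := safeJoin("/var/data", "../etc/passwd"); err == nil {
                panic("expected rejection")
            }
            fmt.Println("ok")
        }
        ```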

        4. **Check secrets and configuration**:
           - Hardcoded credentials, API keys, tokens
           - Insecure default configurations
           - Missing TLS/encryption
           - Overly permissive file permissions

        5. **Review dependency usage**:
           - Known vulnerable patterns in dependency usage
           - Outdated security practices
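
        As a starting point for the secrets check, a quick grep pass can
        surface candidates for manual triage (the pattern list is a rough
        assumption to tune per project; expect false positives):

        ```bash
        # Flag likely hardcoded credentials for human review
        grep -rniE '(api_?key|secret|token|password)[[:space:]]*[:=]' \
          --include='*.go' --include='*.yaml' . || true
        ```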
    output_artifacts:
      - name: scan_results
        path: .wave/output/security-scan.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/security-scan.json
        schema_path: .wave/contracts/security-scan.schema.json
        on_failure: retry

  - id: deep-dive
    persona: auditor
    dependencies: [scan]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: scan_findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Perform a deep security analysis based on the injected scan results.

        For each finding with severity HIGH or CRITICAL:

        1. **Verify the finding**: Read the actual source code at the reported location.
           Confirm the vulnerability exists (eliminate false positives).

        2. **Trace the data flow**: Follow untrusted input from entry point to sink.
           Identify all transformations and validation (or lack thereof).

        3. **Assess exploitability**: Could an attacker realistically exploit this?
           What preconditions are needed? What's the impact?

        4. **Check for related patterns**: Search for similar vulnerable patterns
           elsewhere in the codebase using Grep.

        5. **Propose remediation**: Specific, actionable fix with code examples.
           Prioritize by effort vs. impact.

        For MEDIUM and LOW findings, do a lighter review confirming they're real.

        Produce a markdown report with these sections:
        - Executive Summary
        - Confirmed Vulnerabilities (with severity badges)
        - False Positives Eliminated
        - Data Flow Analysis
        - Remediation Plan (ordered by priority)
        - Related Patterns Found
    output_artifacts:
      - name: deep_dive
        path: .wave/output/security-deep-dive.md
        type: markdown

  - id: report
    persona: summarizer
    dependencies: [deep-dive]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: scan_findings
        - step: deep-dive
          artifact: deep_dive
          as: analysis
    exec:
      type: prompt
      source: |
        Synthesize the injected scan findings and deep-dive analysis into a final report.

        Create a concise, actionable security report:

        1. **Risk Score**: Overall risk rating (CRITICAL/HIGH/MEDIUM/LOW) with justification
        2. **Top 3 Issues**: The most important findings to fix immediately
        3. **Quick Wins**: Low-effort fixes that improve security posture
        4. **Remediation Roadmap**: Ordered list of fixes by priority
        5. **What's Good**: Security practices already in place

        Format as a clean markdown report suitable for sharing with the team.
    output_artifacts:
      - name: report
        path: .wave/output/security-report.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/security-report.md
68
.wave/pipelines/audit-ux.yaml
Normal file
@@ -0,0 +1,68 @@
kind: WavePipeline
metadata:
  name: audit-ux
  description: "Evaluate user experience across CLI, TUI, docs, or workflows"
  release: true

skills:
  - software-design

input:
  source: cli
  example: "audit the CLI onboarding flow for new users"
  schema:
    type: string
    description: "UX area to audit: cli, tui, docs, onboarding, or empty for full audit"

steps:
  - id: audit
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Perform a UX audit of: {{ input }}

        ## Evaluation Criteria

        1. **Discoverability**: Can users find features without reading docs?
           Check help text, error messages, command suggestions.

        2. **Error experience**: Are error messages actionable? Do they suggest
           fixes? Check all error paths for user-friendly messages.

        3. **Progressive disclosure**: Does the interface reveal complexity
           gradually? Check default behaviors, optional flags, advanced modes.

        4. **Consistency**: Are patterns uniform across commands? Check flag
           names, output formats, exit codes.

        5. **Feedback**: Does the system communicate progress? Check spinners,
           status messages, completion indicators.

        6. **Recovery**: Can users recover from mistakes? Check undo capabilities,
           dry-run modes, confirmation prompts for destructive actions.

        7. **Documentation alignment**: Does the actual behavior match what's
           documented? Cross-reference docs/ with implementation.

        ## For Each Finding

        - Severity: critical (blocks usage), high (causes confusion),
          medium (suboptimal), low (polish)
        - Current behavior with reproduction steps
        - Expected behavior
        - Suggested fix with effort estimate
    output_artifacts:
      - name: ux-report
        path: .wave/output/ux-audit-report.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/ux-audit-report.md
141
.wave/pipelines/changelog.yaml
Normal file
@@ -0,0 +1,141 @@
kind: WavePipeline
metadata:
  name: changelog
  description: "Generate structured changelog from git history"
  release: true

input:
  source: cli
  example: "generate changelog from v0.1.0 to HEAD"

steps:
  - id: analyze-commits
    persona: navigator
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Analyze git history for changelog generation: {{ input }}

        ## Process

        1. **Determine range**: Parse input to identify the commit range.
           If tags mentioned, use them. If time period, calculate dates.
           If unclear, use last tag to HEAD (or last 50 commits).

        2. **Extract commits**: Use `git log --format` to get hash, author,
           date, subject, and body for each commit.

        3. **Parse conventional commits**: Categorize by prefix:
           feat → Features, fix → Fixes, docs → Documentation,
           refactor → Refactoring, test → Testing, chore → Maintenance,
           perf → Performance, ci → CI/CD, no prefix → Other

        4. **Identify breaking changes**: Look for `BREAKING CHANGE:` in body,
           `!` after prefix, API removals in body.

        5. **Extract scope**: Parse from prefix (e.g., `fix(pipeline):` → "pipeline")
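
        A sketch of the extraction command for step 2 (the tab-separated field
        layout is a convention chosen here, and `v0.1.0..HEAD` is the example
        range, not a requirement):

        ```bash
        # One record per commit: hash <TAB> author <TAB> ISO date <TAB> subject
        git log --format='%H%x09%an%x09%aI%x09%s' v0.1.0..HEAD
        # Commit bodies are multi-line, so fetch them per commit, e.g.:
        git log -1 --format='%b' HEAD
        ```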
    output_artifacts:
      - name: commits
        path: .wave/output/commit-analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/commit-analysis.json
        schema_path: .wave/contracts/commit-analysis.schema.json
        on_failure: retry
        max_retries: 2

  - id: categorize
    persona: planner
    dependencies: [analyze-commits]
    memory:
      inject_artifacts:
        - step: analyze-commits
          artifact: commits
          as: raw_commits
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Categorize and describe changes for a changelog using the injected commit analysis.

        ## Rules

        1. **Group by type** into sections
        2. **Write user-facing descriptions**: Rewrite technical messages into
           clear descriptions focused on what changed and why it matters.
        3. **Highlight breaking changes** first with migration notes
        4. **Deduplicate**: Combine commits for the same logical change
        5. **Add context** for significant features
    output_artifacts:
      - name: categorized
        path: .wave/output/categorized-changes.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/categorized-changes.json
        schema_path: .wave/contracts/categorized-changes.schema.json
        on_failure: retry
        max_retries: 2

  - id: format
    persona: philosopher
    dependencies: [categorize]
    memory:
      inject_artifacts:
        - step: analyze-commits
          artifact: commits
          as: raw_commits
        - step: categorize
          artifact: categorized
          as: changes
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Format the injected commit analysis and categorized changes into a polished changelog.

        Use Keep a Changelog format:

        # Changelog

        ## [Version or Date Range] - YYYY-MM-DD

        ### Breaking Changes
        - **scope**: Description. Migration: what to do

        ### Added
        - **scope**: Feature description

        ### Fixed
        - **scope**: Bug fix description

        ### Changed
        - **scope**: Change description

        ### Security
        - **scope**: Security fix description

        Rules:
        - Only include sections with entries
        - Bold scope if present
        - Most notable entries first per section
        - One line per entry, concise
        - Contributors list at bottom
    output_artifacts:
      - name: changelog
        path: .wave/output/CHANGELOG.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/CHANGELOG.md
261
.wave/pipelines/dead-code.yaml
Normal file
@@ -0,0 +1,261 @@
kind: WavePipeline
metadata:
  name: dead-code
  description: "Find dead or redundant code, remove it, and commit to a feature branch"
  release: true

input:
  source: cli
  example: "find and remove dead code in internal/pipeline"

steps:
  - id: scan
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Scan for dead or redundant code: {{ input }}

        ## What to Look For

        1. **Unused exports**: Exported functions, types, constants, or variables
           that are never referenced outside their package.

        2. **Unreachable code**: Code after return/panic, impossible branches,
           dead switch cases.

        3. **Orphaned files**: Files not imported by any other file in the project.

        4. **Redundant code**: Duplicate functions, copy-paste blocks,
           wrappers that add no value.

        5. **Stale tests**: Tests for functions that no longer exist,
           or tests that test nothing meaningful.

        6. **Unused dependencies**: Imports that are no longer needed.

        7. **Commented-out code**: Large blocks of commented code that
           should be deleted (git has history).

        ## Verification

        For each finding, verify it's truly dead:
        - Grep for all references across the entire codebase
        - Check for reflect-based or string-based usage
        - Check if it's part of an interface implementation
        - Check for build tag conditional compilation

        Produce a structured JSON result matching the contract schema.
        Only include findings with high or medium confidence. Skip low confidence.
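
        The reference check above can be sketched as follows, where `OldHelper`
        and the defining file path are placeholders for an actual candidate:

        ```bash
        # Zero hits outside the defining file supports, but does not prove,
        # that a symbol is dead (reflect- or string-based use will not match).
        grep -rn 'OldHelper' --include='*.go' . | grep -v 'internal/legacy/helper.go' || true
        ```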
    output_artifacts:
      - name: scan_results
        path: .wave/output/dead-code-scan.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/dead-code-scan.json
        schema_path: .wave/contracts/dead-code-scan.schema.json
        on_failure: retry
        max_retries: 2

  - id: clean
    persona: craftsman
    dependencies: [scan]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: findings
    workspace:
      type: worktree
      branch: "chore/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Remove the dead code on this isolated worktree branch.

        The scan findings have been injected into your workspace. Read them first.

        ## Process

        1. **Remove dead code** — ONLY high-confidence findings:
           - Start with unused imports (safest)
           - Then commented-out code blocks
           - Then unused exports
           - Then orphaned files
           - Skip anything with confidence=medium unless trivially safe
           - After each removal, verify: `go build ./...`

        2. **Run goimports** if available to clean up imports:
           ```bash
           goimports -w <modified-files> 2>/dev/null || true
           ```

        3. **Run full test suite**:
           ```bash
           go test ./... -count=1
           ```

        4. **Commit**:
           ```bash
           git add <specific-files>
           git commit -m "chore: remove dead code

           Removed N items of dead code:
           - DC-001: <symbol> (unused export)
           - DC-002: <file> (orphaned file)
           ..."
           ```

        If ANY test fails after a removal, revert that specific removal
        and continue with the next item.
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
        max_retries: 3

  - id: verify
    persona: reviewer
    dependencies: [clean]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: original_findings
    exec:
      type: prompt
      source: |
        Verify the dead code removal was safe.

        The original scan findings have been injected into your workspace. Read them first.

        Check:
        1. Were only high-confidence items removed?
        2. Are all tests still passing?
        3. Does the project still build cleanly?
        4. Were any false positives accidentally removed?
        5. Is the commit focused (no unrelated changes)?

        Produce a verification report covering:
        - Items removed (with justification)
        - Items skipped (with reason)
        - Lines of code removed
        - Test status
        - Overall assessment: CLEAN / NEEDS_REVIEW
    output_artifacts:
      - name: verification
        path: .wave/output/verification.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/verification.md

  - id: create-pr
    persona: craftsman
    dependencies: [verify]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: findings
        - step: verify
          artifact: verification
          as: verification_report
    workspace:
      type: worktree
      branch: "chore/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Create a pull request for the dead code removal.

        ## Working Directory

        You are running in an **isolated git worktree** shared with previous pipeline steps.
        Your working directory IS the project root. The branch already exists from the
        clean step — just push it and create the PR.

        ## SAFETY: Do NOT Modify the Working Tree

        This step MUST NOT run `git checkout`, `git stash`, or any command that changes
        the current branch or working tree state.

        ## Instructions

        ### Step 1: Load Context

        The scan findings and verification report have been injected into your workspace.
        Read them both to understand what was found and the verification outcome.

        ### Step 2: Push the Branch

        ```bash
        git push -u origin HEAD
        ```

        ### Step 3: Create Pull Request

        ```bash
        gh pr create --title "chore: remove dead code" --body "$(cat <<'PREOF'
        ## Summary

        Automated dead code removal based on static analysis scan.

        <summarize what was removed: N items, types, estimated lines saved>

        ## Verification

        <summarize verification report: CLEAN or NEEDS_REVIEW, test status>

        ## Removed Items

        <list each removed item with its ID, type, and location>

        ## Test Plan

        - Full test suite passed after each removal
        - Build verified clean after all removals
        - Auditor persona verified no false positives
        PREOF
        )"
        ```

        ### Step 4: Request Copilot Review (Best-Effort)

        ```bash
        gh pr edit --add-reviewer "copilot" 2>/dev/null || true
        ```

        ## CONSTRAINTS

        - Do NOT spawn Task subagents — work directly in the main context
        - Do NOT run `git checkout`, `git stash`, or any branch-switching commands
        - Do NOT include Co-Authored-By or AI attribution in commits
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

outcomes:
  - type: pr
    extract_from: .wave/output/pr-result.json
    json_path: .pr_url
    label: "Pull Request"
142
.wave/pipelines/debug.yaml
Normal file
@@ -0,0 +1,142 @@
kind: WavePipeline
metadata:
  name: debug
  description: "Systematic debugging with hypothesis testing"
  release: true

input:
  source: cli
  example: "TestPipelineExecutor fails with nil pointer on resume"

steps:
  - id: reproduce
    persona: debugger
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Reproduce and characterize the issue: {{ input }}

        1. Understand expected vs actual behavior
        2. Create minimal reproduction steps
        3. Identify relevant code paths
        4. Note environmental factors (OS, versions, config)
    output_artifacts:
      - name: reproduction
        path: .wave/output/reproduction.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/reproduction.json
        schema_path: .wave/contracts/debug-reproduction.schema.json
        on_failure: retry
        max_retries: 2

  - id: hypothesize
    persona: debugger
    dependencies: [reproduce]
    memory:
      inject_artifacts:
        - step: reproduce
          artifact: reproduction
          as: issue
    exec:
      type: prompt
      source: |
        Form hypotheses about the root cause.

        For each hypothesis:
        1. What could cause this behavior?
        2. What evidence would confirm/refute it?
        3. How to test this hypothesis?

        Rank by likelihood and ease of testing.
    output_artifacts:
      - name: hypotheses
        path: .wave/output/hypotheses.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/hypotheses.json
        schema_path: .wave/contracts/debug-hypotheses.schema.json
        on_failure: retry
        max_retries: 2

  - id: investigate
    persona: debugger
    dependencies: [hypothesize]
    memory:
      inject_artifacts:
        - step: reproduce
          artifact: reproduction
          as: issue
        - step: hypothesize
          artifact: hypotheses
          as: hypotheses
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Test each hypothesis systematically.

        1. Start with most likely / easiest to test
        2. Use git bisect if needed to find regression
        3. Add diagnostic logging to trace execution
        4. Examine data flow and state changes
        5. Document findings for each hypothesis

        Continue until root cause is identified.
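
        When a regression window is known, step 2 can be sketched as follows
        (the refs and test selector are assumptions; substitute the actual
        failing test and last known-good ref):

        ```bash
        git bisect start HEAD v0.1.0     # current bad ref, then last known-good ref
        # Any command whose exit code distinguishes good (0) from bad (non-zero):
        git bisect run go test ./internal/pipeline/ -run TestPipelineExecutor -count=1
        git bisect reset                 # return to the original checkout
        ```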
    output_artifacts:
      - name: findings
        path: .wave/output/investigation.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/investigation.md

  - id: fix
    persona: craftsman
    dependencies: [investigate]
    memory:
      inject_artifacts:
        - step: investigate
          artifact: findings
          as: root_cause
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Fix the root cause identified in the investigation.

        1. Implement the minimal fix
        2. Add a regression test that would have caught this
        3. Remove any diagnostic code added during debugging
        4. Verify the original reproduction no longer fails
        5. Check for similar issues elsewhere
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry
        max_retries: 3
    output_artifacts:
      - name: fix
        path: .wave/output/fix-summary.md
        type: markdown
149
.wave/pipelines/doc-changelog.yaml
Normal file
@@ -0,0 +1,149 @@
kind: WavePipeline
metadata:
  name: doc-changelog
  description: "Generate structured changelog from git history"
  release: true

skills:
  - software-design

input:
  source: cli
  example: "generate changelog from v0.1.0 to HEAD"

steps:
  - id: analyze-commits
    persona: navigator
    model: claude-haiku
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Analyze git history for changelog generation: {{ input }}

        ## Process

        1. **Determine range**: Parse input to identify the commit range.
           If tags mentioned, use them. If time period, calculate dates.
           If unclear, use last tag to HEAD (or last 50 commits).

        2. **Extract commits**: Use `git log --format` to get hash, author,
           date, subject, and body for each commit.

        3. **Parse conventional commits**: Categorize by prefix:
           feat → Features, fix → Fixes, docs → Documentation,
           refactor → Refactoring, test → Testing, chore → Maintenance,
           perf → Performance, ci → CI/CD, no prefix → Other

        4. **Identify breaking changes**: Look for `BREAKING CHANGE:` in body,
           `!` after prefix, API removals in body.

        5. **Extract scope**: Parse from prefix (e.g., `fix(pipeline):` → "pipeline")
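
        The breaking-change check in step 4 can be sketched as follows
        (`v0.1.0..HEAD` is the example range from the pipeline input):

        ```bash
        # Subjects using the "!" marker, e.g. "feat(api)!: ..."
        git log --format='%s' v0.1.0..HEAD | grep -E '^[a-z]+(\([^)]*\))?!:' || true
        # Bodies declaring a breaking change
        git log --format='%b' v0.1.0..HEAD | grep '^BREAKING CHANGE:' || true
        ```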
    output_artifacts:
      - name: commits
        path: .wave/output/commit-analysis.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/commit-analysis.json
        schema_path: .wave/contracts/commit-analysis.schema.json
        on_failure: retry

  - id: categorize
    persona: planner
    dependencies: [analyze-commits]
    memory:
      inject_artifacts:
        - step: analyze-commits
          artifact: commits
          as: raw_commits
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Categorize and describe changes for a changelog using the injected commit analysis.

        ## Rules

        1. **Group by type** into sections
        2. **Write user-facing descriptions**: Rewrite technical messages into
           clear descriptions focused on what changed and why it matters.
        3. **Highlight breaking changes** first with migration notes
        4. **Deduplicate**: Combine commits for the same logical change
        5. **Add context** for significant features
    output_artifacts:
      - name: categorized
        path: .wave/output/categorized-changes.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/categorized-changes.json
        schema_path: .wave/contracts/categorized-changes.schema.json
        on_failure: retry

  - id: format
    persona: philosopher
    dependencies: [categorize]
    memory:
      inject_artifacts:
        - step: analyze-commits
          artifact: commits
          as: raw_commits
        - step: categorize
          artifact: categorized
          as: changes
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Format the injected commit analysis and categorized changes into a polished changelog.

        Use Keep a Changelog format:

        # Changelog

        ## [Version or Date Range] - YYYY-MM-DD

        ### Breaking Changes
        - **scope**: Description. Migration: what to do

        ### Added
        - **scope**: Feature description

        ### Fixed
        - **scope**: Bug fix description

        ### Changed
        - **scope**: Change description

        ### Security
        - **scope**: Security fix description

        Rules:
        - Only include sections with entries
        - Bold scope if present
        - Most notable entries first per section
        - One line per entry, concise
        - Contributors list at bottom
    output_artifacts:
      - name: changelog
        path: .wave/output/CHANGELOG.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/CHANGELOG.md
140
.wave/pipelines/doc-explain.yaml
Normal file
@@ -0,0 +1,140 @@
|
||||
kind: WavePipeline
|
||||
metadata:
|
||||
name: doc-explain
|
||||
description: "Deep-dive explanation of code, modules, or architectural patterns"
|
||||
release: true
|
||||
|
||||
skills:
|
||||
- software-design
|
||||
|
||||
input:
|
||||
source: cli
|
||||
example: "explain the pipeline execution system and how steps are scheduled"
|
||||
|
||||
steps:
|
||||
- id: explore
|
||||
persona: navigator
|
||||
model: claude-haiku
|
||||
workspace:
|
||||
mount:
|
||||
- source: ./
|
||||
target: /project
|
||||
mode: readonly
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Explore the codebase to understand: {{ input }}
|
||||
|
||||
## Exploration Steps
|
||||
|
||||
1. **Find relevant files**: Use Glob and Grep to locate all files related
|
||||
to the topic. Cast a wide net — include implementations, tests, configs,
|
||||
and documentation.
|
||||
|
||||
2. **Trace the call graph**: For key entry points, follow the execution flow.
|
||||
Note which functions call which, and how data flows through the system.
|
||||
|
||||
3. **Identify key abstractions**: Find the core types, interfaces, and structs.
|
||||
Note their responsibilities and relationships.
|
||||
|
||||
4. **Map dependencies**: Which packages/modules does this depend on?
|
||||
Which depend on it?
|
||||
|
||||
5. **Find tests**: Locate test files that exercise this code.
|
||||
Tests often reveal intended behavior and edge cases.
|
||||
|
||||
6. **Check configuration**: Find config files, constants, or environment
|
||||
variables that affect behavior.
|
||||
|
||||
output_artifacts:
|
||||
- name: exploration
|
||||
path: .wave/output/exploration.json
|
||||
type: json
|
||||
retry:
|
||||
policy: patient
|
||||
max_attempts: 2
|
||||
handover:
|
||||
contract:
|
||||
type: json_schema
|
||||
source: .wave/output/exploration.json
|
||||
schema_path: .wave/contracts/explain-exploration.schema.json
|
||||
on_failure: retry
|
||||
|
||||
- id: analyze
|
||||
persona: planner
|
||||
model: claude-haiku
|
||||
dependencies: [explore]
|
||||
memory:
|
||||
inject_artifacts:
|
||||
- step: explore
|
||||
artifact: exploration
|
||||
as: codebase_map
|
||||
workspace:
|
||||
mount:
|
||||
- source: ./
|
||||
target: /project
|
||||
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze the architecture and design of the explored code.

        Review the injected exploration data, then read the key source files identified. Focus on:

        1. **Design patterns**: What patterns are used and why?
        2. **Data flow**: How does data enter, transform, and exit?
        3. **Error handling**: What's the error strategy?
        4. **Concurrency model**: Goroutines, channels, mutexes?
        5. **Extension points**: Where can new functionality be added?
        6. **Design decisions**: What trade-offs were made?
    output_artifacts:
      - name: analysis
        path: .wave/output/analysis.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/analysis.json
        schema_path: .wave/contracts/explain-analysis.schema.json
        on_failure: retry

  - id: document
    persona: philosopher
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: codebase_map
        - step: analyze
          artifact: analysis
          as: architecture
    exec:
      type: prompt
      source: |
        Write a comprehensive explanation document.

        Review the injected exploration and architecture data, then produce a markdown document with:

        1. **Overview** — One paragraph summary
        2. **Key Concepts** — Core abstractions and terminology (glossary)
        3. **Architecture** — How pieces fit together (include ASCII diagram)
        4. **How It Works** — Step-by-step main execution flow with file:line refs
        5. **Design Decisions** — Decision → Rationale → Trade-off entries
        6. **Extension Guide** — How to add new functionality
        7. **Testing Strategy** — How the code is tested
        8. **Common Pitfalls** — Things that trip people up

        Write for an experienced developer new to this codebase.
        Use real file paths, function names, and type names.
    output_artifacts:
      - name: explanation
        path: .wave/output/explanation.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/explanation.md
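The `non_empty_file` handover contract above can be approximated with a one-line shell check. This is a sketch of the assumed semantics (the artifact exists and has size greater than zero), not Wave's actual implementation:

```shell
#!/bin/sh
# Sketch of a non_empty_file contract check: succeed only when the
# artifact file exists and is non-empty.
check_non_empty() {
  [ -s "$1" ]
}

printf 'Overview...\n' > /tmp/explanation.md
if check_non_empty /tmp/explanation.md; then
  echo "contract satisfied"
else
  echo "contract failed"
fi
```

`[ -s FILE ]` is the POSIX test for "exists and has a size greater than zero", which matches the contract name.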
255
.wave/pipelines/doc-fix.yaml
Normal file
@@ -0,0 +1,255 @@
kind: WavePipeline
metadata:
  name: doc-fix
  description: "Scan documentation for inconsistencies, fix them, and commit to a feature branch"
  release: true

skills:
  - software-design

input:
  source: cli
  example: "sync docs with current implementation"

steps:
  - id: scan-changes
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Scan the repository for documentation inconsistencies: {{ input }}

        ## Process

        1. **Identify documentation files**: Find all markdown files, README,
           CONTRIBUTING, docs/ directory, inline code comments with doc references.

        2. **Identify code surface area**: Scan for exported functions, CLI commands,
           config options, environment variables, API endpoints.

        3. **Cross-reference**: For each documented feature, verify it exists in code.
           For each code feature, verify it's documented.

        4. **Check accuracy**: Compare documented behavior, flags, options, and
           examples against actual implementation.

        5. **Categorize findings**:
           - MISSING_DOCS: Feature in code, not in docs
           - STALE_DOCS: Docs reference removed/changed feature
           - INACCURATE: Docs describe wrong behavior
           - INCOMPLETE: Docs exist but missing details

        Write your findings as structured JSON.
        Include: scan_scope, findings (id, type, severity, title, doc_location, code_location,
        description, suggested_fix), summary (total_findings, by_type, by_severity, fixable_count),
        and timestamp.
    output_artifacts:
      - name: scan_results
        path: .wave/output/doc-scan.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/doc-scan.json
        schema_path: .wave/contracts/doc-fix-scan.schema.json
        on_failure: retry

  - id: analyze
    persona: reviewer
    dependencies: [scan-changes]
    memory:
      inject_artifacts:
        - step: scan-changes
          artifact: scan_results
          as: scan
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Review the doc scan findings and prioritize fixes.

        For each finding:
        1. Verify it's a real inconsistency (not a false positive)
        2. Assess if it can be fixed by editing docs alone
        3. Prioritize: CRITICAL/HIGH first, then by effort

        Produce a fix plan as markdown:
        - Ordered list of fixes to apply
        - For each: which file to edit, what to change, why
        - Skip items that require code changes (docs-only fixes)
        - Estimated scope of changes
    output_artifacts:
      - name: fix_plan
        path: .wave/output/fix-plan.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/fix-plan.md

  - id: fix-docs
    persona: craftsman
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: scan-changes
          artifact: scan_results
          as: scan
        - step: analyze
          artifact: fix_plan
          as: impl_plan
    workspace:
      type: worktree
      branch: "fix/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Fix the documentation inconsistencies on this isolated worktree branch.

        ## Process

        1. **Apply fixes** following the priority order from the plan:
           - Edit documentation files to fix each inconsistency
           - Keep changes minimal and focused
           - Preserve existing formatting and style
           - Do NOT modify source code — docs-only changes

        2. **Verify**: Ensure no broken links or formatting issues

        3. **Commit**:
           ```bash
           git add <changed-doc-files>
           git commit -m "docs: sync documentation with implementation

           Fix N documentation inconsistencies found by doc-fix pipeline:
           - DOC-001: <title>
           - DOC-002: <title>
           ..."
           ```

        Write a summary including:
        - Branch name
        - List of files modified
        - Findings fixed vs skipped
        - Commit hash
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry
    output_artifacts:
      - name: result
        path: .wave/output/result.md
        type: markdown

  - id: create-pr
    persona: craftsman
    dependencies: [fix-docs]
    memory:
      inject_artifacts:
        - step: scan-changes
          artifact: scan_results
          as: scan
        - step: fix-docs
          artifact: result
          as: fix_result
    workspace:
      type: worktree
      branch: "fix/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Create a pull request for the documentation fixes.

        ## Working Directory

        You are running in an **isolated git worktree** shared with previous pipeline steps.
        Your working directory IS the project root. The branch already exists from the
        fix-docs step — just push it and create the PR.

        ## SAFETY: Do NOT Modify the Working Tree

        This step MUST NOT run `git checkout`, `git stash`, or any command that changes
        the current branch or working tree state.

        ## Instructions

        ### Step 1: Push the Branch

        ```bash
        git push -u origin HEAD
        ```

        ### Step 2: Create Pull Request

        ```bash
        {{ forge.cli_tool }} {{ forge.pr_command }} create --title "docs: sync documentation with implementation" --body "$(cat <<'PREOF'
        ## Summary

        Automated documentation sync to fix inconsistencies between docs and code.

        <summarize: N findings fixed, types of issues addressed>

        ## Changes

        <list each doc file modified and what was fixed>

        ## Findings Addressed

        <list each finding ID, type, and resolution>

        ## Skipped

        <list any findings that were skipped and why>
        PREOF
        )"
        ```

        ### Step 3: Request Copilot Review (Best-Effort)

        ```bash
        {{ forge.cli_tool }} {{ forge.pr_command }} edit --add-reviewer "copilot" 2>/dev/null || true
        ```

        ## CONSTRAINTS

        - Do NOT spawn Task subagents — work directly in the main context
        - Do NOT run `git checkout`, `git stash`, or any branch-switching commands
        - Do NOT include Co-Authored-By or AI attribution in commits

    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    retry:
      policy: aggressive
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
    outcomes:
      - type: pr
        extract_from: .wave/output/pr-result.json
        json_path: .pr_url
        label: "Pull Request"
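The quoted heredoc in Step 2 of the create-pr prompt (`<<'PREOF'`) keeps the PR body literal: because the delimiter is quoted, the shell performs no variable or command expansion inside it. A minimal standalone sketch of the same pattern, writing the body to a file instead of passing it to a CLI:

```shell
#!/bin/sh
# Compose a PR body with a quoted heredoc. Quoting the delimiter
# ('PREOF') disables $-expansion and command substitution in the body.
cat > /tmp/pr-body.md <<'PREOF'
## Summary

Automated documentation sync to fix inconsistencies between docs and code.
The literal text $(date) is NOT expanded because the delimiter is quoted.
PREOF

grep -c 'NOT expanded' /tmp/pr-body.md
```

With an unquoted delimiter (`<<PREOF`), `$(date)` would instead be replaced with the current date before the body reached the file.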
129
.wave/pipelines/doc-onboard.yaml
Normal file
@@ -0,0 +1,129 @@
kind: WavePipeline
metadata:
  name: doc-onboard
  description: "Generate a project onboarding guide for new contributors"
  release: true

skills:
  - software-design

input:
  source: cli
  example: "create an onboarding guide for this project"

steps:
  - id: survey
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Survey this project to build an onboarding guide: {{ input }}

        ## Survey Checklist

        1. **Project identity**: Find README, package manifests (go.mod, package.json),
           license, and config files. Determine language, framework, purpose.

        2. **Build system**: How to build, test, and run the project.
           Find Makefiles, scripts, CI configs, Dockerfiles.

        3. **Directory structure**: Map the top-level layout and key directories.
           What does each directory contain?

        4. **Architecture**: Identify the main components and how they interact.
           Find entry points (main.go, index.ts, etc.).

        5. **Dependencies**: List key dependencies and their purposes.
           Check go.mod, package.json, requirements.txt, etc.

        6. **Configuration**: Find environment variables, config files, feature flags.

        7. **Testing**: Where are tests? How to run them? What patterns are used?

        8. **Development workflow**: Find contributing guides, PR templates,
           commit conventions, branch strategies.

        9. **Documentation**: Where is documentation? Is it up to date?
    output_artifacts:
      - name: survey
        path: .wave/output/project-survey.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/project-survey.json
        schema_path: .wave/contracts/project-survey.schema.json
        on_failure: retry

  - id: guide
    persona: philosopher
    dependencies: [survey]
    memory:
      inject_artifacts:
        - step: survey
          artifact: survey
          as: project_info
    exec:
      type: prompt
      source: |
        Write a comprehensive onboarding guide for new contributors.

        Using the injected project survey data, write a guide with these sections:

        # Onboarding Guide: [Project Name]

        ## Quick Start
        - Prerequisites (what to install)
        - Clone and build (exact commands)
        - Run tests (exact commands)
        - Run the project (exact commands)

        ## Project Overview
        - What this project does (2-3 sentences)
        - Key technologies and why they were chosen
        - High-level architecture (ASCII diagram)

        ## Directory Map
        - What each top-level directory contains
        - Where to find things (tests, configs, docs)

        ## Core Concepts
        - Key abstractions and terminology
        - How the main components interact
        - Data flow through the system

        ## Development Workflow
        - How to create a feature branch
        - Commit message conventions
        - How to run tests before pushing
        - PR process

        ## Common Tasks
        - "I want to add a new [feature/command/endpoint]" → where to start
        - "I want to fix a bug" → debugging approach
        - "I want to understand [component]" → where to look

        ## Helpful Resources
        - Documentation locations
        - Key files to read first
        - Related external docs

        Write for someone on their first day with this codebase.
        Be specific — use real paths, real commands, real examples.
    output_artifacts:
      - name: guide
        path: .wave/output/onboarding-guide.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/onboarding-guide.md
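Survey item 1 above (determine the project's language from its package manifests) can be sketched as a small shell helper. The manifest-to-language mapping here is an illustrative assumption, not part of the pipeline itself:

```shell
#!/bin/sh
# Guess a project's primary language from which manifest file exists.
detect_language() {
  dir="$1"
  if [ -f "$dir/go.mod" ]; then echo go
  elif [ -f "$dir/package.json" ]; then echo javascript
  elif [ -f "$dir/requirements.txt" ]; then echo python
  else echo unknown
  fi
}

mkdir -p /tmp/survey-demo && touch /tmp/survey-demo/go.mod
detect_language /tmp/survey-demo   # prints: go
```

A real survey would check more manifests (Cargo.toml, pom.xml, etc.) and handle polyglot repos, but the shape of the check is the same.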
131
.wave/pipelines/explain.yaml
Normal file
@@ -0,0 +1,131 @@
kind: WavePipeline
metadata:
  name: explain
  description: "Deep-dive explanation of code, modules, or architectural patterns"
  release: true

input:
  source: cli
  example: "explain the pipeline execution system and how steps are scheduled"

steps:
  - id: explore
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Explore the codebase to understand: {{ input }}

        ## Exploration Steps

        1. **Find relevant files**: Use Glob and Grep to locate all files related
           to the topic. Cast a wide net — include implementations, tests, configs,
           and documentation.

        2. **Trace the call graph**: For key entry points, follow the execution flow.
           Note which functions call which, and how data flows through the system.

        3. **Identify key abstractions**: Find the core types, interfaces, and structs.
           Note their responsibilities and relationships.

        4. **Map dependencies**: Which packages/modules does this depend on?
           Which depend on it?

        5. **Find tests**: Locate test files that exercise this code.
           Tests often reveal intended behavior and edge cases.

        6. **Check configuration**: Find config files, constants, or environment
           variables that affect behavior.
    output_artifacts:
      - name: exploration
        path: .wave/output/exploration.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/exploration.json
        schema_path: .wave/contracts/explain-exploration.schema.json
        on_failure: retry
        max_retries: 2

  - id: analyze
    persona: planner
    dependencies: [explore]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: codebase_map
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze the architecture and design of the explored code.

        Review the injected exploration data, then read the key source files identified. Focus on:

        1. **Design patterns**: What patterns are used and why?
        2. **Data flow**: How does data enter, transform, and exit?
        3. **Error handling**: What's the error strategy?
        4. **Concurrency model**: Goroutines, channels, mutexes?
        5. **Extension points**: Where can new functionality be added?
        6. **Design decisions**: What trade-offs were made?
    output_artifacts:
      - name: analysis
        path: .wave/output/analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/analysis.json
        schema_path: .wave/contracts/explain-analysis.schema.json
        on_failure: retry
        max_retries: 2

  - id: document
    persona: philosopher
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: codebase_map
        - step: analyze
          artifact: analysis
          as: architecture
    exec:
      type: prompt
      source: |
        Write a comprehensive explanation document.

        Review the injected exploration and architecture data, then produce a markdown document with:

        1. **Overview** — One paragraph summary
        2. **Key Concepts** — Core abstractions and terminology (glossary)
        3. **Architecture** — How pieces fit together (include ASCII diagram)
        4. **How It Works** — Step-by-step main execution flow with file:line refs
        5. **Design Decisions** — Decision → Rationale → Trade-off entries
        6. **Extension Guide** — How to add new functionality
        7. **Testing Strategy** — How the code is tested
        8. **Common Pitfalls** — Things that trip people up

        Write for an experienced developer new to this codebase.
        Use real file paths, function names, and type names.
    output_artifacts:
      - name: explanation
        path: .wave/output/explanation.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/explanation.md
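Before the `json_schema` contracts above can validate an artifact against its schema file, the artifact must at least be well-formed JSON. A sketch of that first-stage check using only the shell and Python's standard library (the schema validation itself would need a validator tool and is omitted here):

```shell
#!/bin/sh
# First-stage check for a json_schema contract: is the artifact
# parseable JSON at all? Exit status 0 means yes.
is_valid_json() {
  python3 -c 'import json, sys; json.load(open(sys.argv[1]))' "$1" 2>/dev/null
}

printf '{"key_files": []}' > /tmp/analysis.json
if is_valid_json /tmp/analysis.json; then
  echo "well-formed"
fi
```

Running this as a cheap pre-check gives a clearer failure mode ("not JSON") than letting a schema validator report a parse error.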
178
.wave/pipelines/gh-pr-review.yaml
Normal file
@@ -0,0 +1,178 @@
kind: WavePipeline
metadata:
  name: gh-pr-review
  description: "GitHub pull request code review with automated security and quality analysis"
  release: true

input:
  source: cli
  example: "review the authentication module"

steps:
  - id: diff-analysis
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze the code changes for: {{ input }}

        1. Identify all modified files and their purposes
        2. Map the change scope (which modules/packages affected)
        3. Find related tests that should be updated
        4. Check for breaking API changes

        Produce a structured result matching the contract schema.
    output_artifacts:
      - name: diff
        path: .wave/output/diff-analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/diff-analysis.json
        schema_path: .wave/contracts/diff-analysis.schema.json
        on_failure: retry
        max_retries: 2

  - id: security-review
    persona: reviewer
    dependencies: [diff-analysis]
    memory:
      inject_artifacts:
        - step: diff-analysis
          artifact: diff
          as: changes
    exec:
      type: prompt
      source: |
        Security review of the PR changes.

        Check for:
        1. SQL injection, XSS, CSRF vulnerabilities
        2. Hardcoded secrets or credentials
        3. Insecure deserialization
        4. Missing input validation
        5. Authentication/authorization gaps
        6. Sensitive data exposure

        Output findings with severity (CRITICAL/HIGH/MEDIUM/LOW).
    output_artifacts:
      - name: security
        path: .wave/output/security-review.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/security-review.md

  - id: quality-review
    persona: reviewer
    dependencies: [diff-analysis]
    memory:
      inject_artifacts:
        - step: diff-analysis
          artifact: diff
          as: changes
    exec:
      type: prompt
      source: |
        Quality review of the PR changes.

        Check for:
        1. Error handling completeness
        2. Edge cases not covered
        3. Code duplication
        4. Naming consistency
        5. Missing or inadequate tests
        6. Performance implications
        7. Documentation gaps

        Output findings with severity and suggestions.
    output_artifacts:
      - name: quality
        path: .wave/output/quality-review.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/quality-review.md

  - id: summary
    persona: summarizer
    dependencies: [security-review, quality-review]
    memory:
      inject_artifacts:
        - step: security-review
          artifact: security
          as: security_findings
        - step: quality-review
          artifact: quality
          as: quality_findings
    exec:
      type: prompt
      source: |
        Synthesize the review findings into a final verdict.

        Produce a unified review with:
        1. Overall assessment (APPROVE / REQUEST_CHANGES / NEEDS_DISCUSSION)
        2. Critical issues that must be fixed
        3. Suggested improvements (optional but recommended)
        4. Positive observations

        Format as a PR review comment ready to post.
        Do NOT include a title/header line — the publish step adds one.
    output_artifacts:
      - name: verdict
        path: .wave/output/review-summary.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/review-summary.md

  - id: publish
    persona: github-commenter
    dependencies: [summary]
    memory:
      inject_artifacts:
        - step: summary
          artifact: verdict
          as: review_summary
    exec:
      type: prompt
      source: |
        Post the code review summary as a PR comment.

        The original input was: {{ input }}
        Extract the PR number or URL from the input.

        1. Post the review as a PR comment using:
           gh pr comment <PR_NUMBER_OR_URL> --body "## Code Review (Wave Pipeline)

           <review content>

           ---
           *Generated by [Wave](https://github.com/re-cinq/wave) gh-pr-review pipeline*"

    output_artifacts:
      - name: publish-result
        path: .wave/output/publish-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/publish-result.json
        schema_path: .wave/contracts/gh-pr-comment-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2
    outcomes:
      - type: url
        extract_from: .wave/output/publish-result.json
        json_path: .comment_url
        label: "Review Comment"
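The `outcomes` block above extracts `.comment_url` from the publish result. A sketch of that extraction using Python's stdlib in place of Wave's extractor (which presumably performs a JSONPath-style lookup); the helper name and sample URL are illustrative:

```shell
#!/bin/sh
# Pull a single top-level field out of a JSON artifact, mimicking
# the outcomes json_path extraction for ".comment_url".
extract_field() {
  python3 -c 'import json, sys; print(json.load(open(sys.argv[1]))[sys.argv[2]])' "$1" "$2"
}

printf '{"comment_url": "https://github.com/o/r/pull/1#issuecomment-9"}' \
  > /tmp/publish-result.json
extract_field /tmp/publish-result.json comment_url
```

A nested `json_path` would need a dotted-path walk rather than a single dictionary lookup, but for a flat result artifact this is the whole operation.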
184
.wave/pipelines/gh-refresh.yaml
Normal file
@@ -0,0 +1,184 @@
kind: WavePipeline
metadata:
  name: gh-refresh
  description: "Refresh a stale GitHub issue by comparing it against recent codebase changes"
  release: true

input:
  source: cli
  example: "re-cinq/wave 45 -- acceptance criteria are outdated after the worktree refactor"
  schema:
    type: string
    description: "owner/repo number [-- optional criticism or direction]"

steps:
  - id: gather-context
    persona: github-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        MANDATORY: You MUST call the Bash tool. NEVER say "gh CLI not installed" without trying.

        Input: {{ input }}

        Parse the input:
        - Split on " -- " to separate the repo+number from optional criticism.
        - The first part is "<owner/repo> <number>". Extract REPO (first token) and NUMBER (second token).
        - If there is text after " -- ", that is the user's CRITICISM about what's wrong with the issue.
        - If there is no " -- ", criticism is empty.

        Execute these commands using the Bash tool:

        1. gh --version

        2. Fetch the full issue:
           gh issue view NUMBER --repo REPO --json number,title,body,labels,url,createdAt,comments

        3. Get commits since the issue was created (cap at 100):
           git log --since="<createdAt>" --oneline -100

        4. Get releases since the issue was created:
           gh release list --repo REPO --limit 20
           Then filter to only releases after the issue's createdAt date.

        5. Scan the issue body for file path references (anything matching patterns like
           `internal/...`, `cmd/...`, `.wave/...`, or backtick-quoted paths).
           For each referenced file, check if it still exists using `ls -la <path>`.

        6. Read CLAUDE.md for current project context:
           Read the file CLAUDE.md from the repository root.

        After gathering ALL data, produce a JSON result matching the contract schema.
    output_artifacts:
      - name: issue_context
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/issue-update-context.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: draft-update
    persona: github-analyst
    dependencies: [gather-context]
    memory:
      inject_artifacts:
        - step: gather-context
          artifact: issue_context
          as: context
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        MANDATORY: You MUST call the Bash tool for any commands. NEVER generate fake output.

        The context artifact contains the gathered issue context.

        Your task: Compare the original issue against the codebase changes and draft an updated version.

        Step 1: Analyze each section of the issue body. Classify each as:
        - STILL_VALID: Content is accurate and up-to-date
        - OUTDATED: Content references old behavior, removed files, or superseded patterns
        - INCOMPLETE: Content is partially correct but missing recent developments
        - WRONG: Content is factually incorrect given current codebase state

        Step 2: If there is user criticism (non-empty "criticism" field), address EVERY point raised.
        The criticism takes priority — it represents what the issue author thinks is wrong.

        Step 3: Draft the updated issue:
        - Preserve sections classified as STILL_VALID (do not rewrite what works)
        - Rewrite OUTDATED and WRONG sections to reflect current reality
        - Expand INCOMPLETE sections with missing information
        - If the title needs updating, draft a new title
        - Append a "---\n**Changes since original**" section at the bottom listing what changed and why

        Step 4: If file paths in the issue body are now missing (from referenced_files.missing),
        update or remove those references.

        Produce a JSON result matching the contract schema.
    output_artifacts:
      - name: update_draft
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/issue-update-draft.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: apply-update
    persona: github-enhancer
    dependencies: [draft-update]
    memory:
      inject_artifacts:
        - step: draft-update
          artifact: update_draft
          as: draft
        - step: gather-context
          artifact: issue_context
          as: context
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        CRITICAL: You MUST use the Bash tool for all commands. Do NOT generate fake output.

        Step 1: Use Bash tool to verify gh works:
        gh --version

        Step 2: Extract the repo as "<owner>/<name>" and the issue number from the available artifacts.

        Step 3: Apply the update:
        - If title_changed is true:
          gh issue edit <NUMBER> --repo <REPO> --title "<updated_title>"
        - Write the updated_body to a temp file, then apply it:
          Write updated_body to /tmp/issue-body.md
          gh issue edit <NUMBER> --repo <REPO> --body-file /tmp/issue-body.md
        - Clean up /tmp/issue-body.md after applying.

        Step 4: Verify the update was applied:
        gh issue view <NUMBER> --repo <REPO> --json number,title,body,url

        Compare the returned title and body against what was intended. Flag any discrepancies.

        Step 5: Record the results as a JSON object matching the contract schema.
    output_artifacts:
      - name: update_result
        path: .wave/artifact.json
        type: json
        required: true
    outcomes:
      - type: issue
        extract_from: .wave/artifact.json
        json_path: .url
        label: "Updated Issue"
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/issue-update-result.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
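The input-parsing rules in the gather-context step above (split on " -- ", then take the first token as REPO and the second as NUMBER) can be sketched with POSIX parameter expansion; the function name is illustrative:

```shell
#!/bin/sh
# Parse "<owner/repo> <number> [-- criticism]" into three variables.
parse_refresh_input() {
  input="$1"
  case "$input" in
    *" -- "*) CRITICISM="${input#* -- }" ;;   # text after the separator
    *)        CRITICISM="" ;;                 # no separator: no criticism
  esac
  head="${input%% -- *}"        # "<owner/repo> <number>"
  REPO="${head%% *}"            # first token
  NUMBER="${head#* }"           # second token
}

parse_refresh_input "re-cinq/wave 45 -- acceptance criteria are outdated"
echo "$REPO"        # prints: re-cinq/wave
echo "$NUMBER"      # prints: 45
echo "$CRITICISM"   # prints: acceptance criteria are outdated
```

`%% -- *` strips the longest suffix starting at " -- ", and `#* -- ` strips the shortest prefix ending at it, so the split matches the prompt's "split on \" -- \"" rule exactly.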
255
.wave/pipelines/gh-research.yaml
Normal file
@@ -0,0 +1,255 @@
kind: WavePipeline
metadata:
  name: gh-research
  description: Research a GitHub issue and post findings as a comment
  release: true

input:
  source: cli
  example: "re-cinq/wave 42"
  schema:
    type: string
    description: "GitHub repository and issue number (e.g. 'owner/repo number')"

steps:
  - id: fetch-issue
    persona: github-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Fetch the GitHub issue specified in the input: {{ input }}

        The input format is "owner/repo issue_number" (e.g., "re-cinq/CFOAgent 112").

        Parse the input to extract the repository and issue number.
        Use the gh CLI to fetch the issue:

        gh issue view <number> --repo <owner/repo> --json number,title,body,labels,state,author,createdAt,url,comments

        Parse the output and produce structured JSON with the issue content.
        Include repository information in the output.
    output_artifacts:
      - name: issue-content
        path: .wave/output/issue-content.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/issue-content.json
        schema_path: .wave/contracts/issue-content.schema.json
        on_failure: retry
        max_retries: 3

  - id: analyze-topics
    persona: researcher
    dependencies: [fetch-issue]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Analyze the GitHub issue and extract research topics.

        Identify:
        1. Key technical questions that need external research
        2. Domain concepts that require clarification
        3. External dependencies, libraries, or tools to investigate
        4. Similar problems/solutions that might provide guidance

        For each topic, provide:
        - A unique ID (TOPIC-001, TOPIC-002, etc.)
        - A clear title
        - Specific questions to answer (1-5 questions per topic)
        - Search keywords for web research
        - Priority (critical/high/medium/low based on relevance to solving the issue)
        - Category (technical/documentation/best_practices/security/performance/compatibility/other)

        Focus on topics that will provide actionable insights for the issue author.
        Limit to 10 most important topics.
    output_artifacts:
      - name: topics
        path: .wave/output/research-topics.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-topics.json
        schema_path: .wave/contracts/research-topics.schema.json
        on_failure: retry
        max_retries: 2

  - id: research-topics
    persona: researcher
    dependencies: [analyze-topics]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
        - step: analyze-topics
          artifact: topics
          as: research_plan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Research the topics identified in the research plan.

        For each topic in the research plan:
        1. Execute web searches using the provided keywords
        2. Evaluate source credibility (official docs > authoritative > community)
        3. Extract relevant findings with key points
        4. Include direct quotes where helpful
        5. Rate your confidence in the answer (high/medium/low/inconclusive)

        For each finding:
        - Assign a unique ID (FINDING-001, FINDING-002, etc.)
        - Provide a summary (20-2000 characters)
        - List key points as bullet items
        - Include source URL, title, and type
        - Rate relevance to the topic (0-1)

        Always include source URLs for attribution.
        If a topic yields no useful results, mark confidence as "inconclusive".
        Document any gaps in the research.
    output_artifacts:
      - name: findings
        path: .wave/output/research-findings.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-findings.json
        schema_path: .wave/contracts/research-findings.schema.json
        on_failure: retry
        max_retries: 2

  - id: synthesize-report
    persona: summarizer
    dependencies: [research-topics]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: original_issue
        - step: research-topics
          artifact: findings
          as: research
    workspace:
      type: worktree
|
||||
branch: "{{ pipeline_id }}"
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Synthesize the research findings into a coherent report for the GitHub issue.
|
||||
|
||||
Create a well-structured research report that includes:
|
||||
|
||||
1. Executive Summary:
|
||||
- Brief overview (50-1000 chars)
|
||||
- Key findings (1-7 bullet points)
|
||||
- Primary recommendation
|
||||
- Confidence assessment (high/medium/low)
|
||||
|
||||
2. Detailed Findings:
|
||||
- Organize by topic/section
|
||||
- Include code examples where relevant
|
||||
- Reference sources using SRC-### IDs
|
||||
|
||||
3. Recommendations:
|
||||
- Actionable items with IDs (REC-001, REC-002, etc.)
|
||||
- Priority and effort estimates
|
||||
- Maximum 10 recommendations
|
||||
|
||||
4. Sources:
|
||||
- List all sources with IDs (SRC-001, SRC-002, etc.)
|
||||
- Include URL, title, type, and reliability
|
||||
|
||||
5. Pre-rendered Markdown:
|
||||
- Generate complete markdown_content field ready for GitHub comment
|
||||
- Use proper headers, bullet points, and formatting
|
||||
- Include a header: "## Research Findings (Wave Pipeline)"
|
||||
- End with sources section
|
||||
output_artifacts:
|
||||
- name: report
|
||||
path: .wave/output/research-report.json
|
||||
type: json
|
||||
handover:
|
||||
contract:
|
||||
type: json_schema
|
||||
source: .wave/output/research-report.json
|
||||
schema_path: .wave/contracts/research-report.schema.json
|
||||
on_failure: retry
|
||||
max_retries: 2
|
||||
|
||||
- id: post-comment
|
||||
persona: github-commenter
|
||||
dependencies: [synthesize-report]
|
||||
memory:
|
||||
inject_artifacts:
|
||||
- step: fetch-issue
|
||||
artifact: issue-content
|
||||
as: issue
|
||||
- step: synthesize-report
|
||||
artifact: report
|
||||
as: report
|
||||
workspace:
|
||||
type: worktree
|
||||
branch: "{{ pipeline_id }}"
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Post the research report as a comment on the GitHub issue.
|
||||
|
||||
Steps:
|
||||
1. Read the issue details to get the repository and issue number
|
||||
2. Read the report to get the markdown_content
|
||||
3. Write the markdown content to a file, then use gh CLI to post the comment:
|
||||
|
||||
# Write to file to avoid shell escaping issues with large markdown
|
||||
cat > /tmp/comment-body.md << 'COMMENT_EOF'
|
||||
<markdown_content>
|
||||
COMMENT_EOF
|
||||
|
||||
gh issue comment <number> --repo <owner/repo> --body-file /tmp/comment-body.md
|
||||
|
||||
4. Add a footer to the comment:
|
||||
---
|
||||
*Generated by [Wave](https://github.com/re-cinq/wave) issue-research pipeline*
|
||||
|
||||
5. Capture the result and verify success
|
||||
6. If successful, extract the comment URL from the output
|
||||
|
||||
Record the result with:
|
||||
- success: true/false
|
||||
- issue_reference: issue number and repository
|
||||
- comment: id, url, body_length (if successful)
|
||||
- error: code, message, retryable (if failed)
|
||||
- timestamp: current time
|
||||
output_artifacts:
|
||||
- name: comment-result
|
||||
path: .wave/output/comment-result.json
|
||||
type: json
|
||||
outcomes:
|
||||
- type: url
|
||||
extract_from: .wave/output/comment-result.json
|
||||
json_path: .comment.url
|
||||
label: "Research Comment"
|
||||
handover:
|
||||
contract:
|
||||
type: json_schema
|
||||
source: .wave/output/comment-result.json
|
||||
schema_path: .wave/contracts/comment-result.schema.json
|
||||
on_failure: retry
|
||||
max_retries: 3
|
||||
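As a standalone illustration of the post-comment step above (not part of the pipeline file), the body-file pattern can be sketched as follows. The report JSON is invented sample data, and the final `gh` call is echoed rather than executed since it needs authentication and a real issue:

```shell
# Sketch of the post-comment step with hypothetical sample data.
set -eu

mkdir -p .wave/output
cat > .wave/output/research-report.json << 'EOF'
{"markdown_content": "## Research Findings (Wave Pipeline)\n\nExample body."}
EOF

# Extract markdown_content into the comment body file, avoiding
# shell-escaping issues with large markdown:
python3 -c 'import json; print(json.load(open(".wave/output/research-report.json"))["markdown_content"])' > /tmp/comment-body.md

# Append the footer:
printf '\n---\n*Generated by [Wave](https://github.com/re-cinq/wave) issue-research pipeline*\n' >> /tmp/comment-body.md

# The command the step would run (issue number and repo are placeholders):
echo gh issue comment 42 --repo owner/repo --body-file /tmp/comment-body.md
```

Routing the markdown through `--body-file` sidesteps quoting problems that inline `--body` strings hit with backticks and newlines.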
99 .wave/pipelines/gh-rewrite.yaml Normal file
@@ -0,0 +1,99 @@
kind: WavePipeline
metadata:
  name: gh-rewrite
  description: "Analyze and rewrite poorly documented GitHub issues"
  release: true

input:
  source: cli
  example: "re-cinq/wave 42 or https://github.com/re-cinq/wave/issues/42"
  schema:
    type: string
    description: "GitHub repo with optional issue number, or full issue URL"

steps:
  - id: scan-and-score
    persona: github-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Input: {{ input }}

        Step 1: Parse the input format.
        - If URL (https://github.com/OWNER/REPO/issues/NUM) → extract <REPO> and <NUM>
        - If "owner/repo NUM" → extract <REPO> and <NUM>
        - If "owner/repo" alone → batch mode, use {{ input }} as <REPO>

        Step 2: Fetch issues via the gh CLI.
        - Single: gh issue view <NUM> --repo <REPO> --json number,title,body,labels,url
        - Batch: gh issue list --repo {{ input }} --limit 10 --json number,title,body,labels,url

        Step 3: Score each issue's quality (0-100) on title clarity, description completeness, labels, and acceptance criteria.

        Step 4: For issues scoring below 70, create an enhancement plan with:
        - suggested_title, body_template (preserving original content), suggested_labels, enhancements list

        Output JSON with repository (owner/repo string), issues_to_enhance array, and total_to_enhance.
    output_artifacts:
      - name: enhancement_plan
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/github-enhancement-plan.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: apply-enhancements
    persona: github-enhancer
    dependencies: [scan-and-score]
    memory:
      inject_artifacts:
        - step: scan-and-score
          artifact: enhancement_plan
          as: plan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Read the "repository" field from the plan artifact to get <REPO>.

        For each issue in issues_to_enhance:
        1. Apply title: gh issue edit <NUM> --repo <REPO> --title "suggested_title"
        2. Apply body: gh issue edit <NUM> --repo <REPO> --body "body_template"
        3. Add labels: gh issue edit <NUM> --repo <REPO> --add-label "label1,label2"
        4. Capture URL: gh issue view <NUM> --repo <REPO> --json url --jq .url

        Output JSON with enhanced_issues (issue_number, success, changes_made, url),
        total_attempted, total_successful.
    output_artifacts:
      - name: enhancement_results
        path: .wave/artifact.json
        type: json
        required: true
    outcomes:
      - type: issue
        extract_from: .wave/artifact.json
        json_path: .enhanced_issues[0].url
        label: "Enhanced Issue"
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/github-enhancement-results.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
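The three input formats accepted by scan-and-score can be disambiguated with plain bash. This is an illustrative sketch outside the pipeline file; the function name is hypothetical:

```shell
#!/usr/bin/env bash
# Sketch: classify the gh-rewrite input into "repo [issue-number]".
parse_input() {
  local input="$1" repo num=""
  if [[ "$input" =~ ^https://github\.com/([^/]+/[^/]+)/issues/([0-9]+)$ ]]; then
    repo="${BASH_REMATCH[1]}"; num="${BASH_REMATCH[2]}"   # full issue URL
  elif [[ "$input" =~ ^([^/[:space:]]+/[^[:space:]]+)[[:space:]]+([0-9]+)$ ]]; then
    repo="${BASH_REMATCH[1]}"; num="${BASH_REMATCH[2]}"   # "owner/repo NUM"
  else
    repo="$input"                                         # "owner/repo" → batch mode
  fi
  echo "${repo}${num:+ $num}"
}

parse_input "https://github.com/re-cinq/wave/issues/42"   # re-cinq/wave 42
parse_input "re-cinq/wave 42"                             # re-cinq/wave 42
parse_input "re-cinq/wave"                                # re-cinq/wave
```

Matching the URL form first keeps the two-token form from ever seeing a URL, so batch mode is simply the fallthrough case.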
173 .wave/pipelines/gh-scope.yaml Normal file
@@ -0,0 +1,173 @@
kind: WavePipeline
metadata:
  name: gh-scope
  description: "Decompose a GitHub epic into well-scoped child issues"
  release: true

input:
  source: cli
  example: "re-cinq/wave 184"
  schema:
    type: string
    description: "GitHub repository with epic issue number (e.g. 'owner/repo 42')"

steps:
  - id: fetch-epic
    persona: github-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        MANDATORY: You MUST call the Bash tool. NEVER say "gh CLI not installed" without trying.

        Input: {{ input }}

        Parse the input: extract the repo (first token) and the epic issue number (second token).

        Execute these commands using the Bash tool:

        1. gh --version

        2. Fetch the epic issue with full details:
           gh issue view <NUMBER> --repo <REPO> --json number,title,body,labels,url,comments,author,state

        3. List existing open issues to check for duplicates:
           gh issue list --repo <REPO> --limit 50 --json number,title,labels,url

        After getting REAL results from Bash, analyze the epic:
        - Determine if this is truly an epic/umbrella issue (contains multiple work items)
        - Identify the key themes and work areas
        - Estimate overall complexity
        - Count how many sub-issues should be created (3-10)
        - List existing issues to avoid creating duplicates
    output_artifacts:
      - name: epic_assessment
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/epic-assessment.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: scope-and-create
    persona: github-scoper
    dependencies: [fetch-epic]
    memory:
      inject_artifacts:
        - step: fetch-epic
          artifact: epic_assessment
          as: assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        CRITICAL: You MUST use the Bash tool for all commands. Do NOT generate fake output.

        The assessment artifact contains the epic analysis. Use it to create well-scoped child issues.

        Input: {{ input }}
        Parse the repo from the input (first token).

        Step 1: Verify gh works:
        gh --version

        Step 2: For each planned sub-issue, create it using:
        gh issue create --repo <REPO> --title "<title>" --body "<body>" --label "<labels>"

        Each sub-issue body MUST include:
        - A "Parent: #<epic_number>" reference line
        - A clear Summary section
        - Acceptance Criteria as a checkbox list
        - Dependencies on other sub-issues if applicable
        - Scope Notes for what is explicitly excluded

        Step 3: After creating all issues, capture each issue's number and URL from the creation output.

        Step 4: Record the results with fields: parent_issue (number, url, repository),
        created_issues (array of number, title, url, labels, success, complexity, dependencies),
        total_created, total_failed.
    output_artifacts:
      - name: scope_plan
        path: .wave/artifact.json
        type: json
        required: true
    outcomes:
      - type: issue
        extract_from: .wave/artifact.json
        json_path: .created_issues[0].url
        label: "First Sub-Issue"
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/scope-plan.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: verify-report
    persona: github-analyst
    dependencies: [scope-and-create]
    memory:
      inject_artifacts:
        - step: scope-and-create
          artifact: scope_plan
          as: results
        - step: fetch-epic
          artifact: epic_assessment
          as: assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Verify the created sub-issues and post a summary comment on the epic.

        Input: {{ input }}
        Parse the repo (first token) and epic number (second token).

        Step 1: For each created issue in the results, verify it exists:
        gh issue view <N> --repo <REPO> --json number,title,body,labels

        Check that each issue:
        - Exists and is open
        - Has acceptance criteria in the body
        - References the parent epic

        Step 2: Post a summary comment on the epic issue listing all created sub-issues:
        Create a markdown summary with a checklist of all sub-issues (- [ ] #<number> <title>)
        and post it using: gh issue comment <EPIC_NUMBER> --repo <REPO> --body "<summary>"

        Step 3: Compile the verification report with fields:
        parent_issue (number, url), verified_issues (array of number, title, url, exists,
        has_acceptance_criteria, references_parent), summary (total_verified, total_valid,
        total_issues_created, comment_posted, comment_url).
    output_artifacts:
      - name: scope_report
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/scope-report.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
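To make the checklist-building part of verify-report concrete, here is a sketch outside the pipeline file that turns a scope-and-create artifact into the summary comment body. The artifact contents below are invented sample data, not real pipeline output:

```shell
# Sketch: build the epic summary checklist from a created-issues artifact.
# The JSON below is invented sample data.
set -eu
cat > /tmp/artifact.json << 'EOF'
{"created_issues": [
  {"number": 185, "title": "Extract worktree manager"},
  {"number": 186, "title": "Add scope contract"}
]}
EOF

python3 - << 'EOF'
import json
issues = json.load(open("/tmp/artifact.json"))["created_issues"]
lines = ["Created sub-issues:", ""]
lines += [f"- [ ] #{i['number']} {i['title']}" for i in issues]
open("/tmp/epic-summary.md", "w").write("\n".join(lines) + "\n")
EOF

cat /tmp/epic-summary.md
# The step would then post it with something like:
# gh issue comment <EPIC_NUMBER> --repo <REPO> --body-file /tmp/epic-summary.md
```

Writing the summary to a file and posting with `--body-file` avoids quoting the checklist inline in `--body "<summary>"`.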
184 .wave/pipelines/gt-refresh.yaml Normal file
@@ -0,0 +1,184 @@
kind: WavePipeline
metadata:
  name: gt-refresh
  description: "Refresh a stale Gitea issue by comparing it against recent codebase changes"
  release: true

input:
  source: cli
  example: "re-cinq/wave 45 -- acceptance criteria are outdated after the worktree refactor"
  schema:
    type: string
    description: "owner/repo number [-- optional criticism or direction]"

steps:
  - id: gather-context
    persona: gitea-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        MANDATORY: You MUST call the Bash tool. NEVER say "tea CLI not installed" without trying.

        Input: {{ input }}

        Parse the input:
        - Split on " -- " to separate the repo+number from optional criticism.
        - The first part is "<owner/repo> <number>". Extract REPO (first token) and NUMBER (second token).
        - If there is text after " -- ", that is the user's CRITICISM about what's wrong with the issue.
        - If there is no " -- ", criticism is empty.

        Execute these commands using the Bash tool:

        1. tea --version

        2. Fetch the full issue:
           tea issues view NUMBER --repo REPO --json number,title,body,labels,url,createdAt,comments

        3. Get commits since the issue was created (cap at 100):
           git log --since="<createdAt>" --oneline -100

        4. Get releases since the issue was created:
           tea releases list --repo REPO --limit 20
           Then filter to only releases after the issue's createdAt date.

        5. Scan the issue body for file path references (anything matching patterns like
           `internal/...`, `cmd/...`, `.wave/...`, or backtick-quoted paths).
           For each referenced file, check if it still exists using `ls -la <path>`.

        6. Read CLAUDE.md for current project context:
           Read the file CLAUDE.md from the repository root.

        After gathering ALL data, produce a JSON result matching the contract schema.
    output_artifacts:
      - name: issue_context
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/issue-update-context.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: draft-update
    persona: gitea-analyst
    dependencies: [gather-context]
    memory:
      inject_artifacts:
        - step: gather-context
          artifact: issue_context
          as: context
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        MANDATORY: You MUST call the Bash tool for any commands. NEVER generate fake output.

        The context artifact contains the gathered issue context.

        Your task: Compare the original issue against the codebase changes and draft an updated version.

        Step 1: Analyze each section of the issue body. Classify each as:
        - STILL_VALID: Content is accurate and up-to-date
        - OUTDATED: Content references old behavior, removed files, or superseded patterns
        - INCOMPLETE: Content is partially correct but missing recent developments
        - WRONG: Content is factually incorrect given current codebase state

        Step 2: If there is user criticism (non-empty "criticism" field), address EVERY point raised.
        The criticism takes priority — it represents what the issue author thinks is wrong.

        Step 3: Draft the updated issue:
        - Preserve sections classified as STILL_VALID (do not rewrite what works)
        - Rewrite OUTDATED and WRONG sections to reflect current reality
        - Expand INCOMPLETE sections with missing information
        - If the title needs updating, draft a new title
        - Append a "---\n**Changes since original**" section at the bottom listing what changed and why

        Step 4: If file paths in the issue body are now missing (from referenced_files.missing),
        update or remove those references.

        Produce a JSON result matching the contract schema.
    output_artifacts:
      - name: update_draft
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/issue-update-draft.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: apply-update
    persona: gitea-enhancer
    dependencies: [draft-update]
    memory:
      inject_artifacts:
        - step: draft-update
          artifact: update_draft
          as: draft
        - step: gather-context
          artifact: issue_context
          as: context
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        CRITICAL: You MUST use the Bash tool for all commands. Do NOT generate fake output.

        Step 1: Use the Bash tool to verify tea works:
        tea --version

        Step 2: Extract the repo as "<owner>/<name>" and the issue number from the available artifacts.

        Step 3: Apply the update:
        - If title_changed is true:
          tea issues edit <NUMBER> --repo <REPO> --title "<updated_title>"
        - Write the updated_body to a temp file, then apply it:
          Write updated_body to /tmp/issue-body.md
          tea issues edit <NUMBER> --repo <REPO> --body-file /tmp/issue-body.md
        - Clean up /tmp/issue-body.md after applying.

        Step 4: Verify the update was applied:
        tea issues view <NUMBER> --repo <REPO> --json number,title,body,url

        Compare the returned title and body against what was intended. Flag any discrepancies.

        Step 5: Record the results as a JSON object matching the contract schema.
    output_artifacts:
      - name: update_result
        path: .wave/artifact.json
        type: json
        required: true
    outcomes:
      - type: issue
        extract_from: .wave/artifact.json
        json_path: .url
        label: "Updated Issue"
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/issue-update-result.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
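The gather-context input contract ("owner/repo number [-- optional criticism]") splits cleanly with POSIX parameter expansion. A sketch using this pipeline's own example input:

```shell
# Sketch: parse "owner/repo number [-- criticism]" as gather-context describes.
input="re-cinq/wave 45 -- acceptance criteria are outdated after the worktree refactor"

case "$input" in
  *" -- "*)
    head_part="${input%% -- *}"      # everything before " -- "
    criticism="${input#* -- }"       # everything after " -- "
    ;;
  *)
    head_part="$input"; criticism=""
    ;;
esac
repo="${head_part%% *}"              # first token
number="${head_part##* }"            # second token

echo "repo=$repo number=$number"
echo "criticism=$criticism"
```

Because the separator is the three-character token " -- " rather than a bare dash, criticism text containing hyphens still parses correctly.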
255 .wave/pipelines/gt-research.yaml Normal file
@@ -0,0 +1,255 @@
kind: WavePipeline
metadata:
  name: gt-research
  description: Research a Gitea issue and post findings as a comment
  release: true

input:
  source: cli
  example: "re-cinq/wave 42"
  schema:
    type: string
    description: "Gitea repository and issue number (e.g. 'owner/repo number')"

steps:
  - id: fetch-issue
    persona: gitea-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Fetch the Gitea issue specified in the input: {{ input }}

        The input format is "owner/repo issue_number" (e.g., "re-cinq/CFOAgent 112").

        Parse the input to extract the repository and issue number.
        Use the tea CLI to fetch the issue:

        tea issues view <number> --repo <owner/repo> --json number,title,body,labels,state,author,createdAt,url,comments

        Parse the output and produce structured JSON with the issue content.
        Include repository information in the output.
    output_artifacts:
      - name: issue-content
        path: .wave/output/issue-content.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/issue-content.json
        schema_path: .wave/contracts/issue-content.schema.json
      on_failure: retry
      max_retries: 3

  - id: analyze-topics
    persona: researcher
    dependencies: [fetch-issue]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Analyze the Gitea issue and extract research topics.

        Identify:
        1. Key technical questions that need external research
        2. Domain concepts that require clarification
        3. External dependencies, libraries, or tools to investigate
        4. Similar problems/solutions that might provide guidance

        For each topic, provide:
        - A unique ID (TOPIC-001, TOPIC-002, etc.)
        - A clear title
        - Specific questions to answer (1-5 questions per topic)
        - Search keywords for web research
        - Priority (critical/high/medium/low based on relevance to solving the issue)
        - Category (technical/documentation/best_practices/security/performance/compatibility/other)

        Focus on topics that will provide actionable insights for the issue author.
        Limit to the 10 most important topics.
    output_artifacts:
      - name: topics
        path: .wave/output/research-topics.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-topics.json
        schema_path: .wave/contracts/research-topics.schema.json
      on_failure: retry
      max_retries: 2

  - id: research-topics
    persona: researcher
    dependencies: [analyze-topics]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
        - step: analyze-topics
          artifact: topics
          as: research_plan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Research the topics identified in the research plan.

        For each topic in the research plan:
        1. Execute web searches using the provided keywords
        2. Evaluate source credibility (official docs > authoritative > community)
        3. Extract relevant findings with key points
        4. Include direct quotes where helpful
        5. Rate your confidence in the answer (high/medium/low/inconclusive)

        For each finding:
        - Assign a unique ID (FINDING-001, FINDING-002, etc.)
        - Provide a summary (20-2000 characters)
        - List key points as bullet items
        - Include source URL, title, and type
        - Rate relevance to the topic (0-1)

        Always include source URLs for attribution.
        If a topic yields no useful results, mark confidence as "inconclusive".
        Document any gaps in the research.
    output_artifacts:
      - name: findings
        path: .wave/output/research-findings.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-findings.json
        schema_path: .wave/contracts/research-findings.schema.json
      on_failure: retry
      max_retries: 2

  - id: synthesize-report
    persona: summarizer
    dependencies: [research-topics]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: original_issue
        - step: research-topics
          artifact: findings
          as: research
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Synthesize the research findings into a coherent report for the Gitea issue.

        Create a well-structured research report that includes:

        1. Executive Summary:
           - Brief overview (50-1000 chars)
           - Key findings (1-7 bullet points)
           - Primary recommendation
           - Confidence assessment (high/medium/low)

        2. Detailed Findings:
           - Organize by topic/section
           - Include code examples where relevant
           - Reference sources using SRC-### IDs

        3. Recommendations:
           - Actionable items with IDs (REC-001, REC-002, etc.)
           - Priority and effort estimates
           - Maximum 10 recommendations

        4. Sources:
           - List all sources with IDs (SRC-001, SRC-002, etc.)
           - Include URL, title, type, and reliability

        5. Pre-rendered Markdown:
           - Generate a complete markdown_content field ready for a Gitea comment
           - Use proper headers, bullet points, and formatting
           - Include a header: "## Research Findings (Wave Pipeline)"
           - End with a sources section
    output_artifacts:
      - name: report
        path: .wave/output/research-report.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-report.json
        schema_path: .wave/contracts/research-report.schema.json
      on_failure: retry
      max_retries: 2

  - id: post-comment
    persona: gitea-commenter
    dependencies: [synthesize-report]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
        - step: synthesize-report
          artifact: report
          as: report
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Post the research report as a comment on the Gitea issue.

        Steps:
        1. Read the issue details to get the repository and issue number
        2. Read the report to get the markdown_content
        3. Write the markdown content to a file, then use the tea CLI to post the comment:

           # Write to a file to avoid shell escaping issues with large markdown
           cat > /tmp/comment-body.md << 'COMMENT_EOF'
           <markdown_content>
           COMMENT_EOF

           tea issues comment <number> --repo <owner/repo> --body-file /tmp/comment-body.md

        4. Add a footer to the comment:
           ---
           *Generated by [Wave](https://github.com/re-cinq/wave) issue-research pipeline*

        5. Capture the result and verify success
        6. If successful, extract the comment URL from the output

        Record the result with:
        - success: true/false
        - issue_reference: issue number and repository
        - comment: id, url, body_length (if successful)
        - error: code, message, retryable (if failed)
        - timestamp: current time
    output_artifacts:
      - name: comment-result
        path: .wave/output/comment-result.json
        type: json
    outcomes:
      - type: url
        extract_from: .wave/output/comment-result.json
        json_path: .comment.url
        label: "Research Comment"
    handover:
      contract:
        type: json_schema
        source: .wave/output/comment-result.json
        schema_path: .wave/contracts/comment-result.schema.json
      on_failure: retry
      max_retries: 3
98 .wave/pipelines/gt-rewrite.yaml Normal file
@@ -0,0 +1,98 @@
kind: WavePipeline
metadata:
  name: gt-rewrite
  description: "Analyze and rewrite poorly documented Gitea issues"
  release: true

input:
  source: cli
  example: "re-cinq/wave 42"
  schema:
    type: string
    description: "Gitea repo with optional issue number"

steps:
  - id: scan-and-score
    persona: gitea-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Input: {{ input }}

        Step 1: Parse the input format.
        - If "owner/repo NUM" → single issue mode
        - If "owner/repo" alone → batch mode

        Step 2: Fetch issues via the tea CLI.
        - Single: tea issues view NUM --repo OWNER/REPO --json number,title,body,labels,url
        - Batch: tea issues list --repo OWNER/REPO --limit 10 --json number,title,body,labels,url

        Step 3: Score each issue's quality (0-100) on title clarity, description completeness, labels, and acceptance criteria.

        Step 4: For issues scoring below 70, create an enhancement plan with:
        - suggested_title, body_template (preserving original content), suggested_labels, enhancements list

        Output JSON with repository (owner/repo string), issues_to_enhance array, and total_to_enhance.
    output_artifacts:
      - name: enhancement_plan
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/github-enhancement-plan.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: apply-enhancements
    persona: gitea-enhancer
    dependencies: [scan-and-score]
    memory:
      inject_artifacts:
        - step: scan-and-score
          artifact: enhancement_plan
          as: plan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Read the repository field from the plan artifact for the --repo flag.

        For each issue in issues_to_enhance:
        1. Apply title: tea issues edit NUM --repo REPO --title "suggested_title"
        2. Apply body: tea issues edit NUM --repo REPO --body "body_template"
        3. Add labels: tea issues edit NUM --repo REPO --add-label "label1,label2"
        4. Capture URL: tea issues view NUM --repo REPO --json url --jq .url

        Output JSON with enhanced_issues (issue_number, success, changes_made, url),
        total_attempted, total_successful.
    output_artifacts:
      - name: enhancement_results
        path: .wave/artifact.json
        type: json
        required: true
    outcomes:
      - type: issue
        extract_from: .wave/artifact.json
        json_path: .enhanced_issues[0].url
        label: "Enhanced Issue"
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/github-enhancement-results.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
167
.wave/pipelines/gt-scope.yaml
Normal file
@@ -0,0 +1,167 @@
kind: WavePipeline
metadata:
  name: gt-scope
  description: "Decompose a Gitea epic into well-scoped child issues"
  release: true

input:
  source: cli
  example: "re-cinq/wave 42"
  schema:
    type: string
    description: "Gitea repository with epic issue number (e.g. 'owner/repo 42')"

steps:
  - id: fetch-epic
    persona: gitea-analyst
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        MANDATORY: You MUST call the Bash tool. NEVER say "tea CLI not installed" without trying.

        Input: {{ input }}

        Parse the input: extract the repo (first token) and the epic issue number (second token).

        Execute these commands using the Bash tool:

        1. tea --version

        2. Fetch the epic issue with full details:
           tea issues view <NUMBER> --output json

        3. List existing open issues to check for duplicates:
           tea issues list --limit 50 --output json

        After getting REAL results from Bash, analyze the epic:
        - Determine if this is truly an epic/umbrella issue (contains multiple work items)
        - Identify the key themes and work areas
        - Estimate overall complexity
        - Count how many sub-issues should be created (3-10)
        - List existing issues to avoid creating duplicates
    output_artifacts:
      - name: epic_assessment
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/epic-assessment.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: scope-and-create
    persona: gitea-scoper
    dependencies: [fetch-epic]
    memory:
      inject_artifacts:
        - step: fetch-epic
          artifact: epic_assessment
          as: assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        CRITICAL: You MUST use the Bash tool for all commands. Do NOT generate fake output.

        The assessment artifact contains the epic analysis. Use it to create well-scoped child issues.

        Step 1: Verify tea works:
        tea --version

        Step 2: For each planned sub-issue, create it using:
        tea issues create --title "<title>" --body "<body>" --labels "<labels>"

        Each sub-issue body MUST include:
        - A "Parent: #<epic_number>" reference line
        - A clear Summary section
        - Acceptance Criteria as a checkbox list
        - Dependencies on other sub-issues if applicable
        - Scope Notes for what is explicitly excluded

        Step 3: After creating all issues, capture each issue's number and URL from the creation output.

        Step 4: Record the results with fields: parent_issue (number, url, repository),
        created_issues (array of number, title, url, labels, success, complexity, dependencies),
        total_created, total_failed.
    output_artifacts:
      - name: scope_plan
        path: .wave/artifact.json
        type: json
        required: true
    outcomes:
      - type: issue
        extract_from: .wave/artifact.json
        json_path: .created_issues[0].url
        label: "First Sub-Issue"
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/scope-plan.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: verify-report
    persona: gitea-analyst
    dependencies: [scope-and-create]
    memory:
      inject_artifacts:
        - step: scope-and-create
          artifact: scope_plan
          as: results
        - step: fetch-epic
          artifact: epic_assessment
          as: assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Verify the created sub-issues and post a summary comment on the epic.

        Step 1: For each created issue in the results, verify it exists:
        tea issues view <N> --output json

        Check that each issue:
        - Exists and is open
        - Has acceptance criteria in the body
        - References the parent epic

        Step 2: Post a summary comment on the epic issue listing all created sub-issues:
        Create a markdown summary with a checklist of all sub-issues (- [ ] #<number> <title>)
        and post it using: tea issues comment <EPIC_NUMBER> --body "<summary>"

        Step 3: Compile the verification report with fields:
        parent_issue (number, url), verified_issues (array of number, title, url, exists,
        has_acceptance_criteria, references_parent), summary (total_verified, total_valid,
        total_issues_created, comment_posted, comment_url).
    output_artifacts:
      - name: scope_report
        path: .wave/artifact.json
        type: json
        required: true
    handover:
      max_retries: 1
      contract:
        type: json_schema
        schema_path: .wave/contracts/scope-report.schema.json
        validate: true
        must_pass: true
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
214
.wave/pipelines/impl-feature.yaml
Normal file
@@ -0,0 +1,214 @@
kind: WavePipeline
metadata:
  name: impl-feature
  description: "Plan, implement, test, and commit a feature to a new branch"
  release: true

skills:
  - "{{ project.skill }}"
  - software-design

input:
  source: cli
  example: "add a --dry-run flag to the run command"

steps:
  - id: explore
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Explore the codebase to plan this feature: {{ input }}

        ## Exploration

        1. **Understand the request**: What is being asked? Assess scope
           (small = 1-2 files, medium = 3-7, large = 8+).

        2. **Find related code**: Use Glob and Grep to find files related
           to the feature. Note paths, relevance, and key symbols.

        3. **Identify patterns**: Read key files. Document conventions that
           must be followed (naming, error handling, testing patterns).

        4. **Map affected modules**: Which packages are directly/indirectly affected?

        5. **Survey tests**: Find related test files, testing patterns, gaps.

        6. **Assess risks**: Breaking changes, performance, security implications.

        Produce a structured JSON result matching the contract schema.
    output_artifacts:
      - name: exploration
        path: .wave/output/exploration.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/exploration.json
        schema_path: .wave/contracts/feature-exploration.schema.json
        on_failure: retry

  - id: plan
    persona: planner
    dependencies: [explore]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: context
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Create an implementation plan for this feature.

        Feature: {{ input }}

        The codebase exploration has been injected into your workspace. Read it first.

        Break the feature into ordered implementation steps:

        1. For each step: what to do, which files to modify, acceptance criteria
        2. Dependencies between steps
        3. What tests to write
        4. Complexity estimate per step (S/M/L)

        Produce a structured JSON result matching the contract schema.
    output_artifacts:
      - name: plan
        path: .wave/output/plan.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/plan.json
        schema_path: .wave/contracts/feature-plan.schema.json
        on_failure: retry

  - id: implement
    persona: craftsman
    dependencies: [plan]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: context
        - step: plan
          artifact: plan
          as: impl_plan
    workspace:
      type: worktree
      branch: "feat/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Implement the feature on this isolated worktree branch.

        The codebase exploration and implementation plan have been injected into your
        workspace. Read them both before starting.

        Feature: {{ input }}

        ## Process

        1. **Implement step by step** following the plan:
           - Follow existing codebase patterns identified in exploration
           - Write tests alongside implementation
           - After each significant change, verify it compiles

        2. Run the full test suite and fix any failures before proceeding.

        3. **Commit**:
           ```bash
           git add <specific-files>
           git commit -m "<commit_message_suggestion from plan>

           Implementation following plan:
           - S01: <title>
           - S02: <title>
           ..."
           ```

        Commit changes to the worktree branch.
    retry:
      policy: standard
      max_attempts: 3
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
    compaction:
      trigger: "token_limit_80%"
      persona: summarizer
    output_artifacts:
      - name: result
        path: .wave/output/result.md
        type: markdown

  # ── Publish ─────────────────────────────────────────────────────────
  - id: publish
    persona: craftsman
    dependencies: [implement]
    memory:
      inject_artifacts:
        - step: implement
          artifact: result
          as: result
    workspace:
      type: worktree
      branch: "feat/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        PUBLISH — push the branch and create a pull request.

        ## Steps

        1. Push the branch:
           ```bash
           git push -u origin HEAD
           ```

        2. Create a pull request using the implementation result as context:
           ```bash
           COMMIT_SUBJECT=$(git log --format=%s -1)
           {{ forge.cli_tool }} {{ forge.pr_command }} create --title "feat: $COMMIT_SUBJECT" --body-file .wave/artifacts/result
           ```
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    retry:
      policy: aggressive
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
    outcomes:
      - type: pr
        extract_from: .wave/output/pr-result.json
        json_path: .pr_url
        label: "Pull Request"
118
.wave/pipelines/impl-hotfix.yaml
Normal file
@@ -0,0 +1,118 @@
kind: WavePipeline
metadata:
  name: impl-hotfix
  description: "Quick investigation and fix for production issues"
  release: true

skills:
  - "{{ project.skill }}"
  - software-design

input:
  source: cli
  example: "fix panic in pipeline executor when step has nil context"

steps:
  - id: investigate
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Investigate this production issue: {{ input }}

        1. Search for related code paths
        2. Check recent commits that may have introduced the bug
        3. Identify the root cause
        4. Assess blast radius (what else could be affected)
    output_artifacts:
      - name: findings
        path: .wave/output/findings.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/findings.json
        schema_path: .wave/contracts/findings.schema.json
        on_failure: retry

  - id: fix
    persona: craftsman
    dependencies: [investigate]
    thread: hotfix
    memory:
      inject_artifacts:
        - step: investigate
          artifact: findings
          as: investigation
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Fix the production issue based on the investigation findings.

        Requirements:
        1. Apply the minimal fix - don't refactor surrounding code
        2. Add a regression test that would have caught this bug
        3. Ensure all existing tests still pass
        4. Document the fix in a commit-ready message
    retry:
      policy: standard
      max_attempts: 3
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
    compaction:
      trigger: "token_limit_80%"
      persona: summarizer

  - id: run-tests
    type: command
    dependencies: [fix]
    script: "{{ project.contract_test_command }}"

  - id: gate
    type: conditional
    dependencies: [run-tests]
    edges:
      - target: verify
        condition: "outcome=success"
      - target: fix

  - id: verify
    persona: reviewer
    dependencies: [gate]
    exec:
      type: prompt
      source: |
        Verify the hotfix:

        1. Is the fix minimal and focused? (no unrelated changes)
        2. Does the regression test actually test the reported issue?
        3. Are there other code paths with the same vulnerability?
        4. Is the fix safe for production deployment?

        Output a go/no-go recommendation with reasoning.
    output_artifacts:
      - name: verdict
        path: .wave/output/verdict.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/verdict.md
133
.wave/pipelines/impl-improve.yaml
Normal file
@@ -0,0 +1,133 @@
kind: WavePipeline
metadata:
  name: impl-improve
  description: "Analyze code and apply targeted improvements"
  release: true

skills:
  - "{{ project.skill }}"
  - software-design

input:
  source: cli
  example: "improve error handling in internal/pipeline"

steps:
  - id: assess
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Assess the code for improvement opportunities: {{ input }}

        ## Assessment Areas

        1. **Code quality**: Readability, naming, structure, duplication
        2. **Error handling**: Missing checks, swallowed errors, unclear messages
        3. **Performance**: Unnecessary allocations, N+1 patterns, missing caching
        4. **Testability**: Hard-to-test code, missing interfaces, tight coupling
        5. **Robustness**: Missing nil checks, race conditions, resource leaks
        6. **Maintainability**: Complex functions, deep nesting, magic numbers

        For each finding, assess:
        - Impact: how much does fixing it improve the code?
        - Effort: how hard is the fix?
        - Risk: could the fix introduce regressions?
    output_artifacts:
      - name: assessment
        path: .wave/output/assessment.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/assessment.json
        schema_path: .wave/contracts/improvement-assessment.schema.json
        on_failure: retry

  - id: implement
    persona: craftsman
    dependencies: [assess]
    memory:
      inject_artifacts:
        - step: assess
          artifact: assessment
          as: findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Apply the recommended improvements to the codebase.

        ## Rules

        1. **Start with quick wins**: Apply trivial/small effort fixes first
        2. **One improvement at a time**: Make each change, verify it compiles,
           then move to the next
        3. **Preserve behavior**: Improvements must not change external behavior
        4. **Run tests**: After each significant change, run relevant tests
        5. **Skip high-risk items**: Do not apply changes rated risk=high
           without explicit test coverage
        6. **Document changes**: Track what was changed and why

        Focus on the findings with the best impact-to-effort ratio.
        Do NOT refactor beyond what was identified in the assessment.
    retry:
      policy: standard
      max_attempts: 3
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
    compaction:
      trigger: "token_limit_80%"
      persona: summarizer

  - id: verify
    persona: reviewer
    dependencies: [implement]
    memory:
      inject_artifacts:
        - step: assess
          artifact: assessment
          as: original_findings
    exec:
      type: prompt
      source: |
        Verify the improvements were applied correctly.

        For each improvement that was applied:
        1. Is the fix correct and complete?
        2. Does it actually address the identified issue?
        3. Were any new issues introduced?
        4. Are tests still passing?

        For improvements NOT applied, confirm they were appropriately skipped.

        Produce a verification report covering:
        - Applied improvements (with before/after)
        - Skipped items (with justification)
        - New issues found (if any)
        - Overall quality delta assessment
    output_artifacts:
      - name: verification
        path: .wave/output/verification.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/verification.md
155
.wave/pipelines/impl-issue.yaml
Normal file
@@ -0,0 +1,155 @@
kind: WavePipeline
metadata:
  name: impl-issue
  description: "Implement an issue end-to-end: fetch, assess, plan, implement, create PR"
  release: true

chat_context:
  artifact_summaries:
    - assessment
    - impl-plan
    - pr-result
  suggested_questions:
    - "Would you like to review the changes in the pull request?"
    - "Are there any failing tests to investigate?"
    - "Should we refine the implementation or add more test coverage?"
  focus_areas:
    - "Code changes and implementation quality"
    - "Test results and coverage"
    - "PR status and review readiness"

skills:
  - "{{ project.skill }}"
  - gh-cli
  - software-design

requires:
  tools:
    - gh

input:
  source: cli
  schema:
    type: string
    description: "GitHub repository and issue number"
  example: "re-cinq/wave 42"

steps:
  - id: fetch-assess
    persona: implementer
    model: claude-haiku
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/implement/fetch-assess.md
    output_artifacts:
      - name: assessment
        path: .wave/output/issue-assessment.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/issue-assessment.json
        schema_path: .wave/contracts/issue-assessment.schema.json
        must_pass: true
        on_failure: retry

  - id: plan
    persona: implementer
    dependencies: [fetch-assess]
    memory:
      inject_artifacts:
        - step: fetch-assess
          artifact: assessment
          as: issue_assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
      base: main
    exec:
      type: prompt
      source_path: .wave/prompts/implement/plan.md
    output_artifacts:
      - name: impl-plan
        path: .wave/output/impl-plan.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/impl-plan.json
        schema_path: .wave/contracts/issue-impl-plan.schema.json
        must_pass: true
        on_failure: retry

  - id: implement
    persona: craftsman
    thread: impl
    dependencies: [plan]
    memory:
      inject_artifacts:
        - step: fetch-assess
          artifact: assessment
          as: issue_assessment
        - step: plan
          artifact: impl-plan
          as: impl_plan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/implement/implement.md
    retry:
      policy: aggressive
      max_attempts: 3
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
    compaction:
      trigger: "token_limit_80%"
      persona: summarizer

  - id: create-pr
    persona: "gitea-commenter"
    dependencies: [implement]
    memory:
      inject_artifacts:
        - step: fetch-assess
          artifact: assessment
          as: issue_assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/implement/create-pr.md
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
    outcomes:
      - type: pr
        extract_from: .wave/output/pr-result.json
        json_path: .pr_url
        label: "Pull Request"
367
.wave/pipelines/impl-prototype.yaml
Normal file
@@ -0,0 +1,367 @@
kind: WavePipeline
metadata:
  name: impl-prototype
  description: "Prototype-driven implementation: spec → docs → dummy → implement → PR cycle"
  release: true

input:
  source: cli
  example: "build a REST API for user management with CRUD operations"

steps:
  # Phase 1: Spec - Requirements capture with speckit integration
  - id: spec
    persona: craftsman
    exec:
      type: prompt
      source: |
        You are beginning the specification phase of a prototype-driven development pipeline.

        Your goal is to analyze the project description and create a comprehensive feature specification.

        Project description: {{ input }}

        CRITICAL: Create both spec.md and requirements.md files.

        spec.md should contain the complete feature specification, including:
        - Feature overview and business value
        - User stories with acceptance criteria
        - Functional requirements
        - Success criteria and measurable outcomes
        - Constraints and assumptions

        requirements.md should contain extracted requirements (optional additional detail).

        Use speckit integration where available to enhance specification quality.

        The specification must be technology-agnostic and focused on user value.

        Create artifact.json with your results.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

    output_artifacts:
      - name: spec
        path: spec.md
        type: markdown
      - name: requirements
        path: requirements.md
        type: markdown
      - name: contract_data
        path: .wave/artifact.json
        type: json

    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/spec-phase.schema.json
        must_pass: true
        on_failure: retry

  # Phase 2: Docs - Generate runnable documentation from the specification
  - id: docs
    persona: philosopher
    dependencies: [spec]

    memory:
      inject_artifacts:
        - step: spec
          artifact: spec
          as: input-spec.md

    exec:
      type: prompt
      source: |
        You are in the documentation phase of prototype-driven development.

        Your goal is to create comprehensive, runnable documentation from the specification.

        Create feature documentation from the injected specification that includes:
        - A user-friendly explanation of the feature
        - Usage examples and scenarios
        - An integration guide for developers
        - A stakeholder summary for non-technical audiences

        Generate VitePress-compatible markdown that can be served as runnable documentation.

        CRITICAL: Create both feature-docs.md and stakeholder-summary.md files.

        Create artifact.json with your results.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

    output_artifacts:
      - name: feature-docs
        path: feature-docs.md
        type: markdown
      - name: stakeholder-summary
        path: stakeholder-summary.md
        type: markdown
      - name: contract_data
        path: .wave/artifact.json
        type: json

    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/docs-phase.schema.json
        must_pass: true
        on_failure: retry

  # Phase 3: Dummy - Build an authentic functional prototype
  - id: dummy
    persona: craftsman
    dependencies: [docs]

    memory:
      inject_artifacts:
        - step: docs
          artifact: feature-docs
          as: feature-docs.md
        - step: spec
          artifact: spec
          as: spec.md

    exec:
      type: prompt
      source: |
        You are in the dummy/prototype phase of development.

        Your goal is to create a working prototype with authentic I/O handling but stub business logic.

        Create a functional prototype that:
        - Handles real input and output properly
        - Implements all user interfaces and endpoints
        - Uses placeholder/stub implementations for business logic
        - Can be run and demonstrated to stakeholders
        - Shows the complete user experience flow

        Focus on proving that the interface design and user flows work correctly.

        CRITICAL: Create a prototype/ directory with working code and interfaces.md with interface definitions.

        Create artifact.json with your results.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

    output_artifacts:
      - name: prototype
        path: prototype/
        type: binary
      - name: interface-definitions
        path: interfaces.md
        type: markdown
      - name: contract_data
        path: .wave/artifact.json
        type: json

    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/dummy-phase.schema.json
        must_pass: true
        on_failure: retry

  # Phase 4: Implement - Transition to full implementation
  - id: implement
    persona: craftsman
    dependencies: [dummy]

    memory:
      inject_artifacts:
        - step: spec
          artifact: spec
          as: spec.md
        - step: docs
          artifact: feature-docs
          as: feature-docs.md
        - step: dummy
          artifact: prototype
          as: prototype/

    exec:
      type: prompt
      source: |
        You are in the implementation phase - transitioning from prototype to production code.

        Your goal is to provide implementation guidance and begin the real implementation:
        - Review all previous artifacts for implementation readiness
        - Create an implementation plan and checklist
        - Begin replacing stub logic with real implementations
        - Ensure test coverage for all functionality
        - Maintain compatibility with established interfaces

        Focus on production-quality code that fulfills the original specification.

        CRITICAL: Create implementation-plan.md and implementation-checklist.md files.

        Create artifact.json with your results.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

    output_artifacts:
      - name: implementation-plan
        path: implementation-plan.md
        type: markdown
      - name: progress-checklist
        path: implementation-checklist.md
        type: markdown
      - name: contract_data
        path: .wave/artifact.json
        type: json

    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/implement-phase.schema.json
        must_pass: true
        on_failure: retry

  # Phase 5: PR-Cycle - Automated pull request lifecycle
  - id: pr-create
    persona: navigator
    dependencies: [implement]

    memory:
      inject_artifacts:
        - step: implement
          artifact: implementation-plan
          as: implementation-plan.md

    exec:
      type: prompt
      source: |
        You are creating a pull request for the implemented feature.

        Create a comprehensive pull request:
        - Clear PR title and description
        - Link to related issues
        - Include testing instructions
        - Add appropriate labels and reviewers
        - Request Copilot review

        Use the GitHub CLI to create the PR and configure the automated review workflow.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

    output_artifacts:
      - name: pr-info
        path: pr-info.json
        type: json
    retry:
      policy: aggressive
      max_attempts: 2
|
||||
handover:
|
||||
contract:
|
||||
type: json_schema
|
||||
source: pr-info.json
|
||||
schema_path: .wave/contracts/pr-result.schema.json
|
||||
must_pass: true
|
||||
on_failure: retry
|
||||
|
||||
- id: pr-review
|
||||
persona: auditor
|
||||
model: claude-haiku
|
||||
dependencies: [pr-create]
|
||||
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Monitor and manage the PR review process.
|
||||
|
||||
Poll for Copilot review completion and analyze feedback.
|
||||
Prepare response strategy for review comments.
|
||||
|
||||
workspace:
|
||||
mount:
|
||||
- source: .
|
||||
target: /project
|
||||
mode: readwrite
|
||||
|
||||
- id: pr-respond
|
||||
persona: philosopher
|
||||
dependencies: [pr-review]
|
||||
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Analyze review comments and prepare thoughtful responses.
|
||||
|
||||
Generate responses to review feedback that:
|
||||
- Address technical concerns professionally
|
||||
- Explain design decisions clearly
|
||||
- Propose solutions for identified issues
|
||||
|
||||
workspace:
|
||||
mount:
|
||||
- source: .
|
||||
target: /project
|
||||
mode: readwrite
|
||||
|
||||
- id: pr-fix
|
||||
persona: craftsman
|
||||
dependencies: [pr-respond]
|
||||
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Implement small fixes based on review feedback.
|
||||
|
||||
For larger changes, create follow-up issues instead of expanding this PR.
|
||||
Focus on quick, low-risk improvements that address reviewer concerns.
|
||||
|
||||
workspace:
|
||||
mount:
|
||||
- source: .
|
||||
target: /project
|
||||
mode: readwrite
|
||||
|
||||
- id: pr-merge
|
||||
persona: navigator
|
||||
dependencies: [pr-fix]
|
||||
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Complete the PR lifecycle with merge.
|
||||
|
||||
Verify all checks pass, reviews are approved, and merge the PR.
|
||||
Clean up branch and notify stakeholders of completion.
|
||||
|
||||
workspace:
|
||||
mount:
|
||||
- source: .
|
||||
target: /project
|
||||
mode: readwrite
|
||||
560
.wave/pipelines/impl-recinq.yaml
Normal file
@@ -0,0 +1,560 @@
kind: WavePipeline
metadata:
  name: impl-recinq
  description: "Rethink and simplify code using divergent-convergent thinking (Double Diamond)"
  release: true

skills:
  - "{{ project.skill }}"
  - software-design

input:
  source: cli
  example: "internal/pipeline"

# Pipeline structure implements the Double Diamond:
#
# gather → diverge → converge → probe → distill → simplify → report
#          ╰─ Diamond 1 ─╯      ╰─ Diamond 2 ─╯   ╰implement╯
#          (discover) (define)  (develop) (deliver)
#
# Each step gets its own context window and cognitive mode.
# Fresh memory at every boundary — no mode-switching within a step.

steps:
  - id: gather
    persona: "gitea-analyst"
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        CONTEXT GATHERING — parse input and fetch GitHub context if applicable.

        Input: {{ input }}

        ## Instructions

        Determine what kind of input this is:

        1. **GitHub Issue URL**: Contains `github.com` and `/issues/`
           - Extract owner/repo and issue number from the URL
           - Run: `{{ forge.cli_tool }} issue view <number> --repo <owner/repo> --json title,body,labels`
           - Extract a `focus_hint` summarizing what should be simplified

        2. **GitHub PR URL**: Contains `github.com` and `/pull/`
           - Extract owner/repo and PR number from the URL
           - Run: `{{ forge.cli_tool }} {{ forge.pr_command }} view <number> --repo <owner/repo> --json title,body,labels,files`
           - Extract a `focus_hint` summarizing what the PR is about

        3. **Local path or description**: Anything else
           - Set `input_type` to `"local"`
           - Pass through the original input as-is

        ## Output

        IMPORTANT: The output MUST be valid JSON. Do NOT include markdown fencing.
    output_artifacts:
      - name: context
        path: .wave/output/context.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/context.json
        schema_path: .wave/contracts/recinq-context.schema.json
        must_pass: true
        on_failure: retry

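        The owner/repo and number extraction in cases 1 and 2 can be sketched in shell (a minimal sketch; the URL is a hypothetical example):

        ```bash
        # Hypothetical example URL: pull out owner/repo and the issue/PR number
        url="https://github.com/acme/widgets/issues/123"
        repo=$(echo "$url" | sed -E 's#https://github\.com/([^/]+/[^/]+)/.*#\1#')
        number=$(echo "$url" | sed -E 's#.*/(issues|pull)/([0-9]+).*#\2#')
        echo "$repo $number"
        ```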
  # ── Diamond 1: Discover (DIVERGENT) ──────────────────────────────────
  - id: diverge
    persona: provocateur
    model: claude-haiku
    dependencies: [gather]
    memory:
      inject_artifacts:
        - step: gather
          artifact: context
          as: context
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        DIVERGENT THINKING — cast the widest net to find simplification opportunities.

        Target: {{ input }}

        ## Starting Point

        The context artifact contains input context.
        If `input_type` is `"issue"` or `"pr"`, the `focus_hint` tells you WHERE to start looking —
        but do NOT limit yourself to what the issue describes. Use it as a seed, then expand outward.
        Follow dependency chains, trace callers, explore adjacent modules. The issue author doesn't
        know what they don't know — that's YOUR job.
        If `input_type` is `"local"`, use the `original_input` field as the target path.

        If input is empty or "." — analyze the whole project.
        If input is a path — focus on that module/directory but consider its connections.

        ## Your Mission

        Challenge EVERYTHING. Question every assumption. Hunt complexity.

        ## What to Look For

        1. **Premature abstractions**: Interfaces with one implementation. Generic code used once.
           "What if we just inlined this?"

        2. **Unnecessary indirection**: Layers that pass through without adding value.
           Wrappers around wrappers. "How many hops to get to the actual logic?"

        3. **Overengineering**: Configuration for things that never change. Plugin systems with one plugin.
           Feature flags for features that are always on. "Is this complexity earning its keep?"

        4. **YAGNI violations**: Code written for hypothetical future needs that never arrived.
           "When was this last changed? Does anyone actually use this path?"

        5. **Accidental complexity**: Things that are hard because of how they're built, not because
           the problem is hard. "Could this be 10x simpler if we started over?"

        6. **Copy-paste drift**: Similar-but-slightly-different code that should be unified or
           intentionally differentiated. "Are these differences meaningful or accidental?"

        7. **Dead weight**: Unused exports, unreachable code, obsolete comments, stale TODOs.
           `grep -r` for references. If nothing uses it, flag it.

        8. **Naming lies**: Names that don't match what the code actually does.
           "Does this 'manager' actually manage anything?"

        9. **Dependency gravity**: Modules that pull in everything. Import graphs that are too dense.
           "What's the blast radius of changing this?"

        ## Evidence Requirements

        For EVERY finding, gather concrete metrics:
        - `wc -l` for line counts
        - `grep -r` for usage/reference counts
        - `git log --oneline <file> | wc -l` for change frequency
        - List the actual files involved

        ## Output

        Each finding gets a unique ID: DVG-001, DVG-002, etc.

        Be AGGRESSIVE — flag everything suspicious. The convergent phase will filter.
        It's better to have 30 findings with 10 false positives than 5 findings that miss
        the big opportunities.

        Include a metrics_summary with totals by category and severity, plus hotspot files
        that appear in multiple findings.
    output_artifacts:
      - name: findings
        path: .wave/output/divergent-findings.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/divergent-findings.json
        schema_path: .wave/contracts/divergent-findings.schema.json
        must_pass: true
        on_failure: retry

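        The metrics above can be gathered in a few commands. A minimal sketch on a throwaway demo file (swap in real paths; the change-frequency check, `git log --oneline <file> | wc -l`, is omitted here because it needs a repository):

        ```bash
        # Demo corpus: three lines, one of which mentions the symbol of interest
        f=$(mktemp)
        printf 'line1\nline2\nline3\n' > "$f"
        lines=$(wc -l < "$f" | tr -d ' ')   # line count (tr strips BSD wc padding)
        refs=$(grep -c 'line2' "$f")        # reference count for the symbol
        echo "lines=$lines refs=$refs"
        rm -f "$f"
        ```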
  # ── Diamond 1: Define (CONVERGENT) ───────────────────────────────────
  - id: converge
    persona: validator
    dependencies: [diverge]
    memory:
      inject_artifacts:
        - step: diverge
          artifact: findings
          as: divergent_findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        CONVERGENT VALIDATION — separate signal from noise.

        This is a purely CONVERGENT step. Your job is analytical, not creative.
        Judge every finding on technical merit alone. No speculation, no new ideas.

        Target: {{ input }}

        ## For Every DVG-xxx Finding

        1. **Read the actual code** cited as evidence — don't trust the provocateur's summary
        2. **Verify the metrics** — check reference counts, line counts, change frequency
        3. **Assess**: Is this a real problem or a false positive?
           - Does the "premature abstraction" actually have a second implementation planned?
           - Is the "dead weight" actually used via reflection or codegen?
           - Is the "unnecessary indirection" actually providing error handling or logging?
        4. **Classify**:
           - `CONFIRMED` — real problem, metrics check out, code supports the claim
           - `PARTIALLY_CONFIRMED` — real concern but overstated, or scope is narrower than claimed
           - `REJECTED` — false positive, justified complexity, or incorrect metrics
        5. **Explain**: For every classification, write WHY. For rejections, explain what
           the provocateur got wrong.

        Be RIGOROUS. The provocateur was told to be aggressive — your job is to be skeptical.
        A finding that survives your scrutiny is genuinely worth addressing.
    output_artifacts:
      - name: validated_findings
        path: .wave/output/validated-findings.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/validated-findings.json
        schema_path: .wave/contracts/validated-findings.schema.json
        must_pass: true
        on_failure: retry

  # ── Diamond 2: Develop (DIVERGENT) ───────────────────────────────────
  - id: probe
    persona: provocateur
    dependencies: [converge]
    memory:
      inject_artifacts:
        - step: diverge
          artifact: findings
          as: divergent_findings
        - step: converge
          artifact: validated_findings
          as: validated_findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        SECOND DIVERGENCE — probe deeper into confirmed findings.

        The first pass cast a wide net. The validator filtered it down.
        Now YOU go deeper on what survived. This is DIVERGENT thinking again —
        expand, connect, discover what the first pass missed.

        Focus on findings with status CONFIRMED or PARTIALLY_CONFIRMED.

        Target: {{ input }}

        ## Your Mission

        For each confirmed finding, probe OUTWARD:

        1. **Trace the dependency graph**: What calls this code? What does it call?
           If we simplify X, what happens to its callers and callees?

        2. **Find second-order effects**: If we remove abstraction A, does layer B
           also become unnecessary? Do test helpers simplify? Do error paths collapse?

        3. **Spot patterns across findings**: Do three findings all stem from the same
           over-abstraction? Is there a root cause that would address multiple DVGs at once?

        4. **Discover what was MISSED**: With the validated findings as context, look for
           related opportunities the first pass didn't see. The confirmed findings reveal
           the codebase's real pressure points — what else lurks nearby?

        5. **Challenge the rejections**: Were any findings rejected too hastily?
           Read the validator's rationale — do you disagree?

        ## Evidence Requirements

        Same standard as the first diverge pass:
        - `wc -l` for line counts
        - `grep -r` for usage/reference counts
        - `git log --oneline <file> | wc -l` for change frequency
        - Concrete file paths and code references

        ## Output

        Go DEEP. The first pass was wide, this pass is deep. Follow every thread.
    output_artifacts:
      - name: probed_findings
        path: .wave/output/probed-findings.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/probed-findings.json
        schema_path: .wave/contracts/probed-findings.schema.json
        must_pass: true
        on_failure: retry

  # ── Diamond 2: Deliver (CONVERGENT) ──────────────────────────────────
  - id: distill
    persona: synthesizer
    dependencies: [probe]
    memory:
      inject_artifacts:
        - step: gather
          artifact: context
          as: context
        - step: converge
          artifact: validated_findings
          as: validated_findings
        - step: probe
          artifact: probed_findings
          as: probed_findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        FINAL CONVERGENCE — write a JSON object to `.wave/output/convergent-proposals.json`.

        Target: {{ input }}

        Read ALL injected artifacts first:
        - `.wave/artifacts/context` — issue/PR context from the gather step
        - `.wave/artifacts/validated_findings` — findings that survived scrutiny
        - `.wave/artifacts/probed_findings` — deeper connections, patterns, new discoveries

        Then write a SINGLE JSON object (no markdown, no prose, no code fences) to
        the output file using the Write tool. The file must start with `{` and end with `}`.

        ## How to populate each field

        **`source_findings`**: Count how many findings you reviewed, confirmed, partially
        confirmed, or rejected. Include rejection reasons.

        **`validation_summary`**: One paragraph describing the converge→diverge→converge
        validation process and what survived.

        **`proposals`** array — for each proposal:
        - `id`: SMP-001, SMP-002, etc.
        - Group findings that share a root cause into a single proposal
        - Incorporate second-order effects from the probe step into `impact` estimates
        - Include DVG-NEW-xxx discoveries from the probe step (pre-validated)
        - If context shows `input_type` is `"issue"` or `"pr"`, use `focus_hint` as ONE
          input when assigning `tier`, but do not discard strong proposals outside scope
        - `tier`: 1=do now, 2=do next, 3=consider later
        - `files`: list actual file paths affected
        - `dependencies`: SMP-xxx IDs that must be applied first

        **`eighty_twenty_analysis`**: Which 20% of proposals yield 80% of the benefit?

        **`timestamp`**: ISO 8601 datetime.

        IMPORTANT: The Write tool call must contain ONLY the JSON object.
        Contract validation will reject non-JSON output.
    output_artifacts:
      - name: proposals
        path: .wave/output/convergent-proposals.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/convergent-proposals.json
        schema_path: .wave/contracts/convergent-proposals.schema.json
        must_pass: true
        on_failure: retry

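        Taken together, a minimal sketch of the expected shape (all values are hypothetical; the schema file above is authoritative):

        ```json
        {
          "source_findings": {"reviewed": 12, "confirmed": 5, "partially_confirmed": 3, "rejected": 4},
          "validation_summary": "…",
          "proposals": [
            {"id": "SMP-001", "tier": 1, "impact": "…", "files": ["internal/foo.go"], "dependencies": []}
          ],
          "eighty_twenty_analysis": "…",
          "timestamp": "2026-01-01T00:00:00Z"
        }
        ```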
  # ── Implementation ───────────────────────────────────────────────────
  - id: simplify
    persona: craftsman
    dependencies: [distill]
    memory:
      inject_artifacts:
        - step: converge
          artifact: validated_findings
          as: validated_findings
        - step: distill
          artifact: proposals
          as: proposals
    workspace:
      type: worktree
      branch: "refactor/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        IMPLEMENTATION — apply the best simplification proposals.

        ## Process

        Apply ONLY tier-1 proposals, in dependency order.

        For each proposal (SMP-xxx):

        1. **Announce**: Print which proposal you're applying and what it does
        2. **Apply**: Make the code changes
        3. **Build**: Run the project's build command — must succeed
        4. **Test**: Run the project's test suite — must pass
        5. **Commit**: If build and tests pass:
           ```bash
           git add <specific-files>
           git commit -m "refactor: <proposal title>

           Applies SMP-xxx: <brief description>
           Source findings: <DVG-xxx list>"
           ```
        6. **Revert if failing**: If tests fail after applying, revert:
           ```bash
           git checkout -- .
           ```
           Log the failure and move to the next proposal.

        ## Final Verification

        After all tier-1 proposals are applied (or attempted):
        1. Run the full test suite
        2. Run the project's build command
        3. Summarize what was applied, what was skipped, and net lines changed

        ## Important

        - Each proposal gets its own atomic commit — never combine proposals in a single commit
        - If a proposal depends on a failed proposal, skip it too
    retry:
      policy: standard
      max_attempts: 3
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
    compaction:
      trigger: "token_limit_80%"
      persona: summarizer
    output_artifacts:
      - name: result
        path: .wave/output/result.md
        type: markdown

  # ── Reporting ────────────────────────────────────────────────────────
  - id: report
    persona: navigator
    dependencies: [simplify]
    memory:
      inject_artifacts:
        - step: distill
          artifact: proposals
          as: proposals
        - step: simplify
          artifact: result
          as: result
    workspace:
      type: worktree
      branch: "refactor/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        REPORT — compose a summary of what recinq found and applied.

        Run `git log --oneline` to see the commits on this branch.

        ## Compose the Report

        Write a markdown report containing:
        - **Summary**: One-paragraph overview of what recinq found and applied
        - **Proposals**: List of all proposals with their tier, impact, and status (applied/skipped/failed)
        - **Changes Applied**: Summary of commits made, files changed, net lines removed
        - **Remaining Opportunities**: Tier-2 and tier-3 proposals for future consideration
    output_artifacts:
      - name: report
        path: .wave/output/report.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/report.md

  # ── Publish ─────────────────────────────────────────────────────────
  - id: publish
    persona: craftsman
    dependencies: [report, gather]
    memory:
      inject_artifacts:
        - step: gather
          artifact: context
          as: context
        - step: report
          artifact: report
          as: report
    workspace:
      type: worktree
      branch: "refactor/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        PUBLISH — push the branch and create a pull request.

        ## Steps

        1. Push the branch:
           ```bash
           git push -u origin HEAD
           ```

        2. Create a pull request using the report as the body:
           ```bash
           COMMIT_SUBJECT=$(git log --format=%s -1)
           {{ forge.cli_tool }} {{ forge.pr_command }} create --title "refactor: $COMMIT_SUBJECT" --body-file .wave/artifacts/report
           ```

        3. If the context artifact shows `input_type` is `"issue"` or `"pr"`,
           post the PR URL as a comment on the source:
           ```bash
           echo "Refactoring PR: <pr-url>" > /tmp/recinq-comment.md
           {{ forge.cli_tool }} issue comment <number> --repo <repo> --body-file /tmp/recinq-comment.md
           ```
           or for PRs:
           ```bash
           echo "Refactoring PR: <pr-url>" > /tmp/recinq-comment.md
           {{ forge.cli_tool }} {{ forge.pr_command }} comment <number> --repo <repo> --body-file /tmp/recinq-comment.md
           ```

        4. Write the JSON status report to the output artifact path.

        If any `{{ forge.cli_tool }}` command fails, log the error and continue.
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    retry:
      policy: aggressive
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry

outcomes:
  - type: pr
    extract_from: .wave/output/pr-result.json
    json_path: .pr_url
    label: "Pull Request"
150
.wave/pipelines/impl-refactor.yaml
Normal file
@@ -0,0 +1,150 @@
kind: WavePipeline
metadata:
  name: impl-refactor
  description: "Safe refactoring with comprehensive test coverage"
  release: true

skills:
  - "{{ project.skill }}"
  - software-design

input:
  source: cli
  example: "extract workspace manager from executor into its own package"

steps:
  - id: analyze
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze refactoring scope for: {{ input }}

        1. Identify all code that will be affected
        2. Map all callers/consumers of the code being refactored
        3. Find existing test coverage
        4. Identify integration points
    output_artifacts:
      - name: analysis
        path: .wave/output/refactor-analysis.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/refactor-analysis.json
        schema_path: .wave/contracts/refactor-analysis.schema.json
        on_failure: retry

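        Caller mapping (step 2) can be sketched with a recursive grep. A minimal sketch on a throwaway demo tree (`WorkspaceManager` is a hypothetical symbol; swap in the real one and the real source tree):

        ```bash
        # Demo tree: one file references the symbol, one does not
        d=$(mktemp -d)
        printf 'uses WorkspaceManager\n' > "$d/a.go"
        printf 'no reference here\n' > "$d/b.go"
        callers=$(grep -rl 'WorkspaceManager' "$d")   # list files containing the symbol
        echo "$callers"
        rm -rf "$d"
        ```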
  - id: test-baseline
    persona: craftsman
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: analyze
          artifact: analysis
          as: scope
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Before refactoring, ensure test coverage:

        1. Run existing tests and record baseline
        2. Add characterization tests for uncovered code paths
        3. Add integration tests for affected callers
        4. Document current behavior for comparison

        All tests must pass before proceeding.
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
    output_artifacts:
      - name: baseline
        path: .wave/output/test-baseline.md
        type: markdown

  - id: refactor
    persona: craftsman
    dependencies: [test-baseline]
    thread: refactor
    memory:
      inject_artifacts:
        - step: analyze
          artifact: analysis
          as: scope
        - step: test-baseline
          artifact: baseline
          as: tests
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Perform the refactoring: {{ input }}

        Guidelines:
        1. Make atomic, reviewable changes
        2. Preserve all existing behavior
        3. Run tests after each significant change
        4. Update affected callers as needed
        5. Keep commits small and focused

        Do NOT change behavior — this is refactoring only.
    retry:
      policy: standard
      max_attempts: 3
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry
    compaction:
      trigger: "token_limit_80%"
      persona: summarizer

  - id: verify
    persona: reviewer
    dependencies: [refactor]
    exec:
      type: prompt
      source: |
        Verify the refactoring:

        1. Compare before/after behavior — any changes?
        2. Check test coverage didn't decrease
        3. Verify all callers still work correctly
        4. Look for missed edge cases
        5. Assess code quality improvement

        Output: PASS (safe to merge) or FAIL (issues found)
    output_artifacts:
      - name: verification
        path: .wave/output/verification.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/verification.md
28
.wave/pipelines/impl-research.yaml
Normal file
@@ -0,0 +1,28 @@
kind: WavePipeline
metadata:
  name: impl-research
  description: "Research a GitHub issue, implement the solution, then review the PR"
  category: composition
  release: true

input:
  source: cli
  example: "re-cinq/wave 42"
  schema:
    type: string
    description: "GitHub issue reference (owner/repo number)"

steps:
  - id: research
    pipeline: plan-research
    input: "{{input}}"

  - id: implement
    dependencies: [research]
    pipeline: impl-speckit
    input: "{{input}}"

  - id: review
    dependencies: [implement]
    pipeline: ops-pr-review
    input: "{{input}}"
256
.wave/pipelines/impl-speckit.yaml
Normal file
@@ -0,0 +1,256 @@
kind: WavePipeline
metadata:
  name: impl-speckit
  description: "Specification-driven implementation: specify → clarify → plan → tasks → implement → PR"
  release: true

requires:
  skills:
    speckit:
      check: specify check
      install: uv tool install --force specify-cli --from git+https://github.com/github/spec-kit.git
      init: specify init
  tools:
    - git
    - gh

input:
  source: cli
  example: "add user authentication with JWT tokens"
  schema:
    type: string
    description: "Natural language feature description to specify and implement"

steps:
  - id: specify
    persona: implementer
    model: claude-haiku
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
      base: main
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/specify.md
    output_artifacts:
      - name: spec-status
        path: .wave/output/specify-status.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/specify-status.json
        schema_path: .wave/contracts/specify-status.schema.json
        must_pass: true
        on_failure: retry

  - id: clarify
    persona: implementer
    model: claude-haiku
    dependencies: [specify]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/clarify.md
    output_artifacts:
      - name: clarify-status
        path: .wave/output/clarify-status.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/clarify-status.json
        schema_path: .wave/contracts/clarify-status.schema.json
        must_pass: true
        on_failure: retry

  - id: plan
    persona: implementer
    dependencies: [clarify]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/plan.md
    output_artifacts:
      - name: plan-status
        path: .wave/output/plan-status.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/plan-status.json
        schema_path: .wave/contracts/plan-status.schema.json
        must_pass: true
        on_failure: retry

  - id: tasks
    persona: implementer
    dependencies: [plan]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/tasks.md
    output_artifacts:
      - name: tasks-status
        path: .wave/output/tasks-status.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/tasks-status.json
        schema_path: .wave/contracts/tasks-status.schema.json
        must_pass: true
        on_failure: retry

  - id: checklist
    persona: implementer
    dependencies: [tasks]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/checklist.md
    output_artifacts:
      - name: checklist-status
        path: .wave/output/checklist-status.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/checklist-status.json
        schema_path: .wave/contracts/checklist-status.schema.json
        must_pass: true
        on_failure: retry

  - id: analyze
    persona: implementer
    model: claude-haiku
    dependencies: [checklist]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
|
||||
exec:
|
||||
type: prompt
|
||||
source_path: .wave/prompts/speckit-flow/analyze.md
|
||||
output_artifacts:
|
||||
- name: analysis-report
|
||||
path: .wave/output/analysis-report.json
|
||||
type: json
|
||||
retry:
|
||||
policy: patient
|
||||
max_attempts: 2
|
||||
handover:
|
||||
contract:
|
||||
type: json_schema
|
||||
source: .wave/output/analysis-report.json
|
||||
schema_path: .wave/contracts/analysis-report.schema.json
|
||||
must_pass: true
|
||||
on_failure: retry
|
||||
|
||||
- id: implement
|
||||
persona: craftsman
|
||||
dependencies: [analyze]
|
||||
memory:
|
||||
inject_artifacts:
|
||||
- step: specify
|
||||
artifact: spec-status
|
||||
as: spec_info
|
||||
workspace:
|
||||
type: worktree
|
||||
branch: "{{ pipeline_id }}"
|
||||
exec:
|
||||
type: prompt
|
||||
source_path: .wave/prompts/speckit-flow/implement.md
|
||||
retry:
|
||||
policy: standard
|
||||
max_attempts: 3
|
||||
handover:
|
||||
contract:
|
||||
type: test_suite
|
||||
command: "{{ project.test_command }}"
|
||||
must_pass: true
|
||||
on_failure: retry
|
||||
compaction:
|
||||
trigger: "token_limit_80%"
|
||||
persona: summarizer
|
||||
|
||||
- id: create-pr
|
||||
persona: craftsman
|
||||
dependencies: [implement]
|
||||
memory:
|
||||
inject_artifacts:
|
||||
- step: specify
|
||||
artifact: spec-status
|
||||
as: spec_info
|
||||
workspace:
|
||||
type: worktree
|
||||
branch: "{{ pipeline_id }}"
|
||||
exec:
|
||||
type: prompt
|
||||
source_path: .wave/prompts/speckit-flow/create-pr.md
|
||||
output_artifacts:
|
||||
- name: pr-result
|
||||
path: .wave/output/pr-result.json
|
||||
type: json
|
||||
retry:
|
||||
policy: aggressive
|
||||
max_attempts: 2
|
||||
handover:
|
||||
contract:
|
||||
type: json_schema
|
||||
source: .wave/output/pr-result.json
|
||||
schema_path: .wave/contracts/pr-result.schema.json
|
||||
must_pass: true
|
||||
on_failure: retry
|
||||
outcomes:
|
||||
- type: pr
|
||||
extract_from: .wave/output/pr-result.json
|
||||
json_path: .pr_url
|
||||
label: "Pull Request"
|
||||
123
.wave/pipelines/onboard.yaml
Normal file
@@ -0,0 +1,123 @@
kind: WavePipeline
metadata:
  name: onboard
  description: "Generate a project onboarding guide for new contributors"
  release: true

input:
  source: cli
  example: "create an onboarding guide for this project"

steps:
  - id: survey
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Survey this project to build an onboarding guide: {{ input }}

        ## Survey Checklist

        1. **Project identity**: Find README, package manifests (go.mod, package.json),
           license, and config files. Determine language, framework, purpose.

        2. **Build system**: How to build, test, and run the project.
           Find Makefiles, scripts, CI configs, Dockerfiles.

        3. **Directory structure**: Map the top-level layout and key directories.
           What does each directory contain?

        4. **Architecture**: Identify the main components and how they interact.
           Find entry points (main.go, index.ts, etc.).

        5. **Dependencies**: List key dependencies and their purposes.
           Check go.mod, package.json, requirements.txt, etc.

        6. **Configuration**: Find environment variables, config files, feature flags.

        7. **Testing**: Where are tests? How are they run? What patterns are used?

        8. **Development workflow**: Find contributing guides, PR templates,
           commit conventions, branch strategies.

        9. **Documentation**: Where is the documentation? Is it up to date?
    output_artifacts:
      - name: survey
        path: .wave/output/project-survey.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/project-survey.json
        schema_path: .wave/contracts/project-survey.schema.json
        on_failure: retry
        max_retries: 2

  - id: guide
    persona: philosopher
    dependencies: [survey]
    memory:
      inject_artifacts:
        - step: survey
          artifact: survey
          as: project_info
    exec:
      type: prompt
      source: |
        Write a comprehensive onboarding guide for new contributors.

        Using the injected project survey data, write a guide with these sections:

        # Onboarding Guide: [Project Name]

        ## Quick Start
        - Prerequisites (what to install)
        - Clone and build (exact commands)
        - Run tests (exact commands)
        - Run the project (exact commands)

        ## Project Overview
        - What this project does (2-3 sentences)
        - Key technologies and why they were chosen
        - High-level architecture (ASCII diagram)

        ## Directory Map
        - What each top-level directory contains
        - Where to find things (tests, configs, docs)

        ## Core Concepts
        - Key abstractions and terminology
        - How the main components interact
        - Data flow through the system

        ## Development Workflow
        - How to create a feature branch
        - Commit message conventions
        - How to run tests before pushing
        - PR process

        ## Common Tasks
        - "I want to add a new [feature/command/endpoint]" → where to start
        - "I want to fix a bug" → debugging approach
        - "I want to understand [component]" → where to look

        ## Helpful Resources
        - Documentation locations
        - Key files to read first
        - Related external docs

        Write for someone on their first day with this codebase.
        Be specific — use real paths, real commands, real examples.
    output_artifacts:
      - name: guide
        path: .wave/output/onboarding-guide.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/onboarding-guide.md
212
.wave/pipelines/ops-bootstrap.yaml
Normal file
@@ -0,0 +1,212 @@
kind: WavePipeline
metadata:
  name: ops-bootstrap
  description: "Scaffold a greenfield project with language-appropriate structure, CI config, and initial files"
  release: true

input:
  source: cli
  schema:
    type: string
    description: "Project description and intent (e.g. 'Rust CLI tool for processing CSV files')"
  example: "Rust CLI tool for processing CSV files"

steps:
  - id: assess
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        You are assessing a greenfield project for scaffolding.

        The user described this project as: {{ input }}

        ## Instructions

        1. Read `wave.yaml` to find `project.language`, `project.build_command`, and `project.test_command`
        2. List all existing files in the project directory to understand what already exists
        3. Read any README, ADR, or design docs for additional project intent
        4. Determine the project flavour (language/framework):
           - If `project.language` is set in wave.yaml, use that
           - Otherwise infer from existing files (package.json → node, Cargo.toml → rust, go.mod → go, etc.)
           - If nothing exists, infer from the user's description
        5. Recommend the appropriate project scaffold

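        The file-based inference in step 4 can be sketched as follows. This is a
        minimal illustration, not part of the pipeline; the marker files and their
        precedence order are assumptions.

        ```python
        # Map well-known marker files to a project flavour; first match wins,
        # so more specific markers (go.mod, Cargo.toml) come before package.json.
        MARKERS = [
            ("go.mod", "go"),
            ("Cargo.toml", "rust"),
            ("bun.lockb", "bun"),
            ("package.json", "node"),
            ("pyproject.toml", "python"),
        ]

        def infer_flavour(existing_files):
            names = set(existing_files)
            for marker, flavour in MARKERS:
                if marker in names:
                    return flavour
            return None  # nothing recognizable: fall back to the user's description
        ```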
        ## Output

        Write a JSON file to `.wave/output/bootstrap-assessment.json` with this structure:
        ```json
        {
          "flavour": "go|rust|node|bun|python|csharp|...",
          "project_intent": "description of what the project does",
          "existing_files": ["list", "of", "existing", "files"],
          "scaffold_recommendations": {
            "files_to_create": ["list of files to scaffold"],
            "build_system": "cargo|go|npm|bun|pip|dotnet",
            "ci_provider": "github-actions",
            "gitignore_patterns": ["patterns for .gitignore"]
          },
          "wave_config": {
            "language": "from wave.yaml if set",
            "build_command": "from wave.yaml if set",
            "test_command": "from wave.yaml if set"
          }
        }
        ```
    output_artifacts:
      - name: assessment
        path: .wave/output/bootstrap-assessment.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/bootstrap-assessment.json
        schema_path: .wave/contracts/bootstrap-assessment.schema.json
        must_pass: true
        on_failure: retry

  - id: scaffold
    persona: craftsman
    dependencies: [assess]
    memory:
      inject_artifacts:
        - step: assess
          artifact: assessment
          as: bootstrap_assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        You are scaffolding a new project based on the assessment.

        Read the assessment artifact to understand the project flavour, intent, and recommendations.

        ## Instructions

        Based on the detected flavour, create the appropriate project structure:

        ### Go
        - `go.mod` with appropriate module path
        - `main.go` or `cmd/<name>/main.go` for CLI tools
        - `internal/` directory structure
        - Basic test file
        - `.github/workflows/ci.yml` with go build and test

        ### Rust
        - `Cargo.toml` with project metadata
        - `src/main.rs` (binary) or `src/lib.rs` (library)
        - `tests/` directory with integration test stub
        - `.github/workflows/ci.yml` with cargo build and test

        ### Node / Bun
        - `package.json` with project metadata and scripts
        - `src/index.ts` entry point
        - `tsconfig.json` with strict mode
        - `.github/workflows/ci.yml` with install and test

        ### Python
        - `pyproject.toml` with project metadata
        - `src/<package_name>/__init__.py`
        - `tests/test_placeholder.py`
        - `.github/workflows/ci.yml` with pip install and pytest

        ### C#
        - `<ProjectName>.sln` solution file
        - `src/<ProjectName>/<ProjectName>.csproj` and `Program.cs`
        - `tests/<ProjectName>.Tests/` with test project
        - `.github/workflows/ci.yml` with dotnet build and test

        ### For ALL flavours
        - Create `.gitignore` appropriate for the language
        - Create `README.md` with project description, build instructions, and usage

        ## Verification

        After creating all files:
        1. If a build command is available, run it to verify the project compiles
        2. If a test command is available, run it to verify tests pass

        ## Important

        - Do NOT create files that already exist (check the assessment's existing_files list)
        - Use the project intent from the assessment to make README content meaningful
        - Follow standard conventions for the language ecosystem
    retry:
      policy: standard
      max_attempts: 3
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry

  - id: commit
    persona: craftsman
    dependencies: [scaffold]
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    memory:
      inject_artifacts:
        - step: assess
          artifact: assessment
          as: bootstrap_assessment
    exec:
      type: prompt
      source: |
        You are creating the initial commit for a newly scaffolded project.

        Read the assessment artifact to get the project flavour and the list of
        recommended files from `scaffold_recommendations.files_to_create`.

        ## Instructions

        1. Identify which files were actually created by the scaffold step:
           ```bash
           git status --porcelain
           ```

        2. Stage ONLY the project files explicitly — NEVER use `git add -A` or `git add .`:
           ```bash
           git add <file1> <file2> <file3> ...
           ```
           Stage every new or modified project file shown by `git status`, but
           NEVER stage any of these paths:
           - `.wave/artifacts/`
           - `.wave/output/`
           - `.claude/`
           - `CLAUDE.md`

        3. Create the initial commit with a conventional commit message:
           ```bash
           git commit -m "feat: scaffold <flavour> project"
           ```
           Replace `<flavour>` with the actual detected flavour from the assessment (e.g. "go", "rust", "node").

        4. Check if a remote is configured:
           ```bash
           git remote -v
           ```

        5. If a remote exists, push the branch:
           ```bash
           git push -u origin HEAD
           ```

        ## Important

        - Do NOT include Co-Authored-By or AI attribution in the commit message
        - NEVER use `git add -A`, `git add .`, or `git add --all` — always stage files by explicit path
        - Do NOT commit .wave/artifacts/, .wave/output/, .claude/, or CLAUDE.md
        - If git push fails (no remote, auth issues), that's OK — just report the commit was created locally
163
.wave/pipelines/ops-debug.yaml
Normal file
@@ -0,0 +1,163 @@
kind: WavePipeline
metadata:
  name: ops-debug
  description: "Systematic debugging with hypothesis testing"
  release: true

skills:
  - "{{ project.skill }}"

input:
  source: cli
  example: "TestPipelineExecutor fails with nil pointer on resume"

steps:
  - id: reproduce
    persona: debugger
    model: claude-haiku
    thread: debug
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Reproduce and characterize the issue: {{ input }}

        1. Understand expected vs actual behavior
        2. Create minimal reproduction steps
        3. Identify relevant code paths
        4. Note environmental factors (OS, versions, config)
    output_artifacts:
      - name: reproduction
        path: .wave/output/reproduction.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/reproduction.json
        schema_path: .wave/contracts/debug-reproduction.schema.json
        on_failure: retry

  - id: hypothesize
    persona: debugger
    thread: debug
    dependencies: [reproduce]
    memory:
      inject_artifacts:
        - step: reproduce
          artifact: reproduction
          as: issue
    exec:
      type: prompt
      source: |
        Form hypotheses about the root cause.

        For each hypothesis:
        1. What could cause this behavior?
        2. What evidence would confirm/refute it?
        3. How to test this hypothesis?

        Rank by likelihood and ease of testing.
    output_artifacts:
      - name: hypotheses
        path: .wave/output/hypotheses.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/hypotheses.json
        schema_path: .wave/contracts/debug-hypotheses.schema.json
        on_failure: retry

  - id: investigate
    persona: debugger
    thread: debug
    dependencies: [hypothesize]
    memory:
      inject_artifacts:
        - step: reproduce
          artifact: reproduction
          as: issue
        - step: hypothesize
          artifact: hypotheses
          as: hypotheses
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Test each hypothesis systematically.

        1. Start with most likely / easiest to test
        2. Use git bisect if needed to find regression
        3. Add diagnostic logging to trace execution
        4. Examine data flow and state changes
        5. Document findings for each hypothesis

        Continue until root cause is identified.
    output_artifacts:
      - name: findings
        path: .wave/output/investigation.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/investigation.md

  - id: fix
    persona: craftsman
    dependencies: [investigate]
    thread: debug
    max_visits: 3
    memory:
      inject_artifacts:
        - step: investigate
          artifact: findings
          as: root_cause
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Fix the root cause identified in the investigation.

        1. Implement the minimal fix
        2. Add a regression test that would have caught this
        3. Remove any diagnostic code added during debugging
        4. Verify the original reproduction no longer fails
        5. Check for similar issues elsewhere
    retry:
      policy: aggressive
      max_attempts: 3
    output_artifacts:
      - name: fix
        path: .wave/output/fix-summary.md
        type: markdown

  - id: run-tests
    type: command
    dependencies: [fix]
    script: "{{ project.contract_test_command }}"

  - id: verify-fix
    type: conditional
    dependencies: [run-tests]
    edges:
      - target: _complete
        condition: "outcome=success"
      - target: fix
26
.wave/pipelines/ops-epic-runner.yaml
Normal file
@@ -0,0 +1,26 @@
kind: WavePipeline
metadata:
  name: ops-epic-runner
  description: "Scope an epic, implement each child issue sequentially"
  category: composition
  release: true

input:
  source: cli
  example: "re-cinq/wave 42"
  schema:
    type: string
    description: "GitHub epic reference (owner/repo number)"

steps:
  - id: scope
    pipeline: plan-scope
    input: "{{input}}"

  - id: implement-all
    dependencies: [scope]
    pipeline: impl-speckit
    input: "{{item.url}} — child of {{input}}, see parent for full context"
    iterate:
      over: "{{scope.output.child_issues}}"
      mode: sequential
54
.wave/pipelines/ops-hello-world.yaml
Normal file
@@ -0,0 +1,54 @@
kind: WavePipeline
metadata:
  name: ops-hello-world
  description: "Simple test pipeline to verify Wave is working"
  release: true

input:
  source: cli
  example: "testing Wave"

steps:
  - id: greet
    persona: craftsman
    exec:
      type: prompt
      source: |
        You are a simple greeting bot. The user said: "{{ input }}"

        Your final response must be ONLY this text (nothing else - no explanation, no markdown):

        Hello from Wave! Your message was: {{ input }}
    output_artifacts:
      - name: greeting
        path: greeting.txt
        type: text

  - id: verify
    persona: navigator
    dependencies: [greet]
    memory:
      inject_artifacts:
        - step: greet
          artifact: greeting
          as: greeting_file
    exec:
      type: prompt
      source: |
        Verify the greeting artifact exists and contains content.

        Output a JSON result confirming verification status.
    output_artifacts:
      - name: result
        path: .wave/output/result.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/result.json
        schema_path: .wave/contracts/hello-world-result.schema.json
        must_pass: true
        on_failure: retry
146
.wave/pipelines/ops-implement-epic.yaml
Normal file
@@ -0,0 +1,146 @@
# Epic Runner — Composition Primitives Example
#
# Demonstrates: iterate (sequential), gate (ci_pass), forge template variables
#
# Execution flow:
#
#   fetch-children       ← persona step: fetch parent issue, emit child URLs as JSON
#     │
#   implement-children   ← iterate (sequential): runs impl-issue for each child URL
#     │
#   ci-gate              ← gate (ci_pass): block until CI is green across all PRs
#     │
#   summarise            ← persona step: post a completion comment on the epic

kind: WavePipeline
metadata:
  name: ops-implement-epic
  description: "Implement all child issues from a parent epic, gate on CI, then summarise"
  category: composition
  release: true

requires:
  tools:
    - gh

input:
  source: cli
  example: "https://github.com/re-cinq/wave/issues/184"
  schema:
    type: string
    description: "Parent epic URL (e.g. https://github.com/owner/repo/issues/N)"

steps:
  # ── Step 1: fetch parent issue, emit child issue URLs ─────────────────────
  - id: fetch-children
    persona: "gitea-analyst"
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Input: {{ input }}

        You are given a parent epic URL. Your job is to extract all child issue URLs
        listed in the epic body.

        1. Parse the URL to get repo (owner/repo) and issue number.
        2. Fetch the issue:
           {{ forge.cli_tool }} issue view <NUMBER> --repo <REPO> \
             --json number,title,body,url
        3. Scan the body for linked child issues. Look for patterns like:
           - Checkbox lists: `- [ ] #123` or `- [ ] https://github.com/…/issues/123`
           - "Closes #N" / "Part of #N" references
           - Bulleted task lists pointing to issue URLs
        4. For each child issue URL found, verify it is open:
           {{ forge.cli_tool }} issue view <NUMBER> --repo <REPO> --json state,url
           Include only open issues.
        5. Output a JSON object with:
           {
             "parent_url": "<epic_url>",
             "repo": "<owner/repo>",
             "child_urls": ["https://…/issues/N", …]
           }

        Write this JSON to .wave/output/epic-children.json.
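        The scanning in step 3 can be sketched as follows. This is an
        illustration of the reference patterns, not the pipeline's actual
        parser; the helper name and regexes are assumptions.

        ```python
        import re

        # Match full GitHub issue URLs and short "#123" style references.
        URL_REF = re.compile(r"https://github\.com/([\w.-]+/[\w.-]+)/issues/(\d+)")
        SHORT_REF = re.compile(r"#(\d+)")

        def extract_child_urls(body, repo):
            urls = []
            for owner_repo, num in URL_REF.findall(body):
                urls.append(f"https://github.com/{owner_repo}/issues/{num}")
            # Strip full URLs first so their digits are not re-matched as "#N".
            for num in SHORT_REF.findall(URL_REF.sub("", body)):
                urls.append(f"https://github.com/{repo}/issues/{num}")
            # De-duplicate while preserving order of first appearance.
            return list(dict.fromkeys(urls))
        ```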
    output_artifacts:
      - name: children
        path: .wave/output/epic-children.json
        type: json
        required: true
    retry:
      policy: aggressive
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/epic-children.json
        schema_path: .wave/contracts/epic-children.schema.json
        must_pass: true
        on_failure: retry

  # ── Step 2: iterate over child issues, run impl-issue for each ───────────
  - id: implement-children
    dependencies: [fetch-children]
    pipeline: impl-issue
    input: "{{ item }}"
    iterate:
      over: "{{ fetch-children.output.child_urls }}"
      mode: sequential
      max_concurrent: 3

  # ── Step 3: gate — wait for CI to pass across all opened PRs ─────────────
  - id: ci-gate
    dependencies: [implement-children]
    gate:
      type: ci_pass
      timeout: 2h
      message: "Waiting for CI to pass on all PRs opened by child-issue implementations"

  # ── Step 4: post a summary comment on the parent epic ────────────────────
  - id: summarise
    persona: "gitea-commenter"
    dependencies: [ci-gate]
    memory:
      inject_artifacts:
        - step: fetch-children
          artifact: children
          as: epic_children
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Input: {{ input }}

        All child issues have been implemented and CI has passed.

        Read .wave/artifacts/epic_children to get the parent epic URL and repo.

        Post a completion comment on the parent epic:
          {{ forge.cli_tool }} issue comment <NUMBER> --repo <REPO> --body-file /tmp/summary.md

        The comment should include:
        - A completion header: "All child issues implemented — CI green"
        - A checkbox list of every child URL that was processed (mark each done)
        - A note to reviewers about next steps (merge the PRs in dependency order)

        Write the comment body to /tmp/summary.md before posting.
    output_artifacts:
      - name: epic-summary
        path: .wave/output/epic-summary.json
        type: json
    outcomes:
      - type: url
        extract_from: .wave/output/epic-summary.json
        json_path: .comment_url
        label: "Epic Summary Comment"
    retry:
      policy: aggressive
      max_attempts: 2
195
.wave/pipelines/ops-issue-quality.yaml
Normal file
@@ -0,0 +1,195 @@
kind: WavePipeline
metadata:
  name: ops-issue-quality
  description: "Scan open issues for quality and post enhancement suggestions on poor-scoring ones"
  release: true

skills:
  - gh-cli

requires:
  tools:
    - gh

input:
  source: cli
  example: "re-cinq/wave"
  schema:
    type: string
    description: "Repository in owner/repo format"

steps:
  - id: scan
    persona: "gitea-analyst"
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Scan all open issues in the repository and produce a quality report.

        Repository: {{ input }}

        Split the input on "/" to get OWNER and REPO.

        ## Fetch Open Issues

        ```bash
        {{ forge.cli_tool }} issue list --repo {{ input }} --state open --limit 200 \
          --json number,title,body,labels,url,assignees,milestone,createdAt,updatedAt,comments
        ```

        ## Analyze Each Issue

        For every issue (skip pull requests — items where the URL contains "/pull/"), evaluate:

        1. **Title quality**
           - Too short (< 10 chars): -20 points
           - All lowercase (> 5 chars): -5 points
           - All uppercase (> 5 chars): -10 points
           - Vague terms ("issue", "problem", "help", "bug?", "question") with title < 30 chars: -10 points
           - Fewer than 3 words: -15 points

        2. **Body quality**
           - Empty body: -40 points
           - Body < 50 chars: -25 points
           - Body < 100 chars: -10 points
           - Single sentence (< 2 sentence-ending punctuation marks) with body > 20 chars: -10 points
           - No structured sections (missing keywords like "steps to reproduce", "expected behavior", "actual behavior", "environment", "version", "screenshot", "reproduction") with body > 100 chars: -5 points

        3. **Labels**
           - No labels: -10 points

        Start every issue at score 100 and apply deductions. Floor at 0.
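        A partial sketch of these rules, covering the title, body-length, and
        label deductions. One assumption it makes that the rules above leave
        open: the body-length tiers are treated as exclusive, so only the
        worst applicable body deduction is taken.

        ```python
        VAGUE = ("issue", "problem", "help", "bug?", "question")

        def quality_score(title, body, labels):
            score = 100
            if len(title) < 10:
                score -= 20
            if len(title) > 5 and title == title.lower():
                score -= 5
            if len(title) > 5 and title == title.upper():
                score -= 10
            if len(title) < 30 and any(v in title.lower() for v in VAGUE):
                score -= 10
            if len(title.split()) < 3:
                score -= 15
            # Body-length tiers: apply only the single worst match.
            if not body:
                score -= 40
            elif len(body) < 50:
                score -= 25
            elif len(body) < 100:
                score -= 10
            if not labels:
                score -= 10
            return max(score, 0)
        ```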
## Output
|
||||
|
||||
Write a JSON file to `.wave/artifacts/quality-report.json` with this structure:
|
||||
|
||||
```json
|
||||
{
|
||||
"repository": {
|
||||
"owner": "<OWNER>",
|
||||
"name": "<REPO>"
|
||||
},
|
||||
"total_issues": <count of issues fetched>,
|
||||
"analyzed_count": <count of issues analyzed (excluding PRs)>,
|
||||
"poor_quality_issues": [
|
||||
{
|
||||
"number": <issue number>,
|
||||
"title": "<issue title>",
|
||||
"body": "<issue body — truncate at 500 chars>",
|
||||
"quality_score": <0-100>,
|
||||
"problems": ["<problem description>", ...],
|
||||
"recommendations": ["<recommendation>", ...],
|
||||
"labels": ["<label name>", ...],
|
||||
"url": "<issue HTML URL>"
|
||||
}
|
||||
],
|
||||
"quality_threshold": 70,
|
||||
"timestamp": "<ISO 8601 timestamp>"
|
||||
}
|
||||
```
|
||||
|
||||
Include ALL issues with quality_score < 70 in `poor_quality_issues`, sorted by quality_score ascending (worst first).
|
||||
output_artifacts:
|
||||
- name: quality-report
|
||||
path: .wave/artifacts/quality-report.json
|
||||
type: json
|
||||
required: true
|
||||
retry:
|
||||
policy: patient
|
||||
max_attempts: 2
|
||||
handover:
|
||||
contract:
|
||||
type: json_schema
|
||||
source: .wave/artifacts/quality-report.json
|
||||
schema_path: .wave/contracts/github-issue-analysis.schema.json
|
||||
on_failure: retry
|
||||
|
||||
- id: enhance
|
||||
persona: "gitea-commenter"
|
||||
dependencies: [scan]
|
||||
memory:
|
||||
inject_artifacts:
|
||||
- step: scan
|
||||
artifact: quality-report
|
||||
as: quality_report
|
||||
workspace:
|
||||
mount:
|
||||
- source: ./
|
||||
target: /project
|
||||
mode: readonly
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Post enhancement suggestions as comments on the worst-scoring issues.
|
||||
|
||||
Repository: {{ input }}
|
||||
|
||||
## Read Quality Report
|
||||
|
||||
Read the injected quality_report artifact. It contains the full quality analysis with per-issue scores, problems, and recommendations.
|
||||
|
||||
## Select Issues to Enhance
|
||||
|
||||
From `poor_quality_issues`, select issues where `quality_score < 50`. Process at most 10 issues to avoid comment spam.
|
||||
|
||||
If no issues have score < 50, check issues with score < 70 and select the 5 worst.
|
||||
|
||||
If there are no poor quality issues at all, write a summary noting the repository has good issue quality and skip posting.

## Post Enhancement Comment

For each selected issue, compose a helpful comment and post it:

```bash
cat > /tmp/issue-enhance-<NUMBER>.md <<'COMMENT_EOF'
## Issue Quality Suggestions

This issue has been automatically reviewed for clarity and completeness.

**Quality Score**: <score>/100

**Problems identified**:
<bulleted list of problems>

**Recommendations**:
<bulleted list of recommendations>

Consider updating this issue with the suggestions above to help maintainers triage and implement it more effectively.

---
*Posted by [Wave](https://github.com/re-cinq/wave) ops-issue-quality pipeline*
COMMENT_EOF
{{ forge.cli_tool }} issue comment <NUMBER> --repo {{ input }} --body-file /tmp/issue-enhance-<NUMBER>.md
```

Clean up temp files after posting each comment.

## Write Summary

After processing all issues, write `.wave/artifacts/enhancement-summary.md` with:

1. **Repository**: owner/repo
2. **Issues Scanned**: total count
3. **Poor Quality Issues**: count below threshold
4. **Comments Posted**: count and issue numbers
5. **Skipped**: count of issues not commented on and reason (score >= 50, or batch limit reached)
6. For each commented issue: number, title, score, and a brief note on the main problems addressed
output_artifacts:
- name: enhancement-summary
path: .wave/artifacts/enhancement-summary.md
type: markdown
required: true
retry:
policy: standard
max_attempts: 2
handover:
contract:
type: non_empty_file
source: .wave/artifacts/enhancement-summary.md
101
.wave/pipelines/ops-parallel-audit.yaml
Normal file
@@ -0,0 +1,101 @@
# Parallel Multi-Audit — Composition Primitives Example
#
# Demonstrates: iterate (parallel), aggregate (merge_arrays)
#
# Execution flow:
#
# run-audits ← iterate (parallel, max_concurrent: 3): fan out over
# ├─ audit-security │ [security, dead-code, dx] — each runs its audit
# ├─ audit-dead-code │ pipeline and emits findings JSON
# └─ audit-dx │
# │
# merge-findings ← aggregate (merge_arrays): collect all findings arrays
# │ into one unified JSON list
# report ← persona step: synthesise the merged findings into a
# single prioritised markdown report

kind: WavePipeline
metadata:
name: ops-parallel-audit
description: "Fan out security, dead-code, and DX audits in parallel then merge findings"
category: composition
release: true

skills:
- software-design

input:
source: cli
example: "internal/pipeline"
schema:
type: string
description: "Scope to audit — a package path, directory, or free-form description"

steps:
# ── Step 1: fan out over the three audit types in parallel ────────────────
#
# Each item is the name of an existing audit pipeline. The iterate primitive
# spawns up to 3 workers simultaneously, one per audit type.
- id: run-audits
pipeline: "{{ item }}"
input: "{{ input }}"
iterate:
over: '["audit-security", "audit-dead-code", "audit-dx"]'
mode: parallel
max_concurrent: 3

# ── Step 2: merge all findings arrays into one flat list ─────────────────
#
# The aggregate primitive reads the step outputs collected by run-audits and
# flattens the three JSON arrays into a single findings array written to disk.
- id: merge-findings
dependencies: [run-audits]
aggregate:
from: "{{ run-audits.output }}"
into: .wave/output/merged-findings.json
strategy: merge_arrays

# ── Step 3: synthesise a unified markdown report ──────────────────────────
- id: report
persona: summarizer
dependencies: [merge-findings]
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
Input scope: {{ input }}

Three parallel audits have completed — security, dead-code, and DX.
Their findings have been merged into .wave/output/merged-findings.json.

Read that file, then produce a unified markdown report:

## Executive Summary
Overall health rating (excellent / good / needs-work / critical) with one
paragraph of justification.

## Critical Findings (severity: critical or high)
For each: finding title, audit source, file/line if available, recommended fix.

## All Findings by Priority
A table with columns: Priority | Source | Finding | File | Recommendation

## Positive Observations
Anything that looked unexpectedly clean or well-maintained.

## Recommended Next Steps
Ordered action list — highest impact first.

Write the report to .wave/output/parallel-audit-report.md.
output_artifacts:
- name: audit-report
path: .wave/output/parallel-audit-report.md
type: markdown
handover:
contract:
type: non_empty_file
source: .wave/output/parallel-audit-report.md
242
.wave/pipelines/ops-pr-review.yaml
Normal file
@@ -0,0 +1,242 @@
kind: WavePipeline
metadata:
name: ops-pr-review
description: "Pull request code review with automated security and quality analysis"
release: true

chat_context:
artifact_summaries:
- diff
- security
- quality
- verdict
suggested_questions:
- "What issues were found in the review?"
- "Are there any blocking concerns that must be addressed before merging?"
- "What should be fixed before the next review cycle?"
focus_areas:
- "Review findings and severity"
- "Code quality and maintainability"
- "Security concerns and vulnerabilities"

skills:
- "{{ project.skill }}"
- gh-cli
- software-design

requires:
tools:
- gh

input:
source: cli
example: "https://github.com/owner/repo/pull/42"

steps:
- id: diff-analysis
persona: navigator
model: claude-haiku
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
Analyze the code changes for: {{ input }}

## Step 1: Extract PR Metadata

First, fetch PR metadata to populate the `pr_metadata` field in the output:
```bash
{{ forge.cli_tool }} {{ forge.pr_command }} view {{ input }} --json number,headRefName,baseRefName,url
```
Extract the number, url, headRefName (→ head_branch), and baseRefName (→ base_branch).
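
The field mapping can be sketched with jq (a sketch; the inline JSON stands in for the command's output, and the snake_case names come from the contract above):

```bash
# Sketch: rename the camelCase JSON fields to the contract's snake_case names
pr_json='{"number":42,"url":"https://github.com/owner/repo/pull/42","headRefName":"feat/x","baseRefName":"main"}'
mapped=$(printf '%s' "$pr_json" | jq '{number, url, head_branch: .headRefName, base_branch: .baseRefName}')
printf '%s\n' "$mapped"
```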

## Step 2: Checkout the PR Branch

Checkout the PR's head branch to ensure you analyze the correct code:
```bash
{{ forge.cli_tool }} {{ forge.pr_command }} checkout {{ input }}
```

## Step 3: Analyze Changes

1. Identify all modified files and their purposes
2. Map the change scope (which modules/packages affected)
3. Find related tests that should be updated
4. Check for breaking API changes

Produce a structured result matching the contract schema.
The `pr_metadata` field must contain the PR number, URL, head branch, and base branch.
output_artifacts:
- name: diff
path: .wave/output/diff-analysis.json
type: json
retry:
policy: patient
max_attempts: 2
handover:
contract:
type: json_schema
source: .wave/output/diff-analysis.json
schema_path: .wave/contracts/diff-analysis.schema.json
on_failure: retry

- id: security-review
persona: reviewer
dependencies: [diff-analysis]
memory:
inject_artifacts:
- step: diff-analysis
artifact: diff
as: changes
exec:
type: prompt
source: |
Security review of the PR changes.

Check for:
1. SQL injection, XSS, CSRF vulnerabilities
2. Hardcoded secrets or credentials
3. Insecure deserialization
4. Missing input validation
5. Authentication/authorization gaps
6. Sensitive data exposure

Output findings with severity (CRITICAL/HIGH/MEDIUM/LOW).
output_artifacts:
- name: security
path: .wave/output/security-review.md
type: markdown
handover:
contract:
type: llm_judge
source: .wave/output/security-review.md
model: claude-haiku
criteria:
- "Identifies injection vulnerabilities (SQL, command, XSS) if present in the diff"
- "Checks for hardcoded credentials or secrets"
- "Assesses authentication and authorization correctness"
- "Findings include severity levels and specific file references"
threshold: 0.75
on_failure: continue

- id: quality-review
persona: reviewer
dependencies: [diff-analysis]
memory:
inject_artifacts:
- step: diff-analysis
artifact: diff
as: changes
exec:
type: prompt
source: |
Quality review of the PR changes.

Check for:
1. Error handling completeness
2. Edge cases not covered
3. Code duplication
4. Naming consistency
5. Missing or inadequate tests
6. Performance implications
7. Documentation gaps

Output findings with severity and suggestions.
output_artifacts:
- name: quality
path: .wave/output/quality-review.md
type: markdown
handover:
contract:
type: non_empty_file
source: .wave/output/quality-review.md

- id: summary
persona: summarizer
model: claude-haiku
dependencies: [security-review, quality-review]
memory:
inject_artifacts:
- step: security-review
artifact: security
as: security_findings
- step: quality-review
artifact: quality
as: quality_findings
exec:
type: prompt
source: |
Synthesize the review findings into a final verdict.

Produce a unified review with:
1. Overall assessment (APPROVE / REQUEST_CHANGES / NEEDS_DISCUSSION)
2. Critical issues that must be fixed
3. Suggested improvements (optional but recommended)
4. Positive observations

Format as a PR review comment ready to post.
Do NOT include a title/header line — the publish step adds one.
output_artifacts:
- name: verdict
path: .wave/output/review-summary.md
type: markdown
handover:
contract:
type: non_empty_file
source: .wave/output/review-summary.md

- id: publish
persona: "gitea-commenter"
dependencies: [summary]
memory:
inject_artifacts:
- step: summary
artifact: verdict
as: review_summary
- step: diff-analysis
artifact: diff
as: pr_context
exec:
type: prompt
source: |
Post the code review summary as a PR comment.

Read the `pr_context` artifact first — it contains structured PR metadata
with `pr_metadata.number` and `pr_metadata.url`. Use these to identify the
target PR instead of parsing raw input text.

1. Read the `pr_context` artifact and extract `pr_metadata.number` for use in commands
2. Write the review content to a temp file, then post it as a PR comment:
cat > /tmp/pr-review-comment.md <<'REVIEW_EOF'
## Code Review (Wave Pipeline)

<review content>

---
*Generated by [Wave](https://github.com/re-cinq/wave) pr-review pipeline*
REVIEW_EOF
{{ forge.cli_tool }} {{ forge.pr_command }} comment <PR_NUMBER from pr_metadata> --body-file /tmp/pr-review-comment.md

output_artifacts:
- name: publish-result
path: .wave/output/publish-result.json
type: json
retry:
policy: aggressive
max_attempts: 2
handover:
contract:
type: json_schema
source: .wave/output/publish-result.json
schema_path: .wave/contracts/gh-pr-comment-result.schema.json
must_pass: true
on_failure: retry
outcomes:
- type: url
extract_from: .wave/output/publish-result.json
json_path: .comment_url
label: "Review Comment"
186
.wave/pipelines/ops-refresh.yaml
Normal file
@@ -0,0 +1,186 @@
kind: WavePipeline
metadata:
name: ops-refresh
description: "Refresh a stale issue by comparing it against recent codebase changes"
release: true

skills:
- gh-cli
- software-design

requires:
tools:
- gh

input:
source: cli
example: "re-cinq/wave 45 -- acceptance criteria are outdated after the worktree refactor"
schema:
type: string
description: "owner/repo number, or full issue URL [-- optional criticism or direction]"

steps:
- id: gather-context
persona: "gitea-analyst"
model: claude-haiku
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
Input: {{ input }}

Parse the input:
- If the input is a URL (https://github.com/OWNER/REPO/issues/NUM), extract REPO and NUMBER.
- If the input is "owner/repo number", extract REPO (first token) and NUMBER (second token).
- Split on " -- " to separate the identifier from optional criticism.
- If there is text after " -- ", that is the user's CRITICISM about what's wrong with the issue.
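
The parsing rules above can be sketched in bash (a sketch with hypothetical variable names; the example input is the one from this pipeline's `example` field):

```bash
# Sketch: split "owner/repo number -- criticism" (or a URL) into its parts
raw='re-cinq/wave 45 -- acceptance criteria are outdated after the worktree refactor'
ident=${raw%% -- *}        # text before " -- " (whole input when absent)
criticism=${raw#* -- }     # text after " -- " (equals $raw when absent)
[ "$criticism" = "$raw" ] && criticism=''
case $ident in
  https://*) repo=$(printf '%s' "$ident" | cut -d/ -f4-5)   # OWNER/REPO from the URL
             number=${ident##*/} ;;                          # trailing issue number
  *)         repo=${ident%% *}                               # first token
             number=${ident##* } ;;                          # second token
esac
echo "repo=$repo number=$number"
```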

Execute these steps:

1. Fetch the full issue:
{{ forge.cli_tool }} issue view NUMBER --repo REPO --json number,title,body,labels,url,createdAt,comments

2. Get recent commits since the issue was created (cap at 30):
git log --since="<createdAt>" --oneline -30

3. Get releases since the issue was created:
{{ forge.cli_tool }} release list --repo REPO --limit 20
Filter to only releases after the issue's createdAt date.

4. Scan the issue body for file path references (backtick-quoted paths, relative paths).
For each referenced file, check if it still exists.
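
Step 4 can be sketched as (a sketch; `issue-body.md` is a hypothetical dump of the issue body, and the paths in it are examples):

```bash
# Sketch: extract backtick-quoted paths and report any that no longer exist
printf 'See `cmd/wave/main.go` and `docs/gone.md` for details.\n' > issue-body.md
missing=$(grep -o '`[^`]*`' issue-body.md | tr -d '`' | while read -r p; do
  [ -e "$p" ] || echo "missing: $p"
done)
printf '%s\n' "$missing"
```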

Produce a JSON result matching the contract schema.
output_artifacts:
- name: issue_context
path: .wave/artifact.json
type: json
required: true
retry:
policy: none
max_attempts: 1
handover:
contract:
type: json_schema
schema_path: .wave/contracts/issue-update-context.schema.json
validate: true
must_pass: true
on_failure: retry
allow_recovery: true
recovery_level: progressive
progressive_validation: false

- id: draft-update
persona: "gitea-analyst"
dependencies: [gather-context]
memory:
inject_artifacts:
- step: gather-context
artifact: issue_context
as: context
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
The context artifact contains the gathered issue context.

Compare the original issue against the codebase changes and draft an updated version.

Step 1: Analyze each section of the issue body. Classify each as:
- STILL_VALID: Content is accurate and up-to-date
- OUTDATED: Content references old behavior, removed files, or superseded patterns
- INCOMPLETE: Content is partially correct but missing recent developments
- WRONG: Content is factually incorrect given current codebase state

Step 2: If there is user criticism (non-empty "criticism" field), address EVERY point raised.
The criticism takes priority — it represents what the issue author thinks is wrong.

Step 3: Draft the updated issue:
- Preserve sections classified as STILL_VALID (do not rewrite what works)
- Rewrite OUTDATED and WRONG sections to reflect current reality
- Expand INCOMPLETE sections with missing information
- If the title needs updating, draft a new title
- Append a "---\n**Changes since original**" section at the bottom listing what changed and why

Step 4: If file paths in the issue body are now missing (from referenced_files.missing),
update or remove those references.

Produce a JSON result matching the contract schema.
output_artifacts:
- name: update_draft
path: .wave/artifact.json
type: json
required: true
retry:
policy: none
max_attempts: 1
handover:
contract:
type: json_schema
schema_path: .wave/contracts/issue-update-draft.schema.json
validate: true
must_pass: true
on_failure: retry
allow_recovery: true
recovery_level: progressive
progressive_validation: false

- id: apply-update
persona: "gitea-enhancer"
dependencies: [draft-update]
memory:
inject_artifacts:
- step: draft-update
artifact: update_draft
as: draft
- step: gather-context
artifact: issue_context
as: context
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
Extract the repo and issue number from the available artifacts.

Apply the update:
- If title_changed is true:
{{ forge.cli_tool }} issue edit <NUMBER> --repo <REPO> --title '<updated_title>'
- Write the updated_body to a temp file, then apply it:
{{ forge.cli_tool }} issue edit <NUMBER> --repo <REPO> --body-file <temp_file>
- Clean up the temp file after applying.

Verify the update was applied:
{{ forge.cli_tool }} issue view <NUMBER> --repo <REPO> --json number,title,body,url

Compare the returned title and body against what was intended. Flag any discrepancies.

Record the results as a JSON object matching the contract schema.
output_artifacts:
- name: update_result
path: .wave/artifact.json
type: json
required: true
outcomes:
- type: issue
extract_from: .wave/artifact.json
json_path: .url
label: "Updated Issue"
retry:
policy: none
max_attempts: 1
handover:
contract:
type: json_schema
schema_path: .wave/contracts/issue-update-result.schema.json
validate: true
must_pass: true
on_failure: retry
allow_recovery: true
recovery_level: progressive
progressive_validation: false
35
.wave/pipelines/ops-release-harden.yaml
Normal file
@@ -0,0 +1,35 @@
kind: WavePipeline
metadata:
name: ops-release-harden
description: "Security scan, branch on severity, apply hotfixes, generate changelog"
category: composition
release: true

input:
source: cli
example: "v1.0.0"
schema:
type: string
description: "Release version or branch to harden"

steps:
- id: scan
pipeline: audit-security
input: "{{input}}"

- id: triage
dependencies: [scan]
branch:
on: "{{scan.output.risk_level}}"
cases:
critical: impl-hotfix
high: impl-hotfix
medium: impl-improve
low: skip

- id: gate
dependencies: [triage]
gate:
type: approval
message: "Review security fixes before release"
timeout: "4h"
123
.wave/pipelines/ops-rewrite.yaml
Normal file
@@ -0,0 +1,123 @@
kind: WavePipeline
metadata:
name: ops-rewrite
description: "Analyze and rewrite poorly documented issues"
release: true

skills:
- gh-cli
- software-design

requires:
tools:
- gh

input:
source: cli
example: "re-cinq/wave 42 or https://github.com/re-cinq/wave/issues/42"
schema:
type: string
description: "GitHub repo with optional issue number, or full issue URL"

steps:
- id: scan-and-score
persona: "gitea-analyst"
model: claude-haiku
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
Input: {{ input }}

Step 1: Parse the input format.
- If URL (https://github.com/OWNER/REPO/issues/NUM) → extract <REPO> and <NUM>
- If "owner/repo NUM" → extract <REPO> and <NUM>
- If "owner/repo" alone → batch mode, use {{ input }} as <REPO>

Step 2: Fetch issues via {{ forge.cli_tool }}.
- Single: {{ forge.cli_tool }} issue view <NUM> --repo <REPO> --json number,title,body,labels,url
- Batch: {{ forge.cli_tool }} issue list --repo {{ input }} --limit 10 --json number,title,body,labels,url

IMPORTANT: If a specific issue number was parsed from the input but the fetch
fails (not found, permissions error), STOP. Output JSON with repository set
to <REPO>, issues_to_enhance as empty array, and total_to_enhance: 0.
Do NOT fall back to batch mode when a specific issue was requested.

Step 3: Score each issue quality (0-100) on title clarity, description completeness, labels, and acceptance criteria.

Step 4: For issues scoring below 70, create an enhancement plan with:
- suggested_title, body_template (preserving original content), suggested_labels, enhancements list

Output JSON with repository (owner/repo string), issues_to_enhance array, and total_to_enhance.
output_artifacts:
- name: enhancement_plan
path: .wave/artifact.json
type: json
required: true
retry:
policy: none
max_attempts: 1
handover:
contract:
type: json_schema
schema_path: .wave/contracts/enhancement-plan.schema.json
validate: true
must_pass: true
on_failure: retry
allow_recovery: true
recovery_level: progressive
progressive_validation: false

- id: apply-enhancements
persona: "gitea-enhancer"
dependencies: [scan-and-score]
memory:
inject_artifacts:
- step: scan-and-score
artifact: enhancement_plan
as: impl_plan
workspace:
type: worktree
branch: "{{ pipeline_id }}"
exec:
type: prompt
source: |
Read the "repository" field from the plan artifact to get <REPO>.

For each issue in issues_to_enhance:
1. Apply title: {{ forge.cli_tool }} issue edit <NUM> --repo <REPO> --title 'suggested_title'
2. Apply body — write to temp file first, then apply:
cat > /tmp/issue-body.md <<'BODY_EOF'
<body_template content>
BODY_EOF
{{ forge.cli_tool }} issue edit <NUM> --repo <REPO> --body-file /tmp/issue-body.md
3. Add labels: {{ forge.cli_tool }} issue edit <NUM> --repo <REPO> --add-label "label1,label2"
4. Capture URL: {{ forge.cli_tool }} issue view <NUM> --repo <REPO> --json url --jq .url

Output JSON with enhanced_issues (issue_number, success, changes_made, url),
total_attempted, total_successful.
output_artifacts:
- name: enhancement_results
path: .wave/artifact.json
type: json
required: true
outcomes:
- type: issue
extract_from: .wave/artifact.json
json_path: .enhanced_issues[*].url
label: "Enhanced Issue"
retry:
policy: none
max_attempts: 1
handover:
contract:
type: json_schema
schema_path: .wave/contracts/enhancement-results.schema.json
validate: true
must_pass: true
on_failure: retry
allow_recovery: true
recovery_level: progressive
progressive_validation: false
178
.wave/pipelines/ops-supervise.yaml
Normal file
@@ -0,0 +1,178 @@
kind: WavePipeline
metadata:
name: ops-supervise
description: "Review work quality and process quality, including claudit session transcripts"
release: true

input:
source: cli
example: "review the last pipeline run for quality and process issues"

steps:
- id: gather
persona: supervisor
model: claude-haiku
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
Gather evidence for supervision of: {{ input }}

## Smart Input Detection

Determine what to inspect based on the input:
- **Empty or "last pipeline run"**: Find the most recent pipeline run via `.wave/workspaces/` timestamps and recent git activity
- **"current pr" or "PR #N"**: Inspect the current or specified pull request (`git log`, `{{ forge.cli_tool }} {{ forge.pr_command }} view`)
- **Branch name**: Inspect all commits on that branch vs main
- **Free-form description**: Use grep/git log to find relevant recent work
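
Locating the most recent run can be sketched as (a sketch that seeds two hypothetical run directories; it assumes `.wave/workspaces/` holds one directory per pipeline run):

```bash
# Sketch: newest workspace directory by modification time
mkdir -p .wave/workspaces/run-old .wave/workspaces/run-new   # hypothetical runs
touch -t 202501010000 .wave/workspaces/run-old               # backdate the older run
latest=$(ls -1t .wave/workspaces/ | head -n 1)
echo "inspecting: .wave/workspaces/$latest"
```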
|
||||
|
||||
## Evidence Collection
|
||||
|
||||
1. **Git history**: Recent commits with diffs (`git log --stat`, `git diff`)
|
||||
2. **Session transcripts**: Check for claudit git notes (`git notes show <commit>` for each relevant commit). Summarize what happened in each session — tool calls, approach taken, detours, errors
|
||||
3. **Pipeline artifacts**: Scan `.wave/workspaces/` for the relevant pipeline run. List all output artifacts and their contents
|
||||
4. **Test state**: Run the project's test suite to capture current test status
|
||||
5. **Branch/PR context**: Branch name, ahead/behind status, PR state if applicable
|
||||
|
||||
## Output
|
||||
|
||||
Produce a comprehensive evidence bundle as structured JSON. Include all raw
|
||||
evidence — the evaluation step will interpret it.
|
||||
|
||||
Be thorough in transcript analysis — the process quality evaluation depends
|
||||
heavily on understanding what the agent actually did vs what it should have done.
|
||||
output_artifacts:
|
||||
- name: evidence
|
||||
path: .wave/output/supervision-evidence.json
|
||||
type: json
|
||||
retry:
|
||||
policy: patient
|
||||
max_attempts: 2
|
||||
handover:
|
||||
contract:
|
||||
type: json_schema
|
||||
source: .wave/output/supervision-evidence.json
|
||||
schema_path: .wave/contracts/supervision-evidence.schema.json
|
||||
on_failure: retry
|
||||
|
||||
- id: evaluate
|
||||
persona: supervisor
|
||||
dependencies: [gather]
|
||||
memory:
|
||||
inject_artifacts:
|
||||
- step: gather
|
||||
artifact: evidence
|
||||
as: evidence
|
||||
workspace:
|
||||
mount:
|
||||
- source: ./
|
||||
target: /project
|
||||
mode: readonly
|
||||
exec:
|
||||
type: prompt
|
||||
source: |
|
||||
Evaluate the work quality based on gathered evidence.
|
||||
|
||||
The gathered evidence has been injected into your workspace. Read it first.
|
||||
|
||||
## Output Quality Assessment
|
||||
|
||||
For each dimension, score as excellent/good/adequate/poor with specific findings:
|
||||
|
||||
1. **Correctness**: Does the code do what was intended? Check logic, edge cases, error handling
|
||||
2. **Completeness**: Are all requirements addressed? Any gaps or TODOs left?
|
||||
3. **Test coverage**: Are changes adequately tested? Run targeted tests if needed
|
||||
4. **Code quality**: Does it follow project conventions? Clean abstractions? Good naming?
|
||||
|
||||
## Process Quality Assessment
|
||||
|
||||
Using the session transcripts from the evidence:
|
||||
|
||||
1. **Efficiency**: Was the approach direct? Count unnecessary file reads, repeated searches, abandoned approaches visible in transcripts
|
||||
2. **Scope discipline**: Did the agent stay on task? Flag any scope creep — changes unrelated to the original goal
|
||||
3. **Tool usage**: Were the right tools used? (e.g., Read vs Bash cat, Glob vs find)
|
||||
4. **Token economy**: Was the work concise or bloated? Excessive context gathering? Redundant operations?
|
||||
|
||||
## Synthesis
|
||||
|
||||
- Overall score (excellent/good/adequate/poor)
|
||||
- Key strengths (what went well)
|
||||
- Key concerns (what needs attention)
|
||||
|
||||
Produce the evaluation as a structured JSON result.
|
||||
output_artifacts:
|
||||
- name: evaluation
|
||||
path: .wave/output/supervision-evaluation.json
|
||||
type: json
|
||||
retry:
|
||||
policy: patient
|
||||
max_attempts: 2
|
||||
handover:
|
||||
      contract:
        type: json_schema
        source: .wave/output/supervision-evaluation.json
        schema_path: .wave/contracts/supervision-evaluation.schema.json
        on_failure: retry

  - id: verdict
    persona: reviewer
    dependencies: [evaluate]
    memory:
      inject_artifacts:
        - step: gather
          artifact: evidence
          as: evidence
        - step: evaluate
          artifact: evaluation
          as: evaluation
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Synthesize a final supervision verdict.

        The gathered evidence and evaluation have been injected into your workspace.
        Read them both before proceeding.

        ## Independent Verification

        1. Run the project's test suite
        2. Cross-check evaluation claims against actual code
        3. Verify any specific concerns raised in the evaluation

        ## Verdict

        Issue one of:
        - **APPROVE**: Work is good quality, process was efficient. Ship it.
        - **PARTIAL_APPROVE**: Output is acceptable but process had issues worth noting for improvement.
        - **REWORK**: Significant issues found that need to be addressed before the work is acceptable.

        ## Action Items (if REWORK or PARTIAL_APPROVE)

        For each issue requiring action:
        - Specific file and line references
        - What needs to change and why
        - Priority (must-fix vs should-fix)

        ## Lessons Learned

        What should be done differently next time? Process improvements, common pitfalls observed.

        Produce the verdict as a markdown report with clear sections:
        ## Verdict, ## Output Quality, ## Process Quality, ## Action Items, ## Lessons Learned
    output_artifacts:
      - name: verdict
        path: .wave/output/supervision-verdict.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/supervision-verdict.md
237
.wave/pipelines/plan-adr.yaml
Normal file
@@ -0,0 +1,237 @@
kind: WavePipeline
metadata:
  name: plan-adr
  description: "Create an Architecture Decision Record for a design choice"
  release: true

skills:
  - software-architecture

input:
  source: cli
  example: "ADR: should we use SQLite or PostgreSQL for pipeline state?"

steps:
  - id: explore-context
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Explore the codebase to gather context for this architectural decision: {{ input }}

        ## Exploration

        1. **Understand the decision space**: What part of the system is this about?
           Find all related code, configs, and documentation.

        2. **Map current state**: How does the system work today?
           What would be affected by this decision?

        3. **Find constraints**: What technical constraints exist?
           (dependencies, performance requirements, deployment model, team skills)

        4. **Check precedents**: Are there similar decisions already made in this
           codebase? Look for ADRs, design docs, or relevant comments.

        5. **Identify stakeholders**: Which components/teams/users are affected?

        Write your findings as structured JSON.
        Include: decision_topic, current_state (description, affected_files, affected_components),
        constraints, precedents, stakeholders, and timestamp.
    output_artifacts:
      - name: context
        path: .wave/output/adr-context.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/adr-context.json
        schema_path: .wave/contracts/adr-context.schema.json
        on_failure: retry

  - id: analyze-options
    persona: planner
    model: claude-haiku
    dependencies: [explore-context]
    memory:
      inject_artifacts:
        - step: explore-context
          artifact: context
          as: decision_context
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze the options for this architectural decision.

        Original decision: {{ input }}

        ## Analysis

        For each viable option:

        1. **Describe it**: What would this option look like in practice?
        2. **Pros**: What are the benefits? Be specific to THIS project.
        3. **Cons**: What are the drawbacks? Be honest.
        4. **Effort**: How much work to implement?
        5. **Risk**: What could go wrong?
        6. **Reversibility**: How hard to undo if it's the wrong choice?
        7. **Compatibility**: How well does it fit with existing constraints?

        Write your analysis as structured JSON.
        Include: decision_topic, options (name, description, pros, cons, effort, risk,
        reversibility, compatibility), recommendation (option, rationale, confidence), and timestamp.
    output_artifacts:
      - name: options
        path: .wave/output/adr-options.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/adr-options.json
        schema_path: .wave/contracts/adr-options.schema.json
        on_failure: retry

  - id: draft-record
    persona: philosopher
    dependencies: [analyze-options]
    memory:
      inject_artifacts:
        - step: explore-context
          artifact: context
          as: decision_context
        - step: analyze-options
          artifact: options
          as: analysis
    exec:
      type: prompt
      source: |
        Draft the Architecture Decision Record using the injected context and analysis.

        Use this standard ADR format:

        # ADR-NNN: [Title]

        ## Status
        Proposed

        ## Date
        YYYY-MM-DD

        ## Context
        What is the issue that we're seeing that is motivating this decision?
        Include technical context from the codebase exploration.

        ## Decision
        What is the change that we're proposing and/or doing?
        State the recommended option clearly.

        ## Options Considered

        ### Option 1: [Name]
        Description, pros, cons.

        ### Option 2: [Name]
        Description, pros, cons.

        (etc.)

        ## Consequences

        ### Positive
        - What becomes easier or better?

        ### Negative
        - What becomes harder or worse?

        ### Neutral
        - What other changes are required?

        ## Implementation Notes
        - Key steps to implement the decision
        - Files/components that need changes
        - Migration plan if applicable

        ---

        Write clearly and concisely. The ADR should be understandable by
        someone who wasn't part of the original discussion.
    output_artifacts:
      - name: adr
        path: .wave/output/adr.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/adr.md

  - id: publish
    persona: craftsman
    dependencies: [draft-record]
    memory:
      inject_artifacts:
        - step: draft-record
          artifact: adr
          as: adr
    workspace:
      type: worktree
      branch: "docs/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        PUBLISH — commit the ADR and create a pull request.

        ## Steps

        1. Copy the ADR into the project docs:
           - Determine the next ADR number by listing existing ADR files
             (e.g., `ls docs/adr/` or similar convention)
           - Copy `.wave/artifacts/adr` to the appropriate location
             (e.g., `docs/adr/NNN-title.md`)

        2. Commit:
           ```bash
           git add docs/adr/
           git commit -m "docs: add ADR for <decision topic>"
           ```

        3. Push and create PR:
           ```bash
           git push -u origin HEAD
           {{ forge.cli_tool }} {{ forge.pr_command }} create --title "docs: ADR — <decision topic>" --body-file .wave/artifacts/adr
           ```
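        For step 1, a minimal sketch of the numbering logic (the helper name is
        hypothetical; assumes the `docs/adr/NNN-title.md` convention, adjust if this
        repo stores ADRs elsewhere):

        ```bash
        # Pick the next zero-padded ADR number from existing files (illustrative only)
        next_adr_number() {
          local dir="${1:-docs/adr}" last
          last=$(ls "$dir" 2>/dev/null | grep -oE '^[0-9]+' | sort -n | tail -1)
          # Force base 10 so numbers like "008" are not read as octal
          printf '%03d\n' $(( 10#${last:-0} + 1 ))
        }
        ```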
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    retry:
      policy: aggressive
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
    outcomes:
      - type: pr
        extract_from: .wave/output/pr-result.json
        json_path: .pr_url
        label: "Pull Request"
282
.wave/pipelines/plan-research.yaml
Normal file
@@ -0,0 +1,282 @@
kind: WavePipeline
metadata:
  name: plan-research
  description: Research an issue and post findings as a comment
  release: true

skills:
  - gh-cli
  - software-design

requires:
  tools:
    - gh

input:
  source: cli
  example: "re-cinq/wave 42"
  schema:
    type: string
    description: "GitHub repository and issue number (e.g. 'owner/repo number')"

steps:
  - id: fetch-issue
    persona: "gitea-analyst"
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Fetch the GitHub issue specified in the input: {{ input }}

        CRITICAL: If the input above is empty, blank, or missing, you MUST immediately fail.
        Write an error JSON to .wave/output/issue-content.json with issue_number: 0 and
        title: "ERROR: No input provided — expected a GitHub issue URL or owner/repo number".
        Do NOT guess, infer, or use any example. Stop here.

        Accepted input formats (parse the actual input, never fabricate values):
        - Full URL: https://github.com/owner/repo/issues/123
        - Short form: owner/repo 123

        Parse the input to extract the repository and issue number.
        Fetch the issue:

        {{ forge.cli_tool }} issue view <number> --repo <owner/repo> --json number,title,body,labels,state,author,createdAt,url,comments

        Parse the output and produce structured JSON with the issue content.
        Include repository information in the output.
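        For reference, a sketch of the two accepted parse paths (the function name is
        illustrative, not part of the pipeline):

        ```bash
        # Emit "owner/repo number" for either accepted input form; fail otherwise
        parse_issue_ref() {
          if [[ "$1" =~ ^https://github\.com/([^/]+/[^/]+)/issues/([0-9]+) ]] ||
             [[ "$1" =~ ^([^[:space:]]+/[^[:space:]]+)[[:space:]]+([0-9]+)$ ]]; then
            echo "${BASH_REMATCH[1]} ${BASH_REMATCH[2]}"
          else
            return 1
          fi
        }
        ```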
    output_artifacts:
      - name: issue-content
        path: .wave/output/issue-content.json
        type: json
    retry:
      policy: aggressive
      max_attempts: 3
    handover:
      contract:
        type: json_schema
        source: .wave/output/issue-content.json
        schema_path: .wave/contracts/issue-content.schema.json
        on_failure: retry

  - id: analyze-topics
    persona: researcher
    model: claude-haiku
    dependencies: [fetch-issue]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Analyze the GitHub issue and extract research topics.

        Identify:
        1. Key technical questions that need external research
        2. Domain concepts that require clarification
        3. External dependencies, libraries, or tools to investigate
        4. Similar problems/solutions that might provide guidance

        For each topic, provide:
        - A unique ID (TOPIC-001, TOPIC-002, etc.)
        - A clear title
        - Specific questions to answer (1-5 questions per topic)
        - Search keywords for web research
        - Priority (critical/high/medium/low based on relevance to solving the issue)
        - Category (technical/documentation/best_practices/security/performance/compatibility/other)

        Focus on topics that will provide actionable insights for the issue author.
        Limit to 10 most important topics.
    output_artifacts:
      - name: topics
        path: .wave/output/research-topics.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-topics.json
        schema_path: .wave/contracts/research-topics.schema.json
        on_failure: retry

  - id: research-topics
    persona: researcher
    model: claude-haiku
    dependencies: [analyze-topics]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
        - step: analyze-topics
          artifact: topics
          as: research_plan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Research the topics identified in the research plan.

        For each topic in the research plan:
        1. Execute web searches using the provided keywords
        2. Evaluate source credibility (official docs > authoritative > community)
        3. Extract relevant findings with key points
        4. Include direct quotes where helpful
        5. Rate your confidence in the answer (high/medium/low/inconclusive)

        For each finding:
        - Assign a unique ID (FINDING-001, FINDING-002, etc.)
        - Provide a summary (20-2000 characters)
        - List key points as bullet items
        - Include source URL, title, and type
        - Rate relevance to the topic (0-1)

        Always include source URLs for attribution.
        If a topic yields no useful results, mark confidence as "inconclusive".
        Document any gaps in the research.
    output_artifacts:
      - name: findings
        path: .wave/output/research-findings.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-findings.json
        schema_path: .wave/contracts/research-findings.schema.json
        on_failure: retry

  - id: synthesize-report
    persona: summarizer
    dependencies: [research-topics]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: original_issue
        - step: research-topics
          artifact: findings
          as: research
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Synthesize the research findings into a coherent report for the GitHub issue.

        Create a well-structured research report that includes:

        1. Executive Summary:
           - Brief overview (50-1000 chars)
           - Key findings (1-7 bullet points)
           - Primary recommendation
           - Confidence assessment (high/medium/low)

        2. Detailed Findings:
           - Organize by topic/section
           - Include code examples where relevant
           - Reference sources using SRC-### IDs

        3. Recommendations:
           - Actionable items with IDs (REC-001, REC-002, etc.)
           - Priority and effort estimates
           - Maximum 10 recommendations

        4. Sources:
           - List all sources with IDs (SRC-001, SRC-002, etc.)
           - Include URL, title, type, and reliability

        5. Pre-rendered Markdown:
           - Generate complete markdown_content field ready for GitHub comment
           - Use proper headers, bullet points, and formatting
           - Include a header: "## Research Findings (Wave Pipeline)"
           - End with sources section
    output_artifacts:
      - name: report
        path: .wave/output/research-report.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/research-report.json
        schema_path: .wave/contracts/research-report.schema.json
        on_failure: retry

  - id: post-comment
    persona: "gitea-commenter"
    dependencies: [synthesize-report]
    memory:
      inject_artifacts:
        - step: fetch-issue
          artifact: issue-content
          as: issue
        - step: synthesize-report
          artifact: report
          as: report
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Post the research report as a comment on the GitHub issue.

        Steps:
        1. Read the issue details to get the repository and issue number
        2. Read the report to get the markdown_content
        3. Write the markdown content to a file, then use {{ forge.cli_tool }} to post the comment:

           # Write to file to avoid shell escaping issues with large markdown
           cat > /tmp/comment-body.md << 'COMMENT_EOF'
           <markdown_content>
           COMMENT_EOF

           {{ forge.cli_tool }} issue comment <number> --repo <owner/repo> --body-file /tmp/comment-body.md

        4. Add a footer to the comment:
           ---
           *Generated by [Wave](https://github.com/re-cinq/wave) issue-research pipeline*

        5. Capture the result and verify success
        6. If successful, extract the comment URL from the output
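        One hedged way to do step 4 before posting, avoiding indented-heredoc pitfalls
        (path matches the example above):

        ```bash
        # Append the attribution footer to the comment body (illustrative)
        printf '\n---\n*Generated by [Wave](https://github.com/re-cinq/wave) issue-research pipeline*\n' \
          >> /tmp/comment-body.md
        ```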

        Record the result with:
        - success: true/false
        - issue_reference: issue number and repository
        - comment: id, url, body_length (if successful)
        - error: code, message, retryable (if failed)
        - timestamp: current time
    output_artifacts:
      - name: comment-result
        path: .wave/output/comment-result.json
        type: json
    outcomes:
      - type: url
        extract_from: .wave/output/comment-result.json
        json_path: .comment.url
        label: "Research Comment"
    retry:
      policy: aggressive
      max_attempts: 3
    handover:
      contract:
        type: json_schema
        source: .wave/output/comment-result.json
        schema_path: .wave/contracts/comment-result.schema.json
        on_failure: retry
187
.wave/pipelines/plan-scope.yaml
Normal file
@@ -0,0 +1,187 @@
kind: WavePipeline
metadata:
  name: plan-scope
  description: "Decompose an epic into well-scoped child issues"
  release: true

skills:
  - gh-cli
  - software-design

requires:
  tools:
    - gh

input:
  source: cli
  example: "re-cinq/wave 184"
  schema:
    type: string
    description: "GitHub repository with epic issue number (e.g. 'owner/repo 42')"

steps:
  - id: fetch-epic
    persona: "gitea-analyst"
    model: claude-haiku
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Input: {{ input }}

        Parse the input: extract the repo (first token) and the epic issue number (second token).

        Execute these steps:

        1. Fetch the epic issue with full details:
           {{ forge.cli_tool }} issue view <NUMBER> --repo <REPO> --json number,title,body,labels,url,comments,author,state

        2. List existing open issues to check for duplicates:
           {{ forge.cli_tool }} issue list --repo <REPO> --limit 50 --json number,title,labels,url

        Analyze the epic:
        - Determine if this is truly an epic/umbrella issue (contains multiple work items)
        - Identify the key themes and work areas
        - Estimate overall complexity
        - Count how many sub-issues should be created (3-10)
        - List existing issues to avoid creating duplicates
    output_artifacts:
      - name: epic_assessment
        path: .wave/artifact.json
        type: json
        required: true
    retry:
      policy: none
      max_attempts: 1
    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/epic-assessment.schema.json
        validate: true
        must_pass: true
        on_failure: retry
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: scope-and-create
    persona: "gitea-scoper"
    dependencies: [fetch-epic]
    memory:
      inject_artifacts:
        - step: fetch-epic
          artifact: epic_assessment
          as: assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        The assessment artifact contains the epic analysis. Use it to create well-scoped child issues.

        Input: {{ input }}
        Parse the repo from the input (first token).

        For each planned sub-issue, write the body to a temp file and create the issue safely:
        cat > /tmp/issue-body.md <<'ISSUE_EOF'
        <body content here>
        ISSUE_EOF
        {{ forge.cli_tool }} issue create --repo <REPO> --title '<title>' --body-file /tmp/issue-body.md --label "<labels>"

        Each sub-issue body MUST include:
        - A "Parent: #<epic_number>" reference line
        - A clear Summary section
        - Acceptance Criteria as a checkbox list
        - Dependencies on other sub-issues if applicable
        - Scope Notes for what is explicitly excluded

        After creating all issues, capture each issue's number and URL from the creation output.

        Record the results with fields: parent_issue (number, url, repository),
        created_issues (array of number, title, url, labels, success, complexity, dependencies),
        total_created, total_failed.
    output_artifacts:
      - name: scope_plan
        path: .wave/artifact.json
        type: json
        required: true
    outcomes:
      - type: issue
        extract_from: .wave/artifact.json
        json_path: .created_issues[*].url
        label: "First Sub-Issue"
    retry:
      policy: none
      max_attempts: 1
    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/scope-plan.schema.json
        validate: true
        must_pass: true
        on_failure: retry
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false

  - id: verify-report
    persona: "gitea-analyst"
    dependencies: [scope-and-create]
    memory:
      inject_artifacts:
        - step: scope-and-create
          artifact: scope_plan
          as: results
        - step: fetch-epic
          artifact: epic_assessment
          as: assessment
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Verify the created sub-issues and compile a verification report.

        Input: {{ input }}
        Parse the repo (first token) and epic number (second token).

        Step 1: For each created issue in the results, verify it exists:
        {{ forge.cli_tool }} issue view <N> --repo <REPO> --json number,title,body,labels

        Check that each issue:
        - Exists and is open
        - Has acceptance criteria in the body
        - References the parent epic

        Step 2: This step is READ-ONLY. Do NOT post comments -- the gitea-analyst
        persona does not have write permissions. Instead, include a pre-rendered markdown
        summary in the output JSON that a commenter persona could post later.

        Create a markdown summary with a checklist of all sub-issues (- [ ] #<number> <title>).
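        The checklist can be rendered mechanically from the results artifact, for
        example (assumes `jq` is available; the helper name is illustrative):

        ```bash
        # Render "- [ ] #<number> <title>" for each created sub-issue (illustrative)
        render_checklist() {
          jq -r '.created_issues[] | "- [ ] #\(.number) \(.title)"' "${1:-.wave/artifact.json}"
        }
        ```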

        Step 3: Compile the verification report with fields:
        parent_issue (number, url), verified_issues (array of number, title, url, exists,
        has_acceptance_criteria, references_parent), summary (total_verified, total_valid,
        total_issues_created, comment_posted, comment_url).
    output_artifacts:
      - name: scope_report
        path: .wave/artifact.json
        type: json
        required: true
    retry:
      policy: none
      max_attempts: 1
    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/scope-report.schema.json
        validate: true
        must_pass: true
        on_failure: retry
        allow_recovery: true
        recovery_level: progressive
        progressive_validation: false
218
.wave/pipelines/plan-task.yaml
Normal file
@@ -0,0 +1,218 @@
kind: WavePipeline
metadata:
  name: plan-task
  description: "Break down a feature into actionable tasks with structured exploration, planning, and review"
  release: true

skills:
  - software-architecture

input:
  source: cli
  example: "add webhook support for pipeline completion events"

steps:
  - id: explore
    persona: navigator
    model: claude-haiku
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        You are exploring a codebase to gather context for planning this feature or task:

        {{ input }}

        Your goal is to produce a rich, structured JSON exploration that a planner persona
        will use (without any other context) to break the work into tasks.

        ## Exploration Steps

        1. **Understand the request**: Summarize what is being asked and assess scope
           (small = 1-2 files, medium = 3-7 files, large = 8-15 files, epic = 16+ files).

        2. **Find related files**: Use Glob and Grep to find files related to the feature.
           For each file, note its path, relevance (primary/secondary/reference), why it
           matters, and key symbols (functions, types, constants) within it.

        3. **Identify patterns**: Use Read to examine key files. Document codebase patterns
           and conventions. Assign each a PAT-### ID and relevance level:
           - must_follow: Violating this would break consistency or cause bugs
           - should_follow: Strong convention but exceptions exist
           - informational: Good to know but not binding

        4. **Map affected modules**: Identify which packages/modules will be directly or
           indirectly affected. Note their dependencies and dependents.

        5. **Survey testing landscape**: Find test files related to the affected code.
           Note testing patterns (table-driven, mocks, fixtures, etc.) and coverage gaps.

        6. **Assess risks**: Identify potential risks (breaking changes, performance concerns,
           security implications). Rate severity (high/medium/low) and suggest mitigations.

        CRITICAL: Write ONLY the JSON object. No markdown wrapping, no explanation
        outside the file. The file must parse as valid JSON.
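        A cheap self-check before finishing (assumes `jq` is on the PATH; the helper
        name is illustrative):

        ```bash
        # Returns nonzero if the file is missing or is not valid JSON
        json_valid() { jq empty "$1" 2>/dev/null; }
        ```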
    output_artifacts:
      - name: exploration
        path: .wave/output/exploration.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/exploration.json
        schema_path: .wave/contracts/plan-exploration.schema.json
        on_failure: retry

  - id: breakdown
    persona: planner
    dependencies: [explore]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: context
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        You are breaking down a feature into actionable implementation tasks.

        ## Input

        Feature request: {{ input }}

        Codebase exploration has already been done and injected into your workspace.
        It contains structured JSON with: related files, codebase patterns,
        affected modules, testing landscape, and identified risks. Use ALL of this
        information to inform your task breakdown.

        ## Task Breakdown Rules

        1. **Task IDs**: Use T01, T02, T03... format (zero-padded two digits).

        2. **Personas**: Assign each task to the most appropriate persona:
           - navigator: architecture decisions, exploration, planning
           - craftsman: implementation, coding, file creation
           - philosopher: review, analysis, quality assessment
           - auditor: security review, compliance checking
           - implementer: focused implementation tasks
           - reviewer: code review tasks

        3. **Dependencies**: Express as task IDs (e.g., ["T01", "T02"]).
           A task with no dependencies gets an empty array [].

        4. **Complexity**: S (< 1hr), M (1-4hr), L (4-8hr), XL (> 1 day).

        5. **Acceptance criteria**: Each task MUST have at least one concrete,
           verifiable acceptance criterion.

        6. **Affected files**: List files each task will create or modify.

        7. **Execution order**: Group tasks into phases. Tasks within a phase
           can run in parallel. Phase 1 has no dependencies, Phase 2 depends
           on Phase 1, etc.

        8. **Risks**: Note task-specific risks from the exploration.

        CRITICAL: Write ONLY the JSON object. No markdown wrapping, no explanation
        outside the file. The file must parse as valid JSON.
    output_artifacts:
      - name: tasks
        path: .wave/output/tasks.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/tasks.json
        schema_path: .wave/contracts/plan-tasks.schema.json
        on_failure: retry

  - id: review
    persona: philosopher
    dependencies: [breakdown]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: context
        - step: breakdown
          artifact: tasks
          as: task_list
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        You are reviewing a task breakdown plan for quality, completeness, and correctness.

        ## Input

        Feature request: {{ input }}

        Two artifacts have been injected into your workspace: the codebase exploration
        and the task breakdown plan. Read them BOTH before proceeding.

        The exploration contains: related files, patterns, affected modules, testing
        landscape, and risks. The task list contains: feature summary, tasks with
        dependencies and acceptance criteria, and execution order.

        ## Review Checklist

        For EACH task in the plan, evaluate and assign a status:
        - ok: Task is well-defined and ready to execute
        - needs_refinement: Good idea but needs clearer description or criteria
        - missing_details: Lacks acceptance criteria, affected files, or dependencies
        - overcomplicated: Should be split or simplified
        - wrong_persona: Different persona would be more appropriate
        - bad_dependencies: Dependencies are incorrect or missing

        For each issue found, assign a REV-### ID, severity, description, and suggestion.

        ## Cross-Cutting Concerns

        Look for concerns that span multiple tasks (CC-### IDs):
        - Testing strategy: Are tests planned? Do they follow codebase patterns?
        - Security: Are security implications addressed?
        - Performance: Will changes affect performance?
        - Backwards compatibility: Are breaking changes handled?
        - Documentation: Is documentation updated?

        ## Recommendations

        Provide actionable recommendations (REC-### IDs) with type:
        add_task, modify_task, remove_task, reorder, split_task, merge_tasks,
        change_persona, add_dependency

        ## Verdict

        Provide an overall verdict:
        - approve: Plan is ready to execute as-is
        - approve_with_notes: Plan is good but has minor issues to note
        - revise: Plan needs significant changes before execution

        CRITICAL: Write ONLY the JSON object. No markdown wrapping, no explanation
        outside the file. The file must parse as valid JSON.
    output_artifacts:
      - name: review
        path: .wave/output/plan-review.json
        type: json
    retry:
      policy: standard
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/plan-review.json
        schema_path: .wave/contracts/plan-review.schema.json
        on_failure: retry
208
.wave/pipelines/plan.yaml
Normal file
@@ -0,0 +1,208 @@
kind: WavePipeline
metadata:
  name: plan
  description: "Break down a feature into actionable tasks with structured exploration, planning, and review"
  release: true

input:
  source: cli
  example: "add webhook support for pipeline completion events"

steps:
  - id: explore
    persona: navigator
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        You are exploring a codebase to gather context for planning this feature or task:

        {{ input }}

        Your goal is to produce a rich, structured JSON exploration that a planner persona
        will use (without any other context) to break the work into tasks.

        ## Exploration Steps

        1. **Understand the request**: Summarize what is being asked and assess scope
           (small = 1-2 files, medium = 3-7 files, large = 8-15 files, epic = 16+ files).

        2. **Find related files**: Use Glob and Grep to find files related to the feature.
           For each file, note its path, relevance (primary/secondary/reference), why it
           matters, and key symbols (functions, types, constants) within it.

        3. **Identify patterns**: Use Read to examine key files. Document codebase patterns
           and conventions. Assign each a PAT-### ID and relevance level:
           - must_follow: Violating this would break consistency or cause bugs
           - should_follow: Strong convention but exceptions exist
           - informational: Good to know but not binding

        4. **Map affected modules**: Identify which packages/modules will be directly or
           indirectly affected. Note their dependencies and dependents.

        5. **Survey testing landscape**: Find test files related to the affected code.
           Note testing patterns (table-driven, mocks, fixtures, etc.) and coverage gaps.

        6. **Assess risks**: Identify potential risks (breaking changes, performance concerns,
           security implications). Rate severity (high/medium/low) and suggest mitigations.

        CRITICAL: Write ONLY the JSON object. No markdown wrapping, no explanation
        outside the file. The file must parse as valid JSON.
    output_artifacts:
      - name: exploration
        path: .wave/output/exploration.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/exploration.json
        schema_path: .wave/contracts/plan-exploration.schema.json
        on_failure: retry
        max_retries: 2

  - id: breakdown
    persona: planner
    dependencies: [explore]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: context
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        You are breaking down a feature into actionable implementation tasks.

        ## Input

        Feature request: {{ input }}

        Codebase exploration has already been done and injected into your workspace.
        It contains structured JSON with: related files, codebase patterns,
        affected modules, testing landscape, and identified risks. Use ALL of this
        information to inform your task breakdown.

        ## Task Breakdown Rules

        1. **Task IDs**: Use T01, T02, T03... format (zero-padded two digits).

        2. **Personas**: Assign each task to the most appropriate persona:
           - navigator: architecture decisions, exploration, planning
           - craftsman: implementation, coding, file creation
           - philosopher: review, analysis, quality assessment
           - auditor: security review, compliance checking
           - implementer: focused implementation tasks
           - reviewer: code review tasks

        3. **Dependencies**: Express as task IDs (e.g., ["T01", "T02"]).
           A task with no dependencies gets an empty array [].

        4. **Complexity**: S (< 1hr), M (1-4hr), L (4-8hr), XL (> 1 day).

        5. **Acceptance criteria**: Each task MUST have at least one concrete,
           verifiable acceptance criterion.

        6. **Affected files**: List files each task will create or modify.

        7. **Execution order**: Group tasks into phases. Tasks within a phase
           can run in parallel. Phase 1 has no dependencies, Phase 2 depends
           on Phase 1, etc.

        8. **Risks**: Note task-specific risks from the exploration.

        CRITICAL: Write ONLY the JSON object. No markdown wrapping, no explanation
        outside the file. The file must parse as valid JSON.
    output_artifacts:
      - name: tasks
        path: .wave/output/tasks.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/tasks.json
        schema_path: .wave/contracts/plan-tasks.schema.json
        on_failure: retry
        max_retries: 2

  - id: review
    persona: philosopher
    dependencies: [breakdown]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: context
        - step: breakdown
          artifact: tasks
          as: task_list
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        You are reviewing a task breakdown plan for quality, completeness, and correctness.

        ## Input

        Feature request: {{ input }}

        Two artifacts have been injected into your workspace: the codebase exploration
        and the task breakdown plan. Read them BOTH before proceeding.

        The exploration contains: related files, patterns, affected modules, testing
        landscape, and risks. The task list contains: feature summary, tasks with
        dependencies and acceptance criteria, and execution order.

        ## Review Checklist

        For EACH task in the plan, evaluate and assign a status:
        - ok: Task is well-defined and ready to execute
        - needs_refinement: Good idea but needs clearer description or criteria
        - missing_details: Lacks acceptance criteria, affected files, or dependencies
        - overcomplicated: Should be split or simplified
        - wrong_persona: Different persona would be more appropriate
        - bad_dependencies: Dependencies are incorrect or missing

        For each issue found, assign a REV-### ID, severity, description, and suggestion.

        ## Cross-Cutting Concerns

        Look for concerns that span multiple tasks (CC-### IDs):
        - Testing strategy: Are tests planned? Do they follow codebase patterns?
        - Security: Are security implications addressed?
        - Performance: Will changes affect performance?
        - Backwards compatibility: Are breaking changes handled?
        - Documentation: Is documentation updated?

        ## Recommendations

        Provide actionable recommendations (REC-### IDs) with type:
        add_task, modify_task, remove_task, reorder, split_task, merge_tasks,
        change_persona, add_dependency

        ## Verdict

        Provide an overall verdict:
        - approve: Plan is ready to execute as-is
        - approve_with_notes: Plan is good but has minor issues to note
        - revise: Plan needs significant changes before execution

        CRITICAL: Write ONLY the JSON object. No markdown wrapping, no explanation
        outside the file. The file must parse as valid JSON.
    output_artifacts:
      - name: review
        path: .wave/output/plan-review.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/plan-review.json
        schema_path: .wave/contracts/plan-review.schema.json
        on_failure: retry
        max_retries: 2
535
.wave/pipelines/recinq.yaml
Normal file
@@ -0,0 +1,535 @@
kind: WavePipeline
metadata:
  name: recinq
  description: "Rethink and simplify code using divergent-convergent thinking (Double Diamond)"
  release: true

input:
  source: cli
  example: "internal/pipeline"

# Pipeline structure implements the Double Diamond:
#
#   gather → diverge → converge → probe → distill → simplify → report
#            ╰─ Diamond 1 ─╯      ╰─ Diamond 2 ─╯   ╰implement╯
#            (discover) (define)  (develop) (deliver)
#
# Each step gets its own context window and cognitive mode.
# Fresh memory at every boundary — no mode-switching within a step.

steps:
  - id: gather
    persona: github-analyst
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        CONTEXT GATHERING — parse input and fetch GitHub context if applicable.

        Input: {{ input }}

        ## Instructions

        Determine what kind of input this is:

        1. **GitHub Issue URL**: Contains `github.com` and `/issues/`
           - Extract owner/repo and issue number from the URL
           - Run: `gh issue view <number> --repo <owner/repo> --json title,body,labels`
           - Extract a `focus_hint` summarizing what should be simplified

        2. **GitHub PR URL**: Contains `github.com` and `/pull/`
           - Extract owner/repo and PR number from the URL
           - Run: `gh pr view <number> --repo <owner/repo> --json title,body,labels,files`
           - Extract a `focus_hint` summarizing what the PR is about

        3. **Local path or description**: Anything else
           - Set `input_type` to `"local"`
           - Pass through the original input as-is

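        The classification above can be sketched as plain shell. This is an
        illustrative helper, not part of the pipeline, and the URLs are made up:

        ```shell
        # Hypothetical sketch of the input-type detection described above.
        classify_input() {
          case "$1" in
            *github.com*/issues/*) echo "issue" ;;
            *github.com*/pull/*)   echo "pr" ;;
            *)                     echo "local" ;;
          esac
        }

        classify_input "https://github.com/acme/widgets/issues/42"   # issue
        classify_input "https://github.com/acme/widgets/pull/7"      # pr
        classify_input "internal/pipeline"                           # local
        ```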
        ## Output

        IMPORTANT: The output MUST be valid JSON. Do NOT include markdown fencing.
    output_artifacts:
      - name: context
        path: .wave/output/context.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/context.json
        schema_path: .wave/contracts/recinq-context.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  # ── Diamond 1: Discover (DIVERGENT) ──────────────────────────────────
  - id: diverge
    persona: provocateur
    dependencies: [gather]
    memory:
      inject_artifacts:
        - step: gather
          artifact: context
          as: context
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        DIVERGENT THINKING — cast the widest net to find simplification opportunities.

        Target: {{ input }}

        ## Starting Point

        The injected context artifact describes the input.
        If `input_type` is `"issue"` or `"pr"`, the `focus_hint` tells you WHERE to start looking —
        but do NOT limit yourself to what the issue describes. Use it as a seed, then expand outward.
        Follow dependency chains, trace callers, explore adjacent modules. The issue author doesn't
        know what they don't know — that's YOUR job.
        If `input_type` is `"local"`, use the `original_input` field as the target path.

        If input is empty or "." — analyze the whole project.
        If input is a path — focus on that module/directory but consider its connections.

        ## Your Mission

        Challenge EVERYTHING. Question every assumption. Hunt complexity.

        ## What to Look For

        1. **Premature abstractions**: Interfaces with one implementation. Generic code used once.
           "What if we just inlined this?"

        2. **Unnecessary indirection**: Layers that pass through without adding value.
           Wrappers around wrappers. "How many hops to get to the actual logic?"

        3. **Overengineering**: Configuration for things that never change. Plugins with one plugin.
           Feature flags for features that are always on. "Is this complexity earning its keep?"

        4. **YAGNI violations**: Code written for hypothetical future needs that never arrived.
           "When was this last changed? Does anyone actually use this path?"

        5. **Accidental complexity**: Things that are hard because of how they're built, not because
           the problem is hard. "Could this be 10x simpler if we started over?"

        6. **Copy-paste drift**: Similar-but-slightly-different code that should be unified or
           intentionally differentiated. "Are these differences meaningful or accidental?"

        7. **Dead weight**: Unused exports, unreachable code, obsolete comments, stale TODOs.
           `grep -r` for references. If nothing uses it, flag it.

        8. **Naming lies**: Names that don't match what the code actually does.
           "Does this 'manager' actually manage anything?"

        9. **Dependency gravity**: Modules that pull in everything. Import graphs that are too dense.
           "What's the blast radius of changing this?"

        ## Evidence Requirements

        For EVERY finding, gather concrete metrics:
        - `wc -l` for line counts
        - `grep -r` for usage/reference counts
        - `git log --oneline <file> | wc -l` for change frequency
        - List the actual files involved

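        These checks can be run in one pass. A sketch against a throwaway
        file, since the real target path and symbol depend on the finding:

        ```shell
        # Hypothetical evidence run; target.go and 'Manage' are placeholders.
        demo=$(mktemp -d)
        printf 'func Manage() {}\nfunc ManageAll() {}\n' > "$demo/target.go"
        lines=$(wc -l < "$demo/target.go")            # line count
        refs=$(grep -c 'Manage' "$demo/target.go")    # reference count
        commits=$(git log --oneline -- "$demo/target.go" 2>/dev/null | wc -l)
        echo "lines=$lines refs=$refs commits=$commits"   # commits is 0 outside a repo
        ```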
        ## Output

        Each finding gets a unique ID: DVG-001, DVG-002, etc.

        Be AGGRESSIVE — flag everything suspicious. The convergent phase will filter.
        It's better to have 30 findings with 10 false positives than 5 findings that miss
        the big opportunities.

        Include a metrics_summary with totals by category and severity, plus hotspot files
        that appear in multiple findings.
    output_artifacts:
      - name: findings
        path: .wave/output/divergent-findings.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/divergent-findings.json
        schema_path: .wave/contracts/divergent-findings.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  # ── Diamond 1: Define (CONVERGENT) ───────────────────────────────────
  - id: converge
    persona: validator
    dependencies: [diverge]
    memory:
      inject_artifacts:
        - step: diverge
          artifact: findings
          as: divergent_findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        CONVERGENT VALIDATION — separate signal from noise.

        This is a purely CONVERGENT step. Your job is analytical, not creative.
        Judge every finding on technical merit alone. No speculation, no new ideas.

        Target: {{ input }}

        ## For Every DVG-xxx Finding

        1. **Read the actual code** cited as evidence — don't trust the provocateur's summary
        2. **Verify the metrics** — check reference counts, line counts, change frequency
        3. **Assess**: Is this a real problem or a false positive?
           - Does the "premature abstraction" actually have a second implementation planned?
           - Is the "dead weight" actually used via reflection or codegen?
           - Is the "unnecessary indirection" actually providing error handling or logging?
        4. **Classify**:
           - `CONFIRMED` — real problem, metrics check out, code supports the claim
           - `PARTIALLY_CONFIRMED` — real concern but overstated, or scope is narrower than claimed
           - `REJECTED` — false positive, justified complexity, or incorrect metrics
        5. **Explain**: For every classification, write WHY. For rejections, explain what
           the provocateur got wrong.

        Be RIGOROUS. The provocateur was told to be aggressive — your job is to be skeptical.
        A finding that survives your scrutiny is genuinely worth addressing.
    output_artifacts:
      - name: validated_findings
        path: .wave/output/validated-findings.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/validated-findings.json
        schema_path: .wave/contracts/validated-findings.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  # ── Diamond 2: Develop (DIVERGENT) ───────────────────────────────────
  - id: probe
    persona: provocateur
    dependencies: [converge]
    memory:
      inject_artifacts:
        - step: diverge
          artifact: findings
          as: divergent_findings
        - step: converge
          artifact: validated_findings
          as: validated_findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        SECOND DIVERGENCE — probe deeper into confirmed findings.

        The first pass cast a wide net. The validator filtered it down.
        Now YOU go deeper on what survived. This is DIVERGENT thinking again —
        expand, connect, discover what the first pass missed.

        Focus on findings with status CONFIRMED or PARTIALLY_CONFIRMED.

        Target: {{ input }}

        ## Your Mission

        For each confirmed finding, probe OUTWARD:

        1. **Trace the dependency graph**: What calls this code? What does it call?
           If we simplify X, what happens to its callers and callees?

        2. **Find second-order effects**: If we remove abstraction A, does layer B
           also become unnecessary? Do test helpers simplify? Do error paths collapse?

        3. **Spot patterns across findings**: Do three findings all stem from the same
           over-abstraction? Is there a root cause that would address multiple DVGs at once?

        4. **Discover what was MISSED**: With the validated findings as context, look for
           related opportunities the first pass didn't see. The confirmed findings reveal
           the codebase's real pressure points — what else lurks nearby?

        5. **Challenge the rejections**: Were any findings rejected too hastily?
           Read the validator's rationale — do you disagree?

        ## Evidence Requirements

        Same standard as the first diverge pass:
        - `wc -l` for line counts
        - `grep -r` for usage/reference counts
        - `git log --oneline <file> | wc -l` for change frequency
        - Concrete file paths and code references

        ## Output

        Go DEEP. The first pass was wide, this pass is deep. Follow every thread.
    output_artifacts:
      - name: probed_findings
        path: .wave/output/probed-findings.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/probed-findings.json
        schema_path: .wave/contracts/probed-findings.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  # ── Diamond 2: Deliver (CONVERGENT) ──────────────────────────────────
  - id: distill
    persona: synthesizer
    dependencies: [probe]
    memory:
      inject_artifacts:
        - step: gather
          artifact: context
          as: context
        - step: converge
          artifact: validated_findings
          as: validated_findings
        - step: probe
          artifact: probed_findings
          as: probed_findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        FINAL CONVERGENCE — synthesize all findings into actionable proposals.

        This is the last convergent step before implementation. You have:
        - Validated findings (what survived scrutiny)
        - Probed findings (deeper connections, patterns, new discoveries)
        - Optional issue/PR context (from the gather step)

        Your job: synthesize everything into prioritized, implementable proposals.

        Target: {{ input }}

        ## Synthesis

        Transform the validated and probed findings into prioritized proposals:

        1. **Group by pattern**: Use the `patterns` from the probe step. Findings that share
           a root cause become a single proposal addressing the root cause.

        2. **Incorporate second-order effects**: The probe step found connections and cascading
           simplifications. Factor these into impact estimates.

        3. **Include new discoveries**: The probe step may have found new findings (DVG-NEW-xxx).
           These are pre-validated by the provocateur's second pass — include them.

        4. **Apply issue/PR context (if present)**: If the context artifact shows
           `input_type` is `"issue"` or `"pr"`, use the `focus_hint` as ONE input
           when assigning tiers. But do not discard strong proposals just because they
           fall outside the issue's scope. The best simplifications are often the ones
           the issue author didn't think to ask for.

        5. **80/20 analysis**: Which 20% of proposals yield 80% of the simplification?

        6. **Dependency ordering**: What must be done first?

        ## Output

        Do NOT write a markdown summary. Write the complete JSON object with every proposal fully populated.
    output_artifacts:
      - name: proposals
        path: .wave/output/convergent-proposals.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/convergent-proposals.json
        schema_path: .wave/contracts/convergent-proposals.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  # ── Implementation ───────────────────────────────────────────────────
  - id: simplify
    persona: craftsman
    dependencies: [distill]
    memory:
      inject_artifacts:
        - step: converge
          artifact: validated_findings
          as: validated_findings
        - step: distill
          artifact: proposals
          as: proposals
    workspace:
      type: worktree
      branch: "refactor/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        IMPLEMENTATION — apply the best simplification proposals.

        ## Process

        Apply ONLY tier-1 proposals, in dependency order.

        For each proposal (SMP-xxx):

        1. **Announce**: Print which proposal you're applying and what it does
        2. **Apply**: Make the code changes
        3. **Build**: `go build ./...` — must succeed
        4. **Test**: `go test ./...` — must pass
        5. **Commit**: If build and tests pass:
           ```bash
           git add <specific-files>
           git commit -m "refactor: <proposal title>

           Applies SMP-xxx: <brief description>
           Source findings: <DVG-xxx list>"
           ```
        6. **Revert if failing**: If tests fail after applying, revert:
           ```bash
           git checkout -- .
           ```
           Log the failure and move to the next proposal.

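        The loop above amounts to the following control flow, sketched with
        a stub (`checks_pass`) standing in for the real `go build ./...` and
        `go test ./...` gate:

        ```shell
        # Sketch of the per-proposal apply/verify/commit-or-revert loop.
        apply_proposal() {
          id="$1"; checks_pass="$2"
          echo "applying $id"
          if "$checks_pass"; then
            echo "commit: refactor: $id"   # git add + git commit in the real flow
          else
            echo "revert: $id"             # git checkout -- . in the real flow
          fi
        }

        apply_proposal SMP-001 true    # checks pass: commits
        apply_proposal SMP-002 false   # checks fail: reverts
        ```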
        ## Final Verification

        After all tier-1 proposals are applied (or attempted):
        1. Run the full test suite: `go test -race ./...`
        2. Run the build: `go build ./...`
        3. Summarize what was applied, what was skipped, and net lines changed

        ## Important

        - Each proposal gets its own atomic commit; never combine proposals in a single commit
        - If a proposal depends on a failed proposal, skip it too
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
        max_retries: 3
    output_artifacts:
      - name: result
        path: .wave/output/result.md
        type: markdown

  # ── Reporting ────────────────────────────────────────────────────────
  - id: report
    persona: navigator
    dependencies: [simplify]
    memory:
      inject_artifacts:
        - step: distill
          artifact: proposals
          as: proposals
        - step: simplify
          artifact: result
          as: result
    workspace:
      type: worktree
      branch: "refactor/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        REPORT — compose a summary of what recinq found and applied.

        Run `git log --oneline` to see the commits on this branch.

        ## Compose the Report

        Write a markdown report containing:
        - **Summary**: One-paragraph overview of what recinq found and applied
        - **Proposals**: List of all proposals with their tier, impact, and status (applied/skipped/failed)
        - **Changes Applied**: Summary of commits made, files changed, net lines removed
        - **Remaining Opportunities**: Tier-2 and tier-3 proposals for future consideration
    output_artifacts:
      - name: report
        path: .wave/output/report.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/report.md

  # ── Publish ─────────────────────────────────────────────────────────
  - id: publish
    persona: craftsman
    dependencies: [report, gather]
    memory:
      inject_artifacts:
        - step: gather
          artifact: context
          as: context
        - step: report
          artifact: report
          as: report
    workspace:
      type: worktree
      branch: "refactor/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        PUBLISH — push the branch and create a pull request.

        ## Steps

        1. Push the branch:
           ```bash
           git push -u origin HEAD
           ```

        2. Create a pull request using the report as the body (the commit
           subjects already carry the `refactor:` prefix):
           ```bash
           gh pr create --title "$(git log --format=%s -1)" --body-file .wave/artifacts/report
           ```

        3. If the context artifact shows `input_type` is `"issue"` or `"pr"`,
           post the PR URL as a comment on the source:
           ```bash
           gh issue comment <number> --repo <repo> --body "Refactoring PR: <pr-url>"
           ```
           or for PRs:
           ```bash
           gh pr comment <number> --repo <repo> --body "Refactoring PR: <pr-url>"
           ```

        4. Write the JSON status report to the output artifact path.

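        A minimal sketch of step 4, assuming the status report only needs
        the `pr_url` key that the outcomes block extracts; the URL here is a
        stand-in for the value of `gh pr view --json url --jq .url`:

        ```shell
        # Hypothetical status report; PR_URL would come from gh in the real run.
        PR_URL="https://github.com/acme/widgets/pull/123"
        mkdir -p .wave/output
        printf '{"pr_url": "%s"}\n' "$PR_URL" > .wave/output/pr-result.json
        ```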
        If any `gh` command fails, log the error and continue.
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

outcomes:
  - type: pr
    extract_from: .wave/output/pr-result.json
    json_path: .pr_url
    label: "Pull Request"
136
.wave/pipelines/refactor.yaml
Normal file
@@ -0,0 +1,136 @@
kind: WavePipeline
metadata:
  name: refactor
  description: "Safe refactoring with comprehensive test coverage"
  release: true

input:
  source: cli
  example: "extract workspace manager from executor into its own package"

steps:
  - id: analyze
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze refactoring scope for: {{ input }}

        1. Identify all code that will be affected
        2. Map all callers/consumers of the code being refactored
        3. Find existing test coverage
        4. Identify integration points
    output_artifacts:
      - name: analysis
        path: .wave/output/refactor-analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/refactor-analysis.json
        schema_path: .wave/contracts/refactor-analysis.schema.json
        on_failure: retry
        max_retries: 2

  - id: test-baseline
    persona: craftsman
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: analyze
          artifact: analysis
          as: scope
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Before refactoring, ensure test coverage:

        1. Run existing tests and record baseline
        2. Add characterization tests for uncovered code paths
        3. Add integration tests for affected callers
        4. Document current behavior for comparison

        All tests must pass before proceeding.
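        Recording the baseline (step 1) can be as simple as capturing the
        test output to the artifact path. A sketch, with an echo standing in
        for `{{ project.test_command }}`:

        ```shell
        # Hypothetical baseline capture; the echo stands in for the test command.
        TEST_CMD="echo all-tests-passed"
        mkdir -p .wave/output
        if $TEST_CMD > .wave/output/test-baseline.md 2>&1; then
          echo "baseline recorded"
        else
          echo "tests failing; fix before refactoring" >&2
          exit 1
        fi
        ```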
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
        max_retries: 2
    output_artifacts:
      - name: baseline
        path: .wave/output/test-baseline.md
        type: markdown

  - id: refactor
    persona: craftsman
    dependencies: [test-baseline]
    memory:
      inject_artifacts:
        - step: analyze
          artifact: analysis
          as: scope
        - step: test-baseline
          artifact: baseline
          as: tests
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Perform the refactoring: {{ input }}

        Guidelines:
        1. Make atomic, reviewable changes
        2. Preserve all existing behavior
        3. Run tests after each significant change
        4. Update affected callers as needed
        5. Keep commits small and focused

        Do NOT change behavior — this is refactoring only.
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry
        max_retries: 3

  - id: verify
    persona: reviewer
    dependencies: [refactor]
    exec:
      type: prompt
      source: |
        Verify the refactoring:

        1. Compare before/after behavior — any changes?
        2. Check test coverage didn't decrease
        3. Verify all callers still work correctly
        4. Look for missed edge cases
        5. Assess code quality improvement

        Output: PASS (safe to merge) or FAIL (issues found)
    output_artifacts:
      - name: verification
        path: .wave/output/verification.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/verification.md
125
.wave/pipelines/test-gen.yaml
Normal file
@@ -0,0 +1,125 @@
kind: WavePipeline
metadata:
  name: test-gen
  description: "Generate comprehensive test coverage"
  release: true

skills:
  - "{{ project.skill }}"

input:
  source: cli
  example: "generate tests for internal/pipeline to improve coverage"

steps:
  - id: analyze-coverage
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze test coverage for: {{ input }}

        1. Run coverage analysis using the project test command with coverage flags
        2. Identify uncovered functions and branches
        3. Find edge cases not tested
        4. Map dependencies that need mocking
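        For a Go project (other pipelines in this repo run `go build` and
        `go test`), step 1 usually means `go test -coverprofile=cover.out ./...`
        followed by `go tool cover -func=cover.out`. Step 2 can then filter
        that output; a sketch against made-up sample data:

        ```shell
        # Hypothetical: list functions under 50% coverage from
        # `go tool cover -func`-style output (sample data, not real results).
        profile=$(mktemp)
        printf '%s\n' \
          'internal/pipeline/run.go:12:   Run          85.7%' \
          'internal/pipeline/retry.go:8:  Backoff      40.0%' \
          'internal/pipeline/graph.go:30: TopoSort     0.0%' \
          'total:                         (statements) 71.2%' > "$profile"
        awk '$1 != "total:" && $NF+0 < 50 {print $NF "\t" $2}' "$profile"
        ```

        This prints the `Backoff` and `TopoSort` rows, the two functions
        below the threshold in the sample.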
    output_artifacts:
      - name: coverage
        path: .wave/output/coverage-analysis.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/coverage-analysis.json
        schema_path: .wave/contracts/coverage-analysis.schema.json
        on_failure: retry

  - id: generate-tests
    persona: craftsman
    dependencies: [analyze-coverage]
    thread: test-gen
    max_visits: 3
    memory:
      inject_artifacts:
        - step: analyze-coverage
          artifact: coverage
          as: gaps
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Generate tests to improve coverage for: {{ input }}

        Requirements:
        1. Write table-driven tests where appropriate
        2. Cover happy path, error cases, and edge cases
        3. Use descriptive test names (TestFunction_Condition_Expected)
        4. Add mocks for external dependencies
        5. Include benchmarks for performance-critical code

        Follow existing test patterns in the codebase.
    retry:
      policy: standard
      max_attempts: 3
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry
    output_artifacts:
      - name: tests
        path: .wave/output/generated-tests.md
        type: markdown

  - id: run-tests
    type: command
    dependencies: [generate-tests]
    script: "{{ project.contract_test_command }}"

  - id: check-quality
    type: conditional
    dependencies: [run-tests]
    edges:
      - target: verify-coverage
        condition: "outcome=success"
      - target: generate-tests

  - id: verify-coverage
    persona: reviewer
    model: claude-haiku
    dependencies: [check-quality]
    exec:
      type: prompt
      source: |
        Verify the generated tests:

        1. Run coverage again — did it improve?
        2. Are tests meaningful (not just line coverage)?
        3. Do tests actually catch bugs?
        4. Are mocks appropriate and minimal?
        5. Is test code maintainable?

        Output: coverage delta and quality assessment
    output_artifacts:
      - name: verification
        path: .wave/output/coverage-verification.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/coverage-verification.md