Add Wave general-purpose pipelines
ADR, changelog, code-review, debug, doc-sync, explain, feature, hotfix, improve, onboard, plan, prototype, refactor, security-scan, smoke-test, speckit-flow, supervise, test-gen, and more.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
222
.wave/pipelines/adr.yaml
Normal file
@@ -0,0 +1,222 @@
kind: WavePipeline
metadata:
  name: adr
  description: "Create an Architecture Decision Record for a design choice"
  release: true

input:
  source: cli
  example: "ADR: should we use SQLite or PostgreSQL for pipeline state?"

steps:
  - id: explore-context
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Explore the codebase to gather context for this architectural decision: {{ input }}

        ## Exploration

        1. **Understand the decision space**: What part of the system is this about?
           Find all related code, configs, and documentation.

        2. **Map current state**: How does the system work today?
           What would be affected by this decision?

        3. **Find constraints**: What technical constraints exist?
           (dependencies, performance requirements, deployment model, team skills)

        4. **Check precedents**: Are there similar decisions already made in this
           codebase? Look for ADRs, design docs, or relevant comments.

        5. **Identify stakeholders**: Which components/teams/users are affected?

        Write your findings as structured JSON.
        Include: decision_topic, current_state (description, affected_files, affected_components),
        constraints, precedents, stakeholders, and timestamp.
    output_artifacts:
      - name: context
        path: .wave/output/adr-context.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/adr-context.json
        schema_path: .wave/contracts/adr-context.schema.json
      on_failure: retry
      max_retries: 2

  - id: analyze-options
    persona: planner
    dependencies: [explore-context]
    memory:
      inject_artifacts:
        - step: explore-context
          artifact: context
          as: decision_context
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze the options for this architectural decision.

        Original decision: {{ input }}

        ## Analysis

        For each viable option:

        1. **Describe it**: What would this option look like in practice?
        2. **Pros**: What are the benefits? Be specific to THIS project.
        3. **Cons**: What are the drawbacks? Be honest.
        4. **Effort**: How much work to implement?
        5. **Risk**: What could go wrong?
        6. **Reversibility**: How hard to undo if it's the wrong choice?
        7. **Compatibility**: How well does it fit with existing constraints?

        Write your analysis as structured JSON.
        Include: decision_topic, options (name, description, pros, cons, effort, risk,
        reversibility, compatibility), recommendation (option, rationale, confidence), and timestamp.
    output_artifacts:
      - name: options
        path: .wave/output/adr-options.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/adr-options.json
        schema_path: .wave/contracts/adr-options.schema.json
      on_failure: retry
      max_retries: 2

  - id: draft-record
    persona: philosopher
    dependencies: [analyze-options]
    memory:
      inject_artifacts:
        - step: explore-context
          artifact: context
          as: decision_context
        - step: analyze-options
          artifact: options
          as: analysis
    exec:
      type: prompt
      source: |
        Draft the Architecture Decision Record using the injected context and analysis.

        Use this standard ADR format:

        # ADR-NNN: [Title]

        ## Status
        Proposed

        ## Date
        YYYY-MM-DD

        ## Context
        What is the issue that we're seeing that is motivating this decision?
        Include technical context from the codebase exploration.

        ## Decision
        What is the change that we're proposing and/or doing?
        State the recommended option clearly.

        ## Options Considered

        ### Option 1: [Name]
        Description, pros, cons.

        ### Option 2: [Name]
        Description, pros, cons.

        (etc.)

        ## Consequences

        ### Positive
        - What becomes easier or better?

        ### Negative
        - What becomes harder or worse?

        ### Neutral
        - What other changes are required?

        ## Implementation Notes
        - Key steps to implement the decision
        - Files/components that need changes
        - Migration plan if applicable

        ---

        Write clearly and concisely. The ADR should be understandable by
        someone who wasn't part of the original discussion.
    output_artifacts:
      - name: adr
        path: .wave/output/adr.md
        type: markdown

  - id: publish
    persona: craftsman
    dependencies: [draft-record]
    memory:
      inject_artifacts:
        - step: draft-record
          artifact: adr
          as: adr
    workspace:
      type: worktree
      branch: "docs/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        PUBLISH — commit the ADR and create a pull request.

        ## Steps

        1. Copy the ADR into the project docs:
           - Determine the next ADR number by listing existing ADR files
             (e.g., `ls docs/adr/` or similar convention)
           - Copy `.wave/artifacts/adr` to the appropriate location
             (e.g., `docs/adr/NNN-title.md`)

        2. Commit:
           ```bash
           git add docs/adr/
           git commit -m "docs: add ADR for <decision topic>"
           ```

        3. Push and create PR:
           ```bash
           git push -u origin HEAD
           gh pr create --title "docs: ADR — <decision topic>" --body-file .wave/artifacts/adr
           ```
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
      on_failure: retry
      max_retries: 2

outcomes:
  - type: pr
    extract_from: .wave/output/pr-result.json
    json_path: .pr_url
    label: "Pull Request"
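The explore-context step validates its artifact against `.wave/contracts/adr-context.schema.json`, which is not shown in this diff. As a rough illustration only, an artifact with the fields the prompt asks for could look like the sketch below; the top-level keys come from the prompt, while every concrete value and the exact nesting are hypothetical assumptions, not taken from the schema:

```json
{
  "decision_topic": "SQLite vs PostgreSQL for pipeline state",
  "current_state": {
    "description": "Pipeline state is currently serialized to flat JSON files",
    "affected_files": ["internal/pipeline/state.go"],
    "affected_components": ["pipeline-executor"]
  },
  "constraints": ["single-binary deployment", "no managed database in CI"],
  "precedents": ["docs/adr/001-yaml-pipeline-definitions.md"],
  "stakeholders": ["pipeline executor", "CLI users"],
  "timestamp": "2026-01-01T12:00:00Z"
}
```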
168
.wave/pipelines/blog-draft.yaml
Normal file
@@ -0,0 +1,168 @@
kind: WavePipeline
metadata:
  name: blog-draft
  description: "Draft a blog post from Zettelkasten notes and web research"
  release: true

input:
  source: cli
  examples:
    - "Context Economy | context windows, token economics, agent memory"
    - "The Frame Problem in Code | divergent thinking, convergent thinking, planner worker"
    - "Temporal Code Graphs | git history, co-modification, relevance signals"

steps:
  - id: zettel-search
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Search the Zettelkasten for notes relevant to a blog post.

        Input: {{ input }}

        The input format is "Title | keyword1, keyword2, keyword3".
        Parse the title and keywords from the input.

        ## Steps

        1. For each keyword, search with: `notesium lines --filter="keyword"`
        2. Read the index note to find entry points for related sections
        3. Read the most relevant notes found (up to 15 notes)
        4. For each note, extract: filename, title, Folgezettel address, key quotes

        ## Output

        Write the result as JSON to output/zettel-references.json matching the contract schema.
        Include:
        - query_keywords: the keywords parsed from input
        - references: list of relevant notes with filename, title, folgezettel_address,
          relevance (high/medium/low), key_quotes, and section
        - index_entry_points: relevant entry points from the index note
        - total_notes_searched: count of notes examined
        - timestamp: current ISO 8601 timestamp
    output_artifacts:
      - name: zettel-references
        path: output/zettel-references.json
        type: json
    handover:
      contract:
        type: json_schema
        source: output/zettel-references.json
        schema_path: .wave/contracts/zettel-references.schema.json
      on_failure: retry
      max_retries: 2

  - id: web-research
    persona: scout
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Research the web for recent information on a blog post topic.

        Input: {{ input }}

        The input format is "Title | keyword1, keyword2, keyword3".
        Parse the title and keywords from the input.

        ## Steps

        1. WebSearch for the topic title + recent developments
        2. WebSearch for each keyword individually
        3. Fetch the top 3-5 most relevant and credible sources
        4. Extract key ideas, direct quotes with attribution, and URLs
        5. Focus on recent content (2025-2026) when available

        ## Output

        Write the result as JSON to output/web-findings.json matching the contract schema.
        Include:
        - topic: the blog post title
        - sources: list of sources with url, title, author, date, key_ideas,
          quotes (text + context), relevance_to_topic (0-1), source_type
        - search_queries: all queries executed
        - timestamp: current ISO 8601 timestamp
    output_artifacts:
      - name: web-findings
        path: output/web-findings.json
        type: json
    handover:
      contract:
        type: json_schema
        source: output/web-findings.json
        schema_path: .wave/contracts/web-findings.schema.json
      on_failure: retry
      max_retries: 2

  - id: draft
    persona: scribe
    dependencies: [zettel-search, web-research]
    memory:
      inject_artifacts:
        - step: zettel-search
          artifact: zettel-references
          as: zettel_refs
        - step: web-research
          artifact: web-findings
          as: web_refs
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Draft a blog post using Zettelkasten notes and web research findings.

        Input: {{ input }}

        Parse the title from the input (format: "Title | keywords").

        Read the artifacts:
        cat artifacts/zettel_refs
        cat artifacts/web_refs

        ## Steps

        1. **Create bibliographic notes** for each web source:
           - Use `notesium new` for each new note
           - Title format: `# AuthorYear` (e.g., `# Willison2026`)
           - Include: source URL, author, date, summary, key quotes
           - One sentence per line

        2. **Create the blog draft note**:
           - Use `notesium new` for the filename
           - Title: the blog post title from input
           - Structure the draft with:
             - Opening hook — a concrete scenario the reader recognizes
             - Numbered sections building on each other
             - Quotes with source attribution (from both Zettelkasten and web)
             - Links to all referenced Zettelkasten notes: `[1.2 Linking](filename.md)`
             - Links to newly created bibliographic notes
           - Follow the blog series voice: authoritative, framework-oriented, technically substantive
           - One sentence per line

        3. **Commit all new notes**:
           - `git add *.md`
           - `git commit -m "blog-draft: {title in lowercase}"`

        4. **Write a summary** to output/draft-summary.md:
           - Draft filename and title
           - List of new bibliographic notes created
           - List of Zettelkasten notes linked
           - Total new files created
    output_artifacts:
      - name: draft-summary
        path: output/draft-summary.md
        type: markdown
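The zettel-search prompt enumerates the fields of output/zettel-references.json. A minimal sketch of one conforming artifact, using keywords from the first example input; the note filename, quote, and section values are invented for illustration and the authoritative shape lives in `.wave/contracts/zettel-references.schema.json`, which this diff does not include:

```json
{
  "query_keywords": ["context windows", "token economics", "agent memory"],
  "references": [
    {
      "filename": "8a2f91c3.md",
      "title": "Context is a budget, not a backpack",
      "folgezettel_address": "1.2a",
      "relevance": "high",
      "key_quotes": ["Every token spent on stale context is a token not spent on the task."],
      "section": "agents"
    }
  ],
  "index_entry_points": ["1 Agent Architecture", "1.2 Context Management"],
  "total_notes_searched": 15,
  "timestamp": "2026-01-01T12:00:00Z"
}
```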
137
.wave/pipelines/changelog.yaml
Normal file
@@ -0,0 +1,137 @@
kind: WavePipeline
metadata:
  name: changelog
  description: "Generate structured changelog from git history"
  release: true

input:
  source: cli
  example: "generate changelog from v0.1.0 to HEAD"

steps:
  - id: analyze-commits
    persona: navigator
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Analyze git history for changelog generation: {{ input }}

        ## Process

        1. **Determine range**: Parse input to identify the commit range.
           If tags mentioned, use them. If time period, calculate dates.
           If unclear, use last tag to HEAD (or last 50 commits).

        2. **Extract commits**: Use `git log --format` to get hash, author,
           date, subject, and body for each commit.

        3. **Parse conventional commits**: Categorize by prefix:
           feat → Features, fix → Fixes, docs → Documentation,
           refactor → Refactoring, test → Testing, chore → Maintenance,
           perf → Performance, ci → CI/CD, no prefix → Other

        4. **Identify breaking changes**: Look for `BREAKING CHANGE:` in body,
           `!` after prefix, API removals in body.

        5. **Extract scope**: Parse from prefix (e.g., `fix(pipeline):` → "pipeline")
    output_artifacts:
      - name: commits
        path: .wave/output/commit-analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/commit-analysis.json
        schema_path: .wave/contracts/commit-analysis.schema.json
      on_failure: retry
      max_retries: 2

  - id: categorize
    persona: planner
    dependencies: [analyze-commits]
    memory:
      inject_artifacts:
        - step: analyze-commits
          artifact: commits
          as: raw_commits
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Categorize and describe changes for a changelog using the injected commit analysis.

        ## Rules

        1. **Group by type** into sections
        2. **Write user-facing descriptions**: Rewrite technical messages into
           clear descriptions focused on what changed and why it matters.
        3. **Highlight breaking changes** first with migration notes
        4. **Deduplicate**: Combine commits for the same logical change
        5. **Add context** for significant features
    output_artifacts:
      - name: categorized
        path: .wave/output/categorized-changes.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/categorized-changes.json
        schema_path: .wave/contracts/categorized-changes.schema.json
      on_failure: retry
      max_retries: 2

  - id: format
    persona: philosopher
    dependencies: [categorize]
    memory:
      inject_artifacts:
        - step: analyze-commits
          artifact: commits
          as: raw_commits
        - step: categorize
          artifact: categorized
          as: changes
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Format the injected commit analysis and categorized changes into a polished changelog.

        Use Keep a Changelog format:

        # Changelog

        ## [Version or Date Range] - YYYY-MM-DD

        ### Breaking Changes
        - **scope**: Description. Migration: what to do

        ### Added
        - **scope**: Feature description

        ### Fixed
        - **scope**: Bug fix description

        ### Changed
        - **scope**: Change description

        ### Security
        - **scope**: Security fix description

        Rules:
        - Only include sections with entries
        - Bold scope if present
        - Most notable entries first per section
        - One line per entry, concise
        - Contributors list at bottom
    output_artifacts:
      - name: changelog
        path: .wave/output/CHANGELOG.md
        type: markdown
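The analyze-commits prompt extracts hash, author, date, subject, and body, then adds a category, a breaking-change flag, and a scope. One plausible sketch of a .wave/output/commit-analysis.json entry follows; the field names and the wrapping structure are assumptions made for illustration, since the binding contract (`.wave/contracts/commit-analysis.schema.json`) is not part of this diff:

```json
{
  "range": "v0.1.0..HEAD",
  "commits": [
    {
      "hash": "a1b2c3d",
      "author": "Jane Doe",
      "date": "2026-01-01",
      "subject": "fix(pipeline): handle nil state on resume",
      "category": "Fixes",
      "scope": "pipeline",
      "breaking": false
    }
  ]
}
```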
165
.wave/pipelines/code-review.yaml
Normal file
@@ -0,0 +1,165 @@
kind: WavePipeline
metadata:
  name: code-review
  description: "Comprehensive code review for pull requests"
  release: true

input:
  source: cli
  example: "review the authentication module"

steps:
  - id: diff-analysis
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze the code changes for: {{ input }}

        1. Identify all modified files and their purposes
        2. Map the change scope (which modules/packages affected)
        3. Find related tests that should be updated
        4. Check for breaking API changes

        Produce a structured result matching the contract schema.
    output_artifacts:
      - name: diff
        path: .wave/output/diff-analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/diff-analysis.json
        schema_path: .wave/contracts/diff-analysis.schema.json
      on_failure: retry
      max_retries: 2

  - id: security-review
    persona: auditor
    dependencies: [diff-analysis]
    memory:
      inject_artifacts:
        - step: diff-analysis
          artifact: diff
          as: changes
    exec:
      type: prompt
      source: |
        Security review of the changes:

        Check for:
        1. SQL injection, XSS, CSRF vulnerabilities
        2. Hardcoded secrets or credentials
        3. Insecure deserialization
        4. Missing input validation
        5. Authentication/authorization gaps
        6. Sensitive data exposure

        Output findings with severity (CRITICAL/HIGH/MEDIUM/LOW).
    output_artifacts:
      - name: security
        path: .wave/output/security-review.md
        type: markdown

  - id: quality-review
    persona: auditor
    dependencies: [diff-analysis]
    memory:
      inject_artifacts:
        - step: diff-analysis
          artifact: diff
          as: changes
    exec:
      type: prompt
      source: |
        Quality review of the changes:

        Check for:
        1. Error handling completeness
        2. Edge cases not covered
        3. Code duplication
        4. Naming consistency
        5. Missing or inadequate tests
        6. Performance implications
        7. Documentation gaps

        Output findings with severity and suggestions.
    output_artifacts:
      - name: quality
        path: .wave/output/quality-review.md
        type: markdown

  - id: summary
    persona: summarizer
    dependencies: [security-review, quality-review]
    memory:
      inject_artifacts:
        - step: security-review
          artifact: security
          as: security_findings
        - step: quality-review
          artifact: quality
          as: quality_findings
    exec:
      type: prompt
      source: |
        Synthesize the review findings into a final verdict:

        1. Overall assessment (APPROVE / REQUEST_CHANGES / NEEDS_DISCUSSION)
        2. Critical issues that must be fixed
        3. Suggested improvements (optional but recommended)
        4. Positive observations

        Format as a PR review comment ready to post.
        Do NOT include a title/header line — the publish step adds one.
    output_artifacts:
      - name: verdict
        path: .wave/output/review-summary.md
        type: markdown

  - id: publish
    persona: github-commenter
    dependencies: [summary]
    memory:
      inject_artifacts:
        - step: summary
          artifact: verdict
          as: review_summary
    exec:
      type: prompt
      source: |
        Post the code review summary as a PR comment.

        The original input was: {{ input }}
        Extract the PR number or URL from the input.

        1. Post the review as a PR comment using:
           gh pr comment <PR_NUMBER_OR_URL> --body "## Code Review (Wave Pipeline)

           <review content>

           ---
           *Generated by [Wave](https://github.com/re-cinq/wave) code-review pipeline*"
    output_artifacts:
      - name: publish-result
        path: .wave/output/publish-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/publish-result.json
        schema_path: .wave/contracts/publish-result.schema.json
        must_pass: true
      on_failure: retry
      max_retries: 2

outcomes:
  - type: url
    extract_from: .wave/output/publish-result.json
    json_path: .comment_url
    label: "Review Comment"
257
.wave/pipelines/dead-code.yaml
Normal file
@@ -0,0 +1,257 @@
|
|||||||
|
kind: WavePipeline
|
||||||
|
metadata:
|
||||||
|
name: dead-code
|
||||||
|
description: "Find dead or redundant code, remove it, and commit to a feature branch"
|
||||||
|
release: true
|
||||||
|
|
||||||
|
input:
|
||||||
|
source: cli
|
||||||
|
example: "find and remove dead code in internal/pipeline"
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- id: scan
|
||||||
|
persona: navigator
|
||||||
|
workspace:
|
||||||
|
mount:
|
||||||
|
- source: ./
|
||||||
|
target: /project
|
||||||
|
mode: readonly
|
||||||
|
exec:
|
||||||
|
type: prompt
|
||||||
|
source: |
|
||||||
|
Scan for dead or redundant code: {{ input }}
|
||||||
|
|
||||||
|
## What to Look For
|
||||||
|
|
||||||
|
1. **Unused exports**: Exported functions, types, constants, or variables
|
||||||
|
that are never referenced outside their package.
|
||||||
|
|
||||||
|
2. **Unreachable code**: Code after return/panic, impossible branches,
|
||||||
|
dead switch cases.
|
||||||
|
|
||||||
|
3. **Orphaned files**: Files not imported by any other file in the project.
|
||||||
|
|
||||||
|
4. **Redundant code**: Duplicate functions, copy-paste blocks,
|
||||||
|
wrappers that add no value.
|
||||||
|
|
||||||
|
5. **Stale tests**: Tests for functions that no longer exist,
|
||||||
|
or tests that test nothing meaningful.
|
||||||
|
|
||||||
|
6. **Unused dependencies**: Imports that are no longer needed.
|
||||||
|
|
||||||
|
7. **Commented-out code**: Large blocks of commented code that
|
||||||
|
should be deleted (git has history).
|
||||||
|
|
||||||
|
## Verification
|
||||||
|
|
||||||
|
For each finding, verify it's truly dead:
|
||||||
|
- Grep for all references across the entire codebase
|
||||||
|
- Check for reflect-based or string-based usage
|
||||||
|
- Check if it's part of an interface implementation
|
||||||
|
- Check for build tag conditional compilation
|
||||||
|
|
||||||
|
Produce a structured JSON result matching the contract schema.
|
||||||
|
Only include findings with high or medium confidence. Skip low confidence.
|
||||||
|
output_artifacts:
|
||||||
|
- name: scan_results
|
||||||
|
path: .wave/output/dead-code-scan.json
|
||||||
|
type: json
|
||||||
|
handover:
|
||||||
|
contract:
|
||||||
|
type: json_schema
|
||||||
|
source: .wave/output/dead-code-scan.json
|
||||||
|
schema_path: .wave/contracts/dead-code-scan.schema.json
|
||||||
|
on_failure: retry
|
||||||
|
max_retries: 2
|
||||||
|
|
||||||
|
- id: clean
|
||||||
|
persona: craftsman
|
||||||
|
dependencies: [scan]
|
||||||
|
memory:
|
||||||
|
inject_artifacts:
|
||||||
|
- step: scan
|
||||||
|
artifact: scan_results
|
||||||
|
as: findings
|
||||||
|
workspace:
|
||||||
|
type: worktree
|
||||||
|
branch: "chore/{{ pipeline_id }}"
|
||||||
|
exec:
|
||||||
|
type: prompt
|
||||||
|
source: |
|
||||||
|
Remove the dead code on this isolated worktree branch.
|
||||||
|
|
||||||
|
The scan findings have been injected into your workspace. Read them first.
|
||||||
|
|
||||||
|
## Process
|
||||||
|
|
||||||
|
1. **Remove dead code** — ONLY high-confidence findings:
|
||||||
|
- Start with unused imports (safest)
|
||||||
|
- Then commented-out code blocks
|
||||||
|
- Then unused exports
|
||||||
|
- Then orphaned files
|
||||||
|
- Skip anything with confidence=medium unless trivially safe
|
||||||
|
- After each removal, verify: `go build ./...`
|
||||||
|
|
||||||
|
2. **Run goimports** if available to clean up imports:
|
||||||
|
```bash
|
||||||
|
goimports -w <modified-files> 2>/dev/null || true
|
||||||
|
```
|
||||||
|
|
||||||
|
3. **Run full test suite**:
|
||||||
|
```bash
|
||||||
|
go test ./... -count=1
|
||||||
|
```
|
||||||
|
|
||||||
|
4. **Commit**:
|
||||||
|
```bash
|
||||||
|
git add <specific-files>
|
||||||
|
git commit -m "chore: remove dead code
|
||||||
|
|
||||||
|
Removed N items of dead code:
|
||||||
|
- DC-001: <symbol> (unused export)
|
||||||
|
- DC-002: <file> (orphaned file)
|
||||||
|
..."
|
||||||
|
```
|
||||||
|
|
||||||
|
If ANY test fails after a removal, revert that specific removal
|
||||||
|
and continue with the next item.
|
||||||
|
handover:
|
||||||
|
contract:
|
||||||
|
type: test_suite
|
||||||
|
command: "{{ project.test_command }}"
|
||||||
|
must_pass: true
|
||||||
|
on_failure: retry
|
||||||
|
max_retries: 3
|
||||||
|
|
||||||
|
- id: verify
|
||||||
|
persona: auditor
|
||||||
|
dependencies: [clean]
|
||||||
|
memory:
|
||||||
|
inject_artifacts:
|
||||||
|
- step: scan
|
||||||
|
artifact: scan_results
|
||||||
|
as: original_findings
|
||||||
|
exec:
|
||||||
|
type: prompt
|
||||||
|
source: |
|
||||||
|
Verify the dead code removal was safe.
|
||||||
|
|
||||||
|
The original scan findings have been injected into your workspace. Read them first.
|
||||||
|
|
||||||
|
Check:
|
||||||
|
1. Were only high-confidence items removed?
|
||||||
|
2. Are all tests still passing?
|
||||||
|
3. Does the project still build cleanly?
|
||||||
|
4. Were any false positives accidentally removed?
|
||||||
|
5. Is the commit focused (no unrelated changes)?
|
||||||
|
|
||||||
|
Produce a verification report covering:
|
||||||
|
- Items removed (with justification)
|
||||||
|
        - Items skipped (with reason)
        - Lines of code removed
        - Test status
        - Overall assessment: CLEAN / NEEDS_REVIEW
    output_artifacts:
      - name: verification
        path: .wave/output/verification.md
        type: markdown

  - id: create-pr
    persona: craftsman
    dependencies: [verify]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: findings
        - step: verify
          artifact: verification
          as: verification_report
    workspace:
      type: worktree
      branch: "chore/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Create a pull request for the dead code removal.

        ## Working Directory

        You are running in an **isolated git worktree** shared with previous pipeline steps.
        Your working directory IS the project root. The branch already exists from the
        clean step — just push it and create the PR.

        ## SAFETY: Do NOT Modify the Working Tree

        This step MUST NOT run `git checkout`, `git stash`, or any command that changes
        the current branch or working tree state.

        ## Instructions

        ### Step 1: Load Context

        The scan findings and verification report have been injected into your workspace.
        Read them both to understand what was found and the verification outcome.

        ### Step 2: Push the Branch

        ```bash
        git push -u origin HEAD
        ```

        ### Step 3: Create Pull Request

        ```bash
        gh pr create --title "chore: remove dead code" --body "$(cat <<'PREOF'
        ## Summary

        Automated dead code removal based on static analysis scan.

        <summarize what was removed: N items, types, estimated lines saved>

        ## Verification

        <summarize verification report: CLEAN or NEEDS_REVIEW, test status>

        ## Removed Items

        <list each removed item with its ID, type, and location>

        ## Test Plan

        - Full test suite passed after each removal
        - Build verified clean after all removals
        - Auditor persona verified no false positives
        PREOF
        )"
        ```

        ### Step 4: Request Copilot Review (Best-Effort)

        ```bash
        gh pr edit --add-reviewer "copilot" 2>/dev/null || true
        ```

        ## CONSTRAINTS

        - Do NOT spawn Task subagents — work directly in the main context
        - Do NOT run `git checkout`, `git stash`, or any branch-switching commands
        - Do NOT include Co-Authored-By or AI attribution in commits

    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

outcomes:
  - type: pr
    extract_from: .wave/output/pr-result.json
    json_path: .pr_url
    label: "Pull Request"
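The `outcomes` block above pulls the PR URL out of the JSON artifact with a `json_path` expression. As a rough illustration, here is how such an extraction could work in Python, assuming a simple dot-path like `.pr_url` and a hypothetical artifact shape; the real Wave executor may resolve paths differently:

```python
import json

def extract_outcome(artifact_text: str, json_path: str):
    """Resolve a simple dot-path like '.pr_url' against a JSON artifact."""
    value = json.loads(artifact_text)
    for key in json_path.lstrip(".").split("."):
        value = value[key]  # KeyError here means the contract was not met
    return value

# Hypothetical pr-result.json written by the create-pr step.
artifact = '{"pr_url": "https://github.com/example/repo/pull/42", "branch": "chore/abc123"}'
print(extract_outcome(artifact, ".pr_url"))
```

Because `must_pass: true` is set on the handover contract, a missing `pr_url` key would surface as a contract failure and trigger the retry policy rather than a silent empty outcome.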
138
.wave/pipelines/debug.yaml
Normal file
@@ -0,0 +1,138 @@
kind: WavePipeline
metadata:
  name: debug
  description: "Systematic debugging with hypothesis testing"
  release: true

input:
  source: cli
  example: "TestPipelineExecutor fails with nil pointer on resume"

steps:
  - id: reproduce
    persona: debugger
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Reproduce and characterize the issue: {{ input }}

        1. Understand expected vs actual behavior
        2. Create minimal reproduction steps
        3. Identify relevant code paths
        4. Note environmental factors (OS, versions, config)
    output_artifacts:
      - name: reproduction
        path: .wave/output/reproduction.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/reproduction.json
        schema_path: .wave/contracts/debug-reproduction.schema.json
        on_failure: retry
        max_retries: 2

  - id: hypothesize
    persona: debugger
    dependencies: [reproduce]
    memory:
      inject_artifacts:
        - step: reproduce
          artifact: reproduction
          as: issue
    exec:
      type: prompt
      source: |
        Form hypotheses about the root cause.

        For each hypothesis:
        1. What could cause this behavior?
        2. What evidence would confirm/refute it?
        3. How to test this hypothesis?

        Rank by likelihood and ease of testing.
    output_artifacts:
      - name: hypotheses
        path: .wave/output/hypotheses.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/hypotheses.json
        schema_path: .wave/contracts/debug-hypotheses.schema.json
        on_failure: retry
        max_retries: 2

  - id: investigate
    persona: debugger
    dependencies: [hypothesize]
    memory:
      inject_artifacts:
        - step: reproduce
          artifact: reproduction
          as: issue
        - step: hypothesize
          artifact: hypotheses
          as: hypotheses
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Test each hypothesis systematically.

        1. Start with most likely / easiest to test
        2. Use git bisect if needed to find regression
        3. Add diagnostic logging to trace execution
        4. Examine data flow and state changes
        5. Document findings for each hypothesis

        Continue until root cause is identified.
    output_artifacts:
      - name: findings
        path: .wave/output/investigation.md
        type: markdown

  - id: fix
    persona: craftsman
    dependencies: [investigate]
    memory:
      inject_artifacts:
        - step: investigate
          artifact: findings
          as: root_cause
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Fix the root cause identified in the investigation.

        1. Implement the minimal fix
        2. Add a regression test that would have caught this
        3. Remove any diagnostic code added during debugging
        4. Verify the original reproduction no longer fails
        5. Check for similar issues elsewhere
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry
        max_retries: 3
    output_artifacts:
      - name: fix
        path: .wave/output/fix-summary.md
        type: markdown
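The hypothesize step above asks for hypotheses ranked by likelihood and ease of testing. A minimal sketch of that ordering, assuming hypothetical fields `likelihood` (0 to 1) and `test_effort` (1 = trivial, 5 = expensive) on each entry in `hypotheses.json`; the actual contract schema may name these differently:

```python
def rank_hypotheses(hypotheses):
    """Order hypotheses: most likely first, cheapest to test breaking ties."""
    return sorted(hypotheses, key=lambda h: (-h["likelihood"], h["test_effort"]))

# Hypothetical entries matching the example input about a nil pointer on resume.
hypotheses = [
    {"id": "H1", "cause": "state not restored on resume", "likelihood": 0.7, "test_effort": 1},
    {"id": "H2", "cause": "race in step scheduler", "likelihood": 0.7, "test_effort": 4},
    {"id": "H3", "cause": "stale config cache", "likelihood": 0.2, "test_effort": 2},
]
print([h["id"] for h in rank_hypotheses(hypotheses)])  # ['H1', 'H2', 'H3']
```

The tie-break matters in practice: two equally plausible causes should be attacked cheapest-first, which is exactly what the investigate step's "start with most likely / easiest to test" instruction encodes.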
251
.wave/pipelines/doc-loop.yaml
Normal file
@@ -0,0 +1,251 @@
kind: WavePipeline
metadata:
  name: doc-loop
  description: "Pre-PR documentation consistency gate — scans changes, cross-references docs, and creates a GitHub issue with inconsistencies"
  release: false

input:
  source: cli
  example: "full"
  schema:
    type: string
    description: "Scan scope: empty for branch diff, 'full' for all files, or a git ref"

steps:
  - id: scan-changes
    persona: navigator
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Scan the repository to identify changed files and capture the current documentation state.

        ## Determine Scan Scope

        Input: {{ input }}

        - If the input is empty or blank: use `git log --name-status main...HEAD` to find files changed on the current branch vs main.
        - If the input is "full": skip the diff — treat ALL files as in-scope and scan all documentation.
        - Otherwise, treat the input as a git ref and use `git log --name-status <input>...HEAD`.

        Run `git log --oneline --name-status` with the appropriate range to get the list of changed files.
        If no commits are found (e.g. on main with no branch divergence), fall back to `git status --porcelain` for uncommitted changes.

        ## Categorize Changed Files

        Sort each changed file into one of these categories:
        - **source_code**: source files matching the project language (excluding test files)
        - **tests**: test files (files with test/spec in name or in test directories)
        - **documentation**: markdown files, doc directories, README, CONTRIBUTING, CHANGELOG
        - **configuration**: config files, schema files, environment configs
        - **build**: build scripts, CI/CD configs, Makefiles, Dockerfiles
        - **other**: everything else

        ## Read Documentation Surface Area

        Discover and read key documentation files. Common locations include:
        - Project root: README.md, CONTRIBUTING.md, CHANGELOG.md
        - Documentation directories: docs/, doc/, wiki/
        - Configuration docs: any files documenting config options or environment variables
        - CLI/API docs: any files documenting commands, endpoints, or public interfaces

        Adapt your scan to the actual project structure — do not assume a fixed layout.

        ## Output

        Write your findings as structured JSON.
        Include:
        - scan_scope: mode ("diff" or "full"), range used, base_ref
        - changed_files: total_count + categories object with arrays of file paths
        - documentation_snapshot: array of {path, exists, summary} for each doc file
        - timestamp: current ISO 8601 timestamp
    output_artifacts:
      - name: scan-results
        path: .wave/output/scan-results.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/scan-results.json
        schema_path: .wave/contracts/doc-scan-results.schema.json
        on_failure: retry
        max_retries: 2

  - id: analyze-consistency
    persona: reviewer
    dependencies: [scan-changes]
    memory:
      inject_artifacts:
        - step: scan-changes
          artifact: scan-results
          as: scan
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Analyze documentation consistency by cross-referencing code changes with documentation.

        ## Cross-Reference Checks

        For each category of changed files, perform these checks:

        **CLI/API surface** (changed command or endpoint files):
        - Compare command definitions, endpoints, or public interfaces against documentation
        - Check for new, removed, or changed options/parameters
        - Verify documented examples still work

        **Configuration** (changed config schemas or parsers):
        - Compare documented options against actual config structure
        - Check for undocumented settings or environment variables

        **Source code** (changed source files):
        - Check for new exported functions/types that might need API docs
        - Look for stale code comments referencing removed features
        - Verify public API descriptions in docs match actual behavior

        **Environment variables**:
        - Scan source code for environment variable access patterns
        - Compare against documentation
        - Flag undocumented environment variables

        ## Severity Rating

        Rate each inconsistency:
        - **CRITICAL**: Feature exists in code but completely missing from docs, or docs describe non-existent feature
        - **HIGH**: Incorrect information in docs (wrong flag name, wrong description, wrong behavior)
        - **MEDIUM**: Outdated information (stale counts, missing new options, incomplete lists)
        - **LOW**: Minor style issues, slightly imprecise wording

        ## Output

        Write your analysis as structured JSON.
        Include:
        - summary: total_count, by_severity counts, clean (true if zero inconsistencies)
        - inconsistencies: array of {id (DOC-001 format), severity, category, title, description, source_location, doc_location, fix_description}
        - timestamp: current ISO 8601 timestamp

        If no inconsistencies are found, output an empty array with clean=true.
    output_artifacts:
      - name: consistency-report
        path: .wave/output/consistency-report.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/consistency-report.json
        schema_path: .wave/contracts/doc-consistency-report.schema.json
        on_failure: retry
        max_retries: 2

  - id: compose-report
    persona: navigator
    dependencies: [analyze-consistency]
    memory:
      inject_artifacts:
        - step: analyze-consistency
          artifact: consistency-report
          as: report
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Compose a documentation consistency report as a GitHub-ready markdown file.

        ## Check for Inconsistencies

        If the consistency report has `summary.clean == true` (zero inconsistencies):
        - Write a short "No inconsistencies found" message as the report
        - Write the issue result with skipped=true and reason="clean"

        ## Compose the Report

        Write the report as markdown:

        ```
        ## Documentation Consistency Report

        **Scan date**: <timestamp from report>
        **Inconsistencies found**: <total_count>

        ### Summary by Severity
        | Severity | Count |
        |----------|-------|
        | Critical | N |
        | High | N |
        | Medium | N |
        | Low | N |

        ### Task List

        For each inconsistency (sorted by severity, critical first):
        - [ ] **[DOC-001]** (CRITICAL) Title here — `doc_location`
          Fix: fix_description

        ---
        *Generated by [Wave](https://github.com/re-cinq/wave) doc-loop pipeline*
        ```
    output_artifacts:
      - name: report
        path: .wave/output/report.md
        type: markdown

  - id: publish
    persona: craftsman
    dependencies: [compose-report]
    memory:
      inject_artifacts:
        - step: compose-report
          artifact: report
          as: report
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        PUBLISH — create a GitHub issue from the documentation report.

        If the report says "No inconsistencies found", skip issue creation and exit.

        ## Detect Repository

        Run: `gh repo view --json nameWithOwner --jq .nameWithOwner`

        ## Create the Issue

        ```bash
        gh issue create \
          --title "docs: documentation consistency report" \
          --body-file .wave/artifacts/report \
          --label "documentation"
        ```

        If the `documentation` label doesn't exist, create without labels.
        If any `gh` command fails, log the error and continue.

        ## Capture Result

        Write a JSON status report.
    output_artifacts:
      - name: issue-result
        path: .wave/output/issue-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/issue-result.json
        schema_path: .wave/contracts/doc-issue-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

outcomes:
  - type: issue
    extract_from: .wave/output/issue-result.json
    json_path: .issue_url
    label: "Documentation Issue"
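The analyze-consistency step above must emit a `summary` object with per-severity counts and a `clean` flag. A minimal sketch of that aggregation, assuming only that each inconsistency carries a `severity` field as the contract describes:

```python
from collections import Counter

def summarize(inconsistencies):
    """Build the summary object the consistency-report contract asks for."""
    by_severity = Counter(item["severity"] for item in inconsistencies)
    return {
        "total_count": len(inconsistencies),
        "by_severity": dict(by_severity),
        "clean": len(inconsistencies) == 0,
    }

findings = [
    {"id": "DOC-001", "severity": "CRITICAL"},
    {"id": "DOC-002", "severity": "MEDIUM"},
]
print(summarize(findings))
```

The `clean` flag is what lets the downstream compose-report and publish steps short-circuit: both check it (or its "No inconsistencies found" rendering) before creating any GitHub issue.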
241
.wave/pipelines/doc-sync.yaml
Normal file
@@ -0,0 +1,241 @@
kind: WavePipeline
metadata:
  name: doc-sync
  description: "Scan documentation for inconsistencies, fix them, and commit to a feature branch"
  release: true

input:
  source: cli
  example: "sync docs with current implementation"

steps:
  - id: scan-changes
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Scan the repository for documentation inconsistencies: {{ input }}

        ## Process

        1. **Identify documentation files**: Find all markdown files, README,
           CONTRIBUTING, docs/ directory, inline code comments with doc references.

        2. **Identify code surface area**: Scan for exported functions, CLI commands,
           config options, environment variables, API endpoints.

        3. **Cross-reference**: For each documented feature, verify it exists in code.
           For each code feature, verify it's documented.

        4. **Check accuracy**: Compare documented behavior, flags, options, and
           examples against actual implementation.

        5. **Categorize findings**:
           - MISSING_DOCS: Feature in code, not in docs
           - STALE_DOCS: Docs reference removed/changed feature
           - INACCURATE: Docs describe wrong behavior
           - INCOMPLETE: Docs exist but missing details

        Write your findings as structured JSON.
        Include: scan_scope, findings (id, type, severity, title, doc_location, code_location,
        description, suggested_fix), summary (total_findings, by_type, by_severity, fixable_count),
        and timestamp.
    output_artifacts:
      - name: scan_results
        path: .wave/output/doc-scan.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/doc-scan.json
        schema_path: .wave/contracts/doc-sync-scan.schema.json
        on_failure: retry
        max_retries: 2

  - id: analyze
    persona: reviewer
    dependencies: [scan-changes]
    memory:
      inject_artifacts:
        - step: scan-changes
          artifact: scan_results
          as: scan
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Review the doc scan findings and prioritize fixes.

        For each finding:
        1. Verify it's a real inconsistency (not a false positive)
        2. Assess if it can be fixed by editing docs alone
        3. Prioritize: CRITICAL/HIGH first, then by effort

        Produce a fix plan as markdown:
        - Ordered list of fixes to apply
        - For each: which file to edit, what to change, why
        - Skip items that require code changes (docs-only fixes)
        - Estimated scope of changes
    output_artifacts:
      - name: fix_plan
        path: .wave/output/fix-plan.md
        type: markdown

  - id: fix-docs
    persona: craftsman
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: scan-changes
          artifact: scan_results
          as: scan
        - step: analyze
          artifact: fix_plan
          as: plan
    workspace:
      type: worktree
      branch: "fix/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Fix the documentation inconsistencies on this isolated worktree branch.

        ## Process

        1. **Apply fixes** following the priority order from the plan:
           - Edit documentation files to fix each inconsistency
           - Keep changes minimal and focused
           - Preserve existing formatting and style
           - Do NOT modify source code — docs-only changes

        2. **Verify**: Ensure no broken links or formatting issues

        3. **Commit**:
           ```bash
           git add <changed-doc-files>
           git commit -m "docs: sync documentation with implementation

           Fix N documentation inconsistencies found by doc-sync pipeline:
           - DOC-001: <title>
           - DOC-002: <title>
           ..."
           ```

        Write a summary including:
        - Branch name
        - List of files modified
        - Findings fixed vs skipped
        - Commit hash
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry
        max_retries: 2
    output_artifacts:
      - name: result
        path: .wave/output/result.md
        type: markdown

  - id: create-pr
    persona: craftsman
    dependencies: [fix-docs]
    memory:
      inject_artifacts:
        - step: scan-changes
          artifact: scan_results
          as: scan
        - step: fix-docs
          artifact: result
          as: fix_result
    workspace:
      type: worktree
      branch: "fix/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Create a pull request for the documentation fixes.

        ## Working Directory

        You are running in an **isolated git worktree** shared with previous pipeline steps.
        Your working directory IS the project root. The branch already exists from the
        fix-docs step — just push it and create the PR.

        ## SAFETY: Do NOT Modify the Working Tree

        This step MUST NOT run `git checkout`, `git stash`, or any command that changes
        the current branch or working tree state.

        ## Instructions

        ### Step 1: Push the Branch

        ```bash
        git push -u origin HEAD
        ```

        ### Step 2: Create Pull Request

        ```bash
        gh pr create --title "docs: sync documentation with implementation" --body "$(cat <<'PREOF'
        ## Summary

        Automated documentation sync to fix inconsistencies between docs and code.

        <summarize: N findings fixed, types of issues addressed>

        ## Changes

        <list each doc file modified and what was fixed>

        ## Findings Addressed

        <list each finding ID, type, and resolution>

        ## Skipped

        <list any findings that were skipped and why>
        PREOF
        )"
        ```

        ### Step 3: Request Copilot Review (Best-Effort)

        ```bash
        gh pr edit --add-reviewer "copilot" 2>/dev/null || true
        ```

        ## CONSTRAINTS

        - Do NOT spawn Task subagents — work directly in the main context
        - Do NOT run `git checkout`, `git stash`, or any branch-switching commands
        - Do NOT include Co-Authored-By or AI attribution in commits

    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

outcomes:
  - type: pr
    extract_from: .wave/output/pr-result.json
    json_path: .pr_url
    label: "Pull Request"
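The scan step above asks for a summary with `by_type` counts and a `fixable_count`. A minimal sketch of that rollup, assuming a hypothetical boolean `docs_only_fix` marker on each finding (the schema may encode fixability differently):

```python
def scan_summary(findings):
    """Aggregate doc-scan findings into the summary shape the contract describes."""
    by_type = {}
    for f in findings:
        by_type[f["type"]] = by_type.get(f["type"], 0) + 1
    return {
        "total_findings": len(findings),
        "by_type": by_type,
        # Only docs-only items count: the fix-docs step never touches source code.
        "fixable_count": sum(1 for f in findings if f.get("docs_only_fix", False)),
    }

findings = [
    {"id": "DOC-001", "type": "MISSING_DOCS", "docs_only_fix": True},
    {"id": "DOC-002", "type": "STALE_DOCS", "docs_only_fix": True},
    {"id": "DOC-003", "type": "INACCURATE", "docs_only_fix": False},
]
print(scan_summary(findings))
```

Splitting fixable from unfixable at scan time is what lets the analyze step skip items that require code changes instead of discovering them mid-fix.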
121
.wave/pipelines/editing.yaml
Normal file
@@ -0,0 +1,121 @@
kind: WavePipeline
metadata:
  name: editing
  description: "Apply editorial criticism to a blog post draft"
  release: true

input:
  source: cli
  examples:
    - "69965d55.md | restructure section 3, the sandboxing analogy is weak, needs a concrete before/after code example"
    - "698b6f33.md | too long, cut the history section, sharpen the thesis in the intro"
    - "69965d55.md | tone is too cautious, rewrite to be more authoritative per the series voice"

steps:
  - id: analyze
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze a blog draft and map editorial criticism to specific actions.

        Input: {{ input }}

        The input format is "filename.md | criticism".
        Parse the filename and criticism from the input.

        ## Steps

        1. **Read the draft**: Read the note file to understand its full content
        2. **Map the structure**: Identify all sections with headings, line ranges,
           summaries, and word counts
        3. **Read linked notes**: Follow outgoing links to understand the context
           the draft draws from
        4. **Parse the criticism**: Break the author's feedback into discrete editorial concerns
        5. **Create an editorial plan**: For each concern, produce a specific action:
           - What type of edit (rewrite, restructure, cut, expand, add, replace_example,
             fix_tone, fix_links)
           - Which section it targets
           - What specifically should change and why
           - Which part of the criticism it addresses
           - Priority (high/medium/low)
        6. **Order by priority**: High-priority structural changes first, then rewrites,
           then polish

        ## Output

        Write the result as JSON to output/editorial-plan.json matching the contract schema.
    output_artifacts:
      - name: editorial-plan
        path: output/editorial-plan.json
        type: json
    handover:
      contract:
        type: json_schema
        source: output/editorial-plan.json
        schema_path: .wave/contracts/editorial-plan.schema.json
        on_failure: retry
        max_retries: 2

  - id: revise
    persona: scribe
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: analyze
          artifact: editorial-plan
          as: plan
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Revise a blog draft by applying the editorial plan.

        Input: {{ input }}

        Parse the filename from the input (format: "filename.md | criticism").

        Read the editorial plan: `cat artifacts/plan`

        ## Steps

        1. **Read the current draft** in full

        2. **Apply each editorial action** in order from the plan:
           - **rewrite**: Rewrite the target section preserving links and references
           - **restructure**: Move, merge, or split sections as described
           - **cut**: Remove the section or content, clean up any dangling references
           - **expand**: Add depth, examples, or explanation to the section
           - **add**: Insert new content at the specified location
           - **replace_example**: Swap out the example for a better one
           - **fix_tone**: Adjust voice (authoritative, not cautionary; framework-oriented)
           - **fix_links**: Add missing links or fix broken references

        3. **Maintain writing standards**:
           - One sentence per line
           - Contextual link explanations
           - Blog series voice: authoritative, framework-oriented, technically substantive
           - Keep quotes with proper attribution

        4. **Commit the revision**:
           - `git add *.md`
           - `git commit -m "editing: {brief description of main changes}"`

        5. **Write revision summary** to output/revision-summary.md:
           - Draft filename and title
           - List of editorial actions applied (EDIT-001, etc.)
           - Sections modified
           - Word count before and after
    output_artifacts:
      - name: revision-summary
        path: output/revision-summary.md
        type: markdown
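Both steps in the editing pipeline parse the same `"filename.md | criticism"` CLI input. A minimal sketch of that split, splitting on the first pipe so the criticism text itself may freely contain `|` characters:

```python
def parse_editing_input(raw: str):
    """Split the 'filename.md | criticism' CLI input into its two parts."""
    filename, _, criticism = raw.partition("|")  # splits at the FIRST '|' only
    return filename.strip(), criticism.strip()

name, crit = parse_editing_input(
    "69965d55.md | restructure section 3, the sandboxing analogy is weak"
)
print(name)  # 69965d55.md
```

Using `str.partition` rather than `str.split("|")` keeps everything after the first delimiter intact, which matches how the prompt tells the persona to read the format.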
127
.wave/pipelines/explain.yaml
Normal file
@@ -0,0 +1,127 @@
kind: WavePipeline
metadata:
  name: explain
  description: "Deep-dive explanation of code, modules, or architectural patterns"
  release: true

input:
  source: cli
  example: "explain the pipeline execution system and how steps are scheduled"

steps:
  - id: explore
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Explore the codebase to understand: {{ input }}

        ## Exploration Steps

        1. **Find relevant files**: Use Glob and Grep to locate all files related
           to the topic. Cast a wide net — include implementations, tests, configs,
           and documentation.

        2. **Trace the call graph**: For key entry points, follow the execution flow.
           Note which functions call which, and how data flows through the system.

        3. **Identify key abstractions**: Find the core types, interfaces, and structs.
           Note their responsibilities and relationships.

        4. **Map dependencies**: Which packages/modules does this depend on?
           Which depend on it?

        5. **Find tests**: Locate test files that exercise this code.
           Tests often reveal intended behavior and edge cases.

        6. **Check configuration**: Find config files, constants, or environment
           variables that affect behavior.

    output_artifacts:
      - name: exploration
        path: .wave/output/exploration.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/exploration.json
        schema_path: .wave/contracts/explain-exploration.schema.json
        on_failure: retry
        max_retries: 2

  - id: analyze
    persona: planner
    dependencies: [explore]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: codebase_map
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze the architecture and design of the explored code.

        Review the injected exploration data, then read the key source files identified. Focus on:

        1. **Design patterns**: What patterns are used and why?
        2. **Data flow**: How does data enter, transform, and exit?
        3. **Error handling**: What's the error strategy?
        4. **Concurrency model**: Goroutines, channels, mutexes?
        5. **Extension points**: Where can new functionality be added?
        6. **Design decisions**: What trade-offs were made?
    output_artifacts:
      - name: analysis
        path: .wave/output/analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/analysis.json
        schema_path: .wave/contracts/explain-analysis.schema.json
        on_failure: retry
        max_retries: 2

  - id: document
    persona: philosopher
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: codebase_map
        - step: analyze
          artifact: analysis
          as: architecture
    exec:
      type: prompt
      source: |
        Write a comprehensive explanation document.

        Review the injected exploration and architecture data, then produce a markdown document with:

        1. **Overview** — One paragraph summary
        2. **Key Concepts** — Core abstractions and terminology (glossary)
        3. **Architecture** — How pieces fit together (include ASCII diagram)
        4. **How It Works** — Step-by-step main execution flow with file:line refs
        5. **Design Decisions** — Decision → Rationale → Trade-off entries
        6. **Extension Guide** — How to add new functionality
|
||||||
|
7. **Testing Strategy** — How the code is tested
|
||||||
|
8. **Common Pitfalls** — Things that trip people up
|
||||||
|
|
||||||
|
Write for an experienced developer new to this codebase.
|
||||||
|
Use real file paths, function names, and type names.
|
||||||
|
output_artifacts:
|
||||||
|
- name: explanation
|
||||||
|
path: .wave/output/explanation.md
|
||||||
|
type: markdown
|
||||||
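Each `json_schema` handover above gates the next step on the previous step's JSON artifact. A minimal sketch of that kind of check in Python, assuming only that the artifact must parse as JSON and contain some required top-level keys (the key names here are illustrative; the real schema lives in `.wave/contracts/explain-exploration.schema.json`):

```python
import json

def check_contract(artifact_text, required_keys):
    """Return (ok, missing_keys) for a minimal required-keys contract check."""
    try:
        data = json.loads(artifact_text)
    except json.JSONDecodeError:
        # Unparseable output fails the contract outright.
        return False, list(required_keys)
    missing = [k for k in required_keys if k not in data]
    return len(missing) == 0, missing

# Illustrative artifact shape, not the actual exploration schema.
ok, missing = check_contract(
    '{"relevant_files": [], "key_abstractions": []}',
    ["relevant_files", "key_abstractions"],
)
```

On failure, the step is retried up to `max_retries` times per the handover config.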
201
.wave/pipelines/feature.yaml
Normal file
@@ -0,0 +1,201 @@
kind: WavePipeline
metadata:
  name: feature
  description: "Plan, implement, test, and commit a feature to a new branch"
  release: false

input:
  source: cli
  example: "add a --dry-run flag to the run command"

steps:
  - id: explore
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Explore the codebase to plan this feature: {{ input }}

        ## Exploration

        1. **Understand the request**: What is being asked? Assess scope
           (small = 1-2 files, medium = 3-7, large = 8+).

        2. **Find related code**: Use Glob and Grep to find files related
           to the feature. Note paths, relevance, and key symbols.

        3. **Identify patterns**: Read key files. Document conventions that
           must be followed (naming, error handling, testing patterns).

        4. **Map affected modules**: Which packages are directly/indirectly affected?

        5. **Survey tests**: Find related test files, testing patterns, gaps.

        6. **Assess risks**: Breaking changes, performance, security implications.

        Produce a structured JSON result matching the contract schema.
    output_artifacts:
      - name: exploration
        path: .wave/output/exploration.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/exploration.json
        schema_path: .wave/contracts/feature-exploration.schema.json
        on_failure: retry
        max_retries: 2

  - id: plan
    persona: planner
    dependencies: [explore]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: context
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Create an implementation plan for this feature.

        Feature: {{ input }}

        The codebase exploration has been injected into your workspace. Read it first.

        Break the feature into ordered implementation steps:

        1. For each step: what to do, which files to modify, acceptance criteria
        2. Dependencies between steps
        3. What tests to write
        4. Complexity estimate per step (S/M/L)

        Produce a structured JSON result matching the contract schema.
    output_artifacts:
      - name: plan
        path: .wave/output/plan.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/plan.json
        schema_path: .wave/contracts/feature-plan.schema.json
        on_failure: retry
        max_retries: 2

  - id: implement
    persona: craftsman
    dependencies: [plan]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: context
        - step: plan
          artifact: plan
          as: plan
    workspace:
      type: worktree
      branch: "feat/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        Implement the feature on this isolated worktree branch.

        The codebase exploration and implementation plan have been injected into your
        workspace. Read them both before starting.

        Feature: {{ input }}

        ## Process

        1. **Implement step by step** following the plan:
           - Follow existing codebase patterns identified in exploration
           - Write tests alongside implementation
           - After each significant change, verify it compiles: `go build ./...`

        2. **Run full test suite**:
           ```bash
           go test ./... -count=1
           ```
           Fix any failures before proceeding.

        3. **Commit**:
           ```bash
           git add <specific-files>
           git commit -m "<commit_message_suggestion from plan>

           Implementation following plan:
           - S01: <title>
           - S02: <title>
           ..."
           ```

        Commit changes to the worktree branch.
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
        max_retries: 3
    output_artifacts:
      - name: result
        path: .wave/output/result.md
        type: markdown

  # ── Publish ─────────────────────────────────────────────────────────
  - id: publish
    persona: craftsman
    dependencies: [implement]
    memory:
      inject_artifacts:
        - step: implement
          artifact: result
          as: result
    workspace:
      type: worktree
      branch: "feat/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        PUBLISH — push the branch and create a pull request.

        ## Steps

        1. Push the branch:
           ```bash
           git push -u origin HEAD
           ```

        2. Create a pull request using the implementation result as context:
           ```bash
           gh pr create --title "feat: $(git log --format=%s -1)" --body-file .wave/artifacts/result
           ```
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2
    outcomes:
      - type: pr
        extract_from: .wave/output/pr-result.json
        json_path: .pr_url
        label: "Pull Request"
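The `outcomes` entry of the publish step pulls the PR link out of `pr-result.json` using the JSON path `.pr_url`. A sketch of that extraction for flat-or-nested dot paths; only the `pr_url` field is implied by the pipeline, the `branch` field in the sample document is an assumption:

```python
import json

def extract_json_path(document, path):
    """Resolve a dot-separated JSON path like '.pr_url' against a JSON document."""
    value = json.loads(document)
    for key in path.lstrip(".").split("."):
        value = value[key]
    return value

pr_result = '{"pr_url": "https://example.com/owner/repo/pull/42", "branch": "feat/abc123"}'
pr_url = extract_json_path(pr_result, ".pr_url")
```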
51
.wave/pipelines/hello-world.yaml
Normal file
@@ -0,0 +1,51 @@
kind: WavePipeline
metadata:
  name: hello-world
  description: "Simple test pipeline to verify Wave is working"

input:
  source: cli
  example: "testing Wave"

steps:
  - id: greet
    persona: craftsman
    exec:
      type: prompt
      source: |
        You are a simple greeting bot. The user said: "{{ input }}"

        Your final response must be ONLY this text (nothing else - no explanation, no markdown):

        Hello from Wave! Your message was: {{ input }}
    output_artifacts:
      - name: greeting
        path: greeting.txt
        type: text

  - id: verify
    persona: navigator
    dependencies: [greet]
    memory:
      inject_artifacts:
        - step: greet
          artifact: greeting
          as: greeting_file
    exec:
      type: prompt
      source: |
        Verify the greeting artifact exists and contains content.

        Output a JSON result confirming verification status.
    output_artifacts:
      - name: result
        path: .wave/output/result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/result.json
        schema_path: .wave/contracts/hello-world-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2
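The verify step's job is small: confirm the greeting artifact has content and emit a JSON verification result. A sketch of that check; the `verified` and `length` field names are assumptions, since the real shape is defined by `hello-world-result.schema.json`:

```python
import json

def verify_greeting(greeting_text):
    """Sketch of the verify step: confirm the greeting artifact is non-empty."""
    verified = len(greeting_text.strip()) > 0
    return json.dumps({"verified": verified, "length": len(greeting_text)})

result = json.loads(verify_greeting("Hello from Wave! Your message was: testing Wave"))
```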
88
.wave/pipelines/hotfix.yaml
Normal file
@@ -0,0 +1,88 @@
kind: WavePipeline
metadata:
  name: hotfix
  description: "Quick investigation and fix for production issues"

input:
  source: cli
  example: "fix panic in pipeline executor when step has nil context"

steps:
  - id: investigate
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Investigate this production issue: {{ input }}

        1. Search for related code paths
        2. Check recent commits that may have introduced the bug
        3. Identify the root cause
        4. Assess blast radius (what else could be affected)
    output_artifacts:
      - name: findings
        path: .wave/output/findings.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/findings.json
        schema_path: .wave/contracts/findings.schema.json
        on_failure: retry
        max_retries: 2

  - id: fix
    persona: craftsman
    dependencies: [investigate]
    memory:
      inject_artifacts:
        - step: investigate
          artifact: findings
          as: investigation
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Fix the production issue based on the investigation findings.

        Requirements:
        1. Apply the minimal fix - don't refactor surrounding code
        2. Add a regression test that would have caught this bug
        3. Ensure all existing tests still pass
        4. Document the fix in a commit-ready message
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
        max_retries: 3

  - id: verify
    persona: auditor
    dependencies: [fix]
    exec:
      type: prompt
      source: |
        Verify the hotfix:

        1. Is the fix minimal and focused? (no unrelated changes)
        2. Does the regression test actually test the reported issue?
        3. Are there other code paths with the same vulnerability?
        4. Is the fix safe for production deployment?

        Output a go/no-go recommendation with reasoning.
    output_artifacts:
      - name: verdict
        path: .wave/output/verdict.md
        type: markdown
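Both handover contracts in this pipeline use `on_failure: retry` with a `max_retries` budget. The intended semantics can be sketched as a loop that re-runs the step until its contract passes or the budget is spent; the function names here are illustrative, not Wave's internals:

```python
def run_with_retries(step, contract, max_retries):
    """Run `step` until `contract` accepts its output, up to 1 + max_retries attempts."""
    attempts = 0
    while True:
        output = step()
        attempts += 1
        if contract(output):
            return output, attempts
        if attempts > max_retries:
            raise RuntimeError(f"contract failed after {attempts} attempts")

# Toy step that satisfies its contract on the third attempt.
calls = {"n": 0}
def flaky_step():
    calls["n"] += 1
    return calls["n"]

output, attempts = run_with_retries(flaky_step, lambda o: o >= 3, max_retries=3)
```

With `must_pass: true`, exhausting the retries would fail the pipeline rather than continue.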
122
.wave/pipelines/housekeeping.yaml
Normal file
@@ -0,0 +1,122 @@
kind: WavePipeline
metadata:
  name: housekeeping
  description: "Audit and repair Zettelkasten link health, orphans, and index completeness"
  release: true

input:
  source: cli
  examples:
    - ""
    - "orphans only"
    - "dangling links only"

steps:
  - id: audit
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Audit the Zettelkasten for health issues.

        ## Steps

        1. **Find orphans**: Run `notesium list --orphans`
           - For each orphan, read the note to understand its content
           - Suggest which existing note it should be linked from

        2. **Find dangling links**: Run `notesium links --dangling`
           - For each dangling link, identify the source file and broken target
           - Suggest whether to retarget or remove the link

        3. **Get stats**: Run `notesium stats`
           - Record total notes, total links, label count

        4. **Check index completeness**: Read the index note
           - Compare sections listed in the index against known sections
           - Identify any sections or major topics missing from the index

        5. **Find dead ends**: Run `notesium links` to get all links
           - Identify notes that have outgoing links but no incoming links
           - These are reachable only via search, not by traversal

        ## Output

        Write the result as JSON to output/audit-report.json matching the contract schema.
        Include:
        - orphans: list with filename, title, suggested_connection
        - dangling_links: list with source_filename, target_filename, link_text, suggested_fix
        - stats: total_notes, total_links, label_notes, orphan_count, dangling_count, avg_links_per_note
        - index_gaps: list with section, description, suggested_entry_point
        - dead_ends: list with filename, title, outgoing_links count
        - timestamp: current ISO 8601 timestamp
    output_artifacts:
      - name: audit-report
        path: output/audit-report.json
        type: json
    handover:
      contract:
        type: json_schema
        source: output/audit-report.json
        schema_path: .wave/contracts/audit-report.schema.json
        on_failure: retry
        max_retries: 2

  - id: repair
    persona: scribe
    dependencies: [audit]
    memory:
      inject_artifacts:
        - step: audit
          artifact: audit-report
          as: audit
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Repair issues found in the Zettelkasten audit.

        Read the audit report: cat artifacts/audit

        ## Steps

        1. **Fix orphans**: For each orphan in the report:
           - Read the orphan note and the suggested_connection note
           - Add a contextual link from the suggested note to the orphan
           - Explain *why* the connection exists

        2. **Fix dangling links**: For each dangling link:
           - If suggested_fix is "retarget": find the correct target note and update the link
           - If suggested_fix is "remove": delete the broken link from the source file

        3. **Fix index gaps**: For each index gap:
           - Read the index note
           - Add the missing keyword → entry point mapping
           - Use the suggested_entry_point from the audit

        4. **Skip dead ends** if the count is large — only fix the most obvious ones
           (notes that clearly belong in an existing Folgezettel sequence)

        5. **Commit all fixes**:
           - `git add *.md`
           - Count the total issues fixed
           - `git commit -m "housekeeping: fix {n} issues"`

        6. **Write repair log** to output/repair-log.md:
           - Number of orphans linked
           - Number of dangling links fixed
           - Number of index entries added
           - List of files modified
    output_artifacts:
      - name: repair-log
        path: output/repair-log.md
        type: markdown
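The audit report's `stats` object includes a derived `avg_links_per_note` alongside the raw counts from `notesium stats`. A sketch of how those numbers relate; the sample values are made up:

```python
def build_stats(total_notes, total_links, label_notes, orphan_count, dangling_count):
    """Assemble the stats object for audit-report.json; avg is links per note."""
    avg = round(total_links / total_notes, 2) if total_notes else 0.0
    return {
        "total_notes": total_notes,
        "total_links": total_links,
        "label_notes": label_notes,
        "orphan_count": orphan_count,
        "dangling_count": dangling_count,
        "avg_links_per_note": avg,
    }

stats = build_stats(total_notes=200, total_links=450, label_notes=12,
                    orphan_count=7, dangling_count=3)
```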
117
.wave/pipelines/improve.yaml
Normal file
@@ -0,0 +1,117 @@
kind: WavePipeline
metadata:
  name: improve
  description: "Analyze code and apply targeted improvements"
  release: true

input:
  source: cli
  example: "improve error handling in internal/pipeline"

steps:
  - id: assess
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Assess the code for improvement opportunities: {{ input }}

        ## Assessment Areas

        1. **Code quality**: Readability, naming, structure, duplication
        2. **Error handling**: Missing checks, swallowed errors, unclear messages
        3. **Performance**: Unnecessary allocations, N+1 patterns, missing caching
        4. **Testability**: Hard-to-test code, missing interfaces, tight coupling
        5. **Robustness**: Missing nil checks, race conditions, resource leaks
        6. **Maintainability**: Complex functions, deep nesting, magic numbers

        For each finding, assess:
        - Impact: how much does fixing it improve the code?
        - Effort: how hard is the fix?
        - Risk: could the fix introduce regressions?
    output_artifacts:
      - name: assessment
        path: .wave/output/assessment.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/assessment.json
        schema_path: .wave/contracts/improvement-assessment.schema.json
        on_failure: retry
        max_retries: 2

  - id: implement
    persona: craftsman
    dependencies: [assess]
    memory:
      inject_artifacts:
        - step: assess
          artifact: assessment
          as: findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Apply the recommended improvements to the codebase.

        ## Rules

        1. **Start with quick wins**: Apply trivial/small effort fixes first
        2. **One improvement at a time**: Make each change, verify it compiles,
           then move to the next
        3. **Preserve behavior**: Improvements must not change external behavior
        4. **Run tests**: After each significant change, run relevant tests
        5. **Skip high-risk items**: Do not apply changes rated risk=high
           without explicit test coverage
        6. **Document changes**: Track what was changed and why

        Focus on the findings with the best impact-to-effort ratio.
        Do NOT refactor beyond what was identified in the assessment.
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
        max_retries: 3

  - id: verify
    persona: auditor
    dependencies: [implement]
    memory:
      inject_artifacts:
        - step: assess
          artifact: assessment
          as: original_findings
    exec:
      type: prompt
      source: |
        Verify the improvements were applied correctly.

        For each improvement that was applied:
        1. Is the fix correct and complete?
        2. Does it actually address the identified issue?
        3. Were any new issues introduced?
        4. Are tests still passing?

        For improvements NOT applied, confirm they were appropriately skipped.

        Produce a verification report covering:
        - Applied improvements (with before/after)
        - Skipped items (with justification)
        - New issues found (if any)
        - Overall quality delta assessment
    output_artifacts:
      - name: verification
        path: .wave/output/verification.md
        type: markdown
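The implement step is told to start with quick wins, skip risk=high items, and focus on the best impact-to-effort ratio. That ordering can be sketched as a filter-and-sort; the numeric impact/effort fields are assumptions, since the real assessment schema is in `improvement-assessment.schema.json`:

```python
def prioritize(findings):
    """Order findings by impact/effort ratio, dropping high-risk items."""
    safe = [f for f in findings if f["risk"] != "high"]
    return sorted(safe, key=lambda f: f["impact"] / f["effort"], reverse=True)

findings = [
    {"id": "F1", "impact": 3, "effort": 1, "risk": "low"},     # quick win, ratio 3.0
    {"id": "F2", "impact": 9, "effort": 8, "risk": "medium"},  # ratio ~1.1
    {"id": "F3", "impact": 10, "effort": 2, "risk": "high"},   # skipped: high risk
]
ordered = prioritize(findings)
```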
173
.wave/pipelines/ingest.yaml
Normal file
@@ -0,0 +1,173 @@
kind: WavePipeline
metadata:
  name: ingest
  description: "Ingest a web article into the Zettelkasten as bibliographic and permanent notes"
  release: true

input:
  source: cli
  examples:
    - "https://simonwillison.net/2026/Feb/7/software-factory/"
    - "https://langfuse.com/blog/2025-03-observability"
    - "https://arxiv.org/abs/2401.12345"

steps:
  - id: fetch
    persona: scout
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Fetch and extract structured content from a web article.

        URL: {{ input }}

        ## Steps

        1. Use WebFetch to retrieve the article content
        2. Extract:
           - title: article title
           - author: author name (look in byline, meta tags, about section)
           - date: publication date
           - summary: 50-3000 character summary of the article
           - key_concepts: list of key concepts with name and description
           - notable_quotes: direct quotes with context
           - author_year_key: generate AuthorYear key (e.g., Willison2026)
        3. If the author name is unclear, use the domain name as author

        ## Output

        Write the result as JSON to output/source-extract.json matching the contract schema.
    output_artifacts:
      - name: source-extract
        path: output/source-extract.json
        type: json
    handover:
      contract:
        type: json_schema
        source: output/source-extract.json
        schema_path: .wave/contracts/source-extract.schema.json
        on_failure: retry
        max_retries: 2

  - id: connect
    persona: navigator
    dependencies: [fetch]
    memory:
      inject_artifacts:
        - step: fetch
          artifact: source-extract
          as: source
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Find connections between extracted source content and existing Zettelkasten notes.

        Read the source extract: cat artifacts/source

        ## Steps

        1. For each key concept in the source, search for related notes:
           - `notesium lines --filter="concept_name"`
           - Read the most relevant matches
        2. Identify the Folgezettel neighborhood where new notes belong:
           - What section does this content fit in?
           - What would be the parent note?
           - What Folgezettel address should new notes get?
        3. Check if the index note needs updating
        4. Determine link directions (should new note link to existing, or existing link to new?)

        ## Output

        Write the result as JSON to output/connections.json matching the contract schema.
        Include:
        - source_title: title of the source being connected
        - related_notes: list of related existing notes with filename, title,
          folgezettel_address, relationship explanation, and link_direction
        - suggested_placements: where new notes should go in the Folgezettel
          with address, parent_note, section, rationale, and concept
        - index_update_needed: boolean
        - suggested_index_entries: new entries if needed
        - timestamp: current ISO 8601 timestamp
    output_artifacts:
      - name: connections
        path: output/connections.json
        type: json
    handover:
      contract:
        type: json_schema
        source: output/connections.json
        schema_path: .wave/contracts/connections.schema.json
        on_failure: retry
        max_retries: 2

  - id: create
    persona: scribe
    dependencies: [connect]
    memory:
      inject_artifacts:
        - step: fetch
          artifact: source-extract
          as: source
        - step: connect
          artifact: connections
          as: connections
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Create Zettelkasten notes from an ingested web source.

        Read the artifacts:
        cat artifacts/source
        cat artifacts/connections

        ## Steps

        1. **Create the bibliographic note**:
           - Use `notesium new` for the filename
           - Title: `# AuthorYear` using the author_year_key from the source extract
           - Content: source URL, author, date, summary, key quotes
           - One sentence per line

        2. **Create permanent notes** for key ideas that warrant standalone Zettel:
           - Use `notesium new` for each
           - Use the Folgezettel address from suggested_placements
           - Title: `# {address} {Concept-Name}`
           - Write in own words — transform, don't copy
           - Add contextual links to related notes (explain *why* the connection exists)
           - Link back to the bibliographic note

        3. **Update existing notes** if bidirectional links are suggested:
           - Add links from existing notes to the new permanent notes
           - Include contextual explanation for each link

        4. **Update the index note** if index_update_needed is true:
           - Add new keyword → entry point mappings

        5. **Commit all changes**:
           - `git add *.md`
           - `git commit -m "ingest: {AuthorYear key in lowercase}"`

        6. **Write summary** to output/ingest-summary.md:
           - Bibliographic note created (filename, title)
           - Permanent notes created (filename, title, Folgezettel address)
           - Links added to existing notes
           - Index updates made
    output_artifacts:
      - name: ingest-summary
        path: output/ingest-summary.md
        type: markdown
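The fetch step derives an `author_year_key` like `Willison2026` from the author and publication year. One plausible construction, assuming the key is the author's last name plus the year, with the domain used when the author is unclear (per step 3 of the prompt):

```python
def author_year_key(author, year):
    """Build an AuthorYear citation key, e.g. 'Willison2026'."""
    # Last whitespace-separated token; drop any TLD when the "author" is a domain.
    last = author.strip().split()[-1].split(".")[0]
    return f"{last.capitalize()}{year}"

key = author_year_key("Simon Willison", 2026)
domain_key = author_year_key("langfuse.com", 2025)  # domain fallback
```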
119
.wave/pipelines/onboard.yaml
Normal file
@@ -0,0 +1,119 @@
kind: WavePipeline
metadata:
  name: onboard
  description: "Generate a project onboarding guide for new contributors"
  release: true

input:
  source: cli
  example: "create an onboarding guide for this project"

steps:
  - id: survey
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Survey this project to build an onboarding guide: {{ input }}

        ## Survey Checklist

        1. **Project identity**: Find README, package manifests (go.mod, package.json),
           license, and config files. Determine language, framework, purpose.

        2. **Build system**: How to build, test, and run the project.
           Find Makefiles, scripts, CI configs, Dockerfiles.

        3. **Directory structure**: Map the top-level layout and key directories.
           What does each directory contain?

        4. **Architecture**: Identify the main components and how they interact.
           Find entry points (main.go, index.ts, etc.).

        5. **Dependencies**: List key dependencies and their purposes.
           Check go.mod, package.json, requirements.txt, etc.

        6. **Configuration**: Find environment variables, config files, feature flags.

        7. **Testing**: Where are tests? How to run them? What patterns are used?

        8. **Development workflow**: Find contributing guides, PR templates,
           commit conventions, branch strategies.

        9. **Documentation**: Where is documentation? Is it up to date?
    output_artifacts:
      - name: survey
        path: .wave/output/project-survey.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/project-survey.json
        schema_path: .wave/contracts/project-survey.schema.json
        on_failure: retry
        max_retries: 2

  - id: guide
    persona: philosopher
    dependencies: [survey]
    memory:
      inject_artifacts:
        - step: survey
          artifact: survey
          as: project_info
    exec:
      type: prompt
      source: |
        Write a comprehensive onboarding guide for new contributors.

        Using the injected project survey data, write a guide with these sections:

        # Onboarding Guide: [Project Name]

        ## Quick Start
        - Prerequisites (what to install)
        - Clone and build (exact commands)
        - Run tests (exact commands)
        - Run the project (exact commands)

        ## Project Overview
        - What this project does (2-3 sentences)
        - Key technologies and why they were chosen
        - High-level architecture (ASCII diagram)

        ## Directory Map
        - What each top-level directory contains
        - Where to find things (tests, configs, docs)

        ## Core Concepts
        - Key abstractions and terminology
        - How the main components interact
        - Data flow through the system

        ## Development Workflow
        - How to create a feature branch
        - Commit message conventions
        - How to run tests before pushing
        - PR process

        ## Common Tasks
        - "I want to add a new [feature/command/endpoint]" → where to start
        - "I want to fix a bug" → debugging approach
        - "I want to understand [component]" → where to look

        ## Helpful Resources
        - Documentation locations
        - Key files to read first
        - Related external docs

        Write for someone on their first day with this codebase.
        Be specific — use real paths, real commands, real examples.
    output_artifacts:
      - name: guide
        path: .wave/output/onboarding-guide.md
|
||||||
|
type: markdown
|
||||||
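The `survey` step's handover validates `.wave/output/project-survey.json` against a JSON Schema contract that is not included in this diff. As a rough, purely illustrative sketch of what a conforming artifact might look like (every field name below is an assumption, not taken from the real contract), built in Python so the valid-JSON requirement can be checked:

```python
import json

# Illustrative only: the real contract lives in
# .wave/contracts/project-survey.schema.json, which is not part of this diff.
# Every field name below is an assumption about what the survey step emits.
survey = {
    "project": {"name": "example-cli", "language": "Go", "purpose": "developer tooling"},
    "build": {"build": "make build", "test": "make test", "run": "make run"},
    "directories": [{"path": "internal/", "contains": "core packages"}],
    "dependencies": [{"name": "spf13/cobra", "purpose": "CLI framework"}],
    "testing": {"locations": ["**/*_test.go"], "patterns": ["table-driven"]},
    "docs": {"locations": ["README.md", "docs/"]},
}

# The handover contract requires the artifact to parse as valid JSON.
artifact = json.dumps(survey, indent=2)
```
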
208
.wave/pipelines/plan.yaml
Normal file
@@ -0,0 +1,208 @@
kind: WavePipeline
metadata:
  name: plan
  description: "Break down a feature into actionable tasks with structured exploration, planning, and review"
  release: true

input:
  source: cli
  example: "add webhook support for pipeline completion events"

steps:
  - id: explore
    persona: navigator
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        You are exploring a codebase to gather context for planning this feature or task:

        {{ input }}

        Your goal is to produce a rich, structured JSON exploration that a planner persona
        will use (without any other context) to break the work into tasks.

        ## Exploration Steps

        1. **Understand the request**: Summarize what is being asked and assess scope
           (small = 1-2 files, medium = 3-7 files, large = 8-15 files, epic = 16+ files).

        2. **Find related files**: Use Glob and Grep to find files related to the feature.
           For each file, note its path, relevance (primary/secondary/reference), why it
           matters, and key symbols (functions, types, constants) within it.

        3. **Identify patterns**: Use Read to examine key files. Document codebase patterns
           and conventions. Assign each a PAT-### ID and relevance level:
           - must_follow: Violating this would break consistency or cause bugs
           - should_follow: Strong convention but exceptions exist
           - informational: Good to know but not binding

        4. **Map affected modules**: Identify which packages/modules will be directly or
           indirectly affected. Note their dependencies and dependents.

        5. **Survey testing landscape**: Find test files related to the affected code.
           Note testing patterns (table-driven, mocks, fixtures, etc.) and coverage gaps.

        6. **Assess risks**: Identify potential risks (breaking changes, performance concerns,
           security implications). Rate severity (high/medium/low) and suggest mitigations.

        CRITICAL: Write ONLY the JSON object. No markdown wrapping, no explanation
        outside the file. The file must parse as valid JSON.
    output_artifacts:
      - name: exploration
        path: .wave/output/exploration.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/exploration.json
        schema_path: .wave/contracts/plan-exploration.schema.json
        on_failure: retry
        max_retries: 2

  - id: breakdown
    persona: planner
    dependencies: [explore]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: context
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        You are breaking down a feature into actionable implementation tasks.

        ## Input

        Feature request: {{ input }}

        Codebase exploration has already been done and injected into your workspace.
        It contains structured JSON with: related files, codebase patterns,
        affected modules, testing landscape, and identified risks. Use ALL of this
        information to inform your task breakdown.

        ## Task Breakdown Rules

        1. **Task IDs**: Use T01, T02, T03... format (zero-padded two digits).

        2. **Personas**: Assign each task to the most appropriate persona:
           - navigator: architecture decisions, exploration, planning
           - craftsman: implementation, coding, file creation
           - philosopher: review, analysis, quality assessment
           - auditor: security review, compliance checking
           - implementer: focused implementation tasks
           - reviewer: code review tasks

        3. **Dependencies**: Express as task IDs (e.g., ["T01", "T02"]).
           A task with no dependencies gets an empty array [].

        4. **Complexity**: S (< 1hr), M (1-4hr), L (4-8hr), XL (> 1 day).

        5. **Acceptance criteria**: Each task MUST have at least one concrete,
           verifiable acceptance criterion.

        6. **Affected files**: List files each task will create or modify.

        7. **Execution order**: Group tasks into phases. Tasks within a phase
           can run in parallel. Phase 1 has no dependencies, Phase 2 depends
           on Phase 1, etc.

        8. **Risks**: Note task-specific risks from the exploration.

        CRITICAL: Write ONLY the JSON object. No markdown wrapping, no explanation
        outside the file. The file must parse as valid JSON.
    output_artifacts:
      - name: tasks
        path: .wave/output/tasks.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/tasks.json
        schema_path: .wave/contracts/plan-tasks.schema.json
        on_failure: retry
        max_retries: 2

  - id: review
    persona: philosopher
    dependencies: [breakdown]
    memory:
      inject_artifacts:
        - step: explore
          artifact: exploration
          as: context
        - step: breakdown
          artifact: tasks
          as: task_list
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        You are reviewing a task breakdown plan for quality, completeness, and correctness.

        ## Input

        Feature request: {{ input }}

        Two artifacts have been injected into your workspace: the codebase exploration
        and the task breakdown plan. Read them BOTH before proceeding.

        The exploration contains: related files, patterns, affected modules, testing
        landscape, and risks. The task list contains: feature summary, tasks with
        dependencies and acceptance criteria, and execution order.

        ## Review Checklist

        For EACH task in the plan, evaluate and assign a status:
        - ok: Task is well-defined and ready to execute
        - needs_refinement: Good idea but needs clearer description or criteria
        - missing_details: Lacks acceptance criteria, affected files, or dependencies
        - overcomplicated: Should be split or simplified
        - wrong_persona: Different persona would be more appropriate
        - bad_dependencies: Dependencies are incorrect or missing

        For each issue found, assign a REV-### ID, severity, description, and suggestion.

        ## Cross-Cutting Concerns

        Look for concerns that span multiple tasks (CC-### IDs):
        - Testing strategy: Are tests planned? Do they follow codebase patterns?
        - Security: Are security implications addressed?
        - Performance: Will changes affect performance?
        - Backwards compatibility: Are breaking changes handled?
        - Documentation: Is documentation updated?

        ## Recommendations

        Provide actionable recommendations (REC-### IDs) with type:
        add_task, modify_task, remove_task, reorder, split_task, merge_tasks,
        change_persona, add_dependency

        ## Verdict

        Provide an overall verdict:
        - approve: Plan is ready to execute as-is
        - approve_with_notes: Plan is good but has minor issues to note
        - revise: Plan needs significant changes before execution

        CRITICAL: Write ONLY the JSON object. No markdown wrapping, no explanation
        outside the file. The file must parse as valid JSON.
    output_artifacts:
      - name: review
        path: .wave/output/plan-review.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/plan-review.json
        schema_path: .wave/contracts/plan-review.schema.json
        on_failure: retry
        max_retries: 2
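Rule 7 of the breakdown prompt asks the planner to group tasks into phases, where tasks in a phase share no dependency chain and can run in parallel. That grouping can be sketched as a longest-path computation over the dependency graph; the function name and input shape here are illustrative, not part of the pipeline:

```python
from collections import defaultdict

def group_into_phases(tasks):
    """Group tasks into phases: a task's phase is 1 + the max phase of its deps.

    `tasks` maps a task ID to its list of dependency IDs, e.g.
    {"T01": [], "T02": ["T01"]}. Tasks in the same phase can run in parallel.
    """
    phase = {}

    def resolve(tid, seen=()):
        if tid in phase:
            return phase[tid]
        if tid in seen:
            raise ValueError(f"dependency cycle involving {tid}")
        # A task with no dependencies lands in phase 1 (max defaults to 0).
        p = 1 + max((resolve(d, seen + (tid,)) for d in tasks[tid]), default=0)
        phase[tid] = p
        return p

    for tid in tasks:
        resolve(tid)

    by_phase = defaultdict(list)
    for tid, p in phase.items():
        by_phase[p].append(tid)
    return [sorted(by_phase[p]) for p in sorted(by_phase)]
```

For example, `{"T01": [], "T02": [], "T03": ["T01", "T02"], "T04": ["T03"]}` yields three phases, with T01 and T02 runnable in parallel.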
352
.wave/pipelines/prototype.yaml
Normal file
@@ -0,0 +1,352 @@
kind: WavePipeline
metadata:
  name: prototype
  description: "Prototype-driven development pipeline with 5 phases (9 steps): spec, docs, dummy, implement, pr-cycle"
  release: false

input:
  source: cli
  example: "build a REST API for user management with CRUD operations"

steps:
  # Phase 1: Spec - Requirements capture with speckit integration
  - id: spec
    persona: craftsman
    exec:
      type: prompt
      source: |
        You are beginning the specification phase of a prototype-driven development pipeline.

        Your goal is to analyze the project description and create a comprehensive feature specification.

        Project description: {{ input }}

        CRITICAL: Create both spec.md and requirements.md files:

        spec.md should contain the complete feature specification, including:
        - Feature overview and business value
        - User stories with acceptance criteria
        - Functional requirements
        - Success criteria and measurable outcomes
        - Constraints and assumptions

        requirements.md should contain extracted requirements (optional additional detail).

        Use speckit integration where available to enhance specification quality.

        The specification must be technology-agnostic and focused on user value.

        Create artifact.json with your results.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

    output_artifacts:
      - name: spec
        path: spec.md
        type: markdown
      - name: requirements
        path: requirements.md
        type: markdown
      - name: contract_data
        path: .wave/artifact.json
        type: json

    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/spec-phase.schema.json
        must_pass: true
        max_retries: 2

  # Phase 2: Docs - Generate runnable documentation from specification
  - id: docs
    persona: philosopher
    dependencies: [spec]

    memory:
      inject_artifacts:
        - step: spec
          artifact: spec
          as: input-spec.md

    exec:
      type: prompt
      source: |
        You are in the documentation phase of prototype-driven development.

        Your goal is to create comprehensive, runnable documentation from the specification.

        Create feature documentation from the injected specification that includes:
        - User-friendly explanation of the feature
        - Usage examples and scenarios
        - Integration guide for developers
        - Stakeholder summary for non-technical audiences

        Generate VitePress-compatible markdown that can be served as runnable documentation.

        CRITICAL: Create both feature-docs.md and stakeholder-summary.md files.

        Create artifact.json with your results.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

    output_artifacts:
      - name: feature-docs
        path: feature-docs.md
        type: markdown
      - name: stakeholder-summary
        path: stakeholder-summary.md
        type: markdown
      - name: contract_data
        path: .wave/artifact.json
        type: json

    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/docs-phase.schema.json
        must_pass: true
        max_retries: 2

  # Phase 3: Dummy - Build authentic functional prototype
  - id: dummy
    persona: craftsman
    dependencies: [docs]

    memory:
      inject_artifacts:
        - step: docs
          artifact: feature-docs
          as: feature-docs.md
        - step: spec
          artifact: spec
          as: spec.md

    exec:
      type: prompt
      source: |
        You are in the dummy/prototype phase of development.

        Your goal is to create a working prototype with authentic I/O handling but stub business logic.

        Create a functional prototype that:
        - Handles real input and output properly
        - Implements all user interfaces and endpoints
        - Uses placeholder/stub implementations for business logic
        - Can be run and demonstrated to stakeholders
        - Shows the complete user experience flow

        Focus on proving the interface design and user flows work correctly.

        CRITICAL: Create prototype/ directory with working code and interfaces.md with interface definitions.

        Create artifact.json with your results.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

    output_artifacts:
      - name: prototype
        path: prototype/
        type: binary
      - name: interface-definitions
        path: interfaces.md
        type: markdown
      - name: contract_data
        path: .wave/artifact.json
        type: json

    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/dummy-phase.schema.json
        must_pass: true
        max_retries: 2

  # Phase 4: Implement - Transition to full implementation
  - id: implement
    persona: craftsman
    dependencies: [dummy]

    memory:
      inject_artifacts:
        - step: spec
          artifact: spec
          as: spec.md
        - step: docs
          artifact: feature-docs
          as: feature-docs.md
        - step: dummy
          artifact: prototype
          as: prototype/

    exec:
      type: prompt
      source: |
        You are in the implementation phase - transitioning from prototype to production code.

        Your goal is to provide implementation guidance and begin real implementation:
        - Review all previous artifacts for implementation readiness
        - Create an implementation plan and checklist
        - Begin replacing stub logic with real implementations
        - Ensure test coverage for all functionality
        - Maintain compatibility with established interfaces

        Focus on production-quality code that fulfills the original specification.

        CRITICAL: Create implementation-plan.md and implementation-checklist.md files.

        Create artifact.json with your results.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

    output_artifacts:
      - name: implementation-plan
        path: implementation-plan.md
        type: markdown
      - name: progress-checklist
        path: implementation-checklist.md
        type: markdown
      - name: contract_data
        path: .wave/artifact.json
        type: json

    handover:
      contract:
        type: json_schema
        schema_path: .wave/contracts/implement-phase.schema.json
        must_pass: true
        max_retries: 2

  # Phase 5: PR-Cycle - Automated pull request lifecycle
  - id: pr-create
    persona: navigator
    dependencies: [implement]

    memory:
      inject_artifacts:
        - step: implement
          artifact: implementation-plan
          as: implementation-plan.md

    exec:
      type: prompt
      source: |
        You are creating a pull request for the implemented feature.

        Create a comprehensive pull request:
        - Clear PR title and description
        - Link to related issues
        - Include testing instructions
        - Add appropriate labels and reviewers
        - Request Copilot review

        Use the GitHub CLI to create the PR and configure the automated review workflow.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

    output_artifacts:
      - name: pr-info
        path: pr-info.json
        type: json
    handover:
      contract:
        type: json_schema
        source: pr-info.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  - id: pr-review
    persona: auditor
    dependencies: [pr-create]

    exec:
      type: prompt
      source: |
        Monitor and manage the PR review process.

        Poll for Copilot review completion and analyze the feedback.
        Prepare a response strategy for review comments.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

  - id: pr-respond
    persona: philosopher
    dependencies: [pr-review]

    exec:
      type: prompt
      source: |
        Analyze review comments and prepare thoughtful responses.

        Generate responses to review feedback that:
        - Address technical concerns professionally
        - Explain design decisions clearly
        - Propose solutions for identified issues

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

  - id: pr-fix
    persona: craftsman
    dependencies: [pr-respond]

    exec:
      type: prompt
      source: |
        Implement small fixes based on review feedback.

        For larger changes, create follow-up issues instead of expanding this PR.
        Focus on quick, low-risk improvements that address reviewer concerns.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite

  - id: pr-merge
    persona: navigator
    dependencies: [pr-fix]

    exec:
      type: prompt
      source: |
        Complete the PR lifecycle with a merge.

        Verify all checks pass, reviews are approved, and merge the PR.
        Clean up the branch and notify stakeholders of completion.

    workspace:
      mount:
        - source: .
          target: /project
          mode: readwrite
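The dummy phase's idea of "authentic I/O handling but stub business logic" can be sketched as a single handler: real validation and a production-shaped response, with the actual logic stubbed. The handler name, fields, and response shape below are hypothetical, not taken from any generated prototype:

```python
def create_user(payload):
    """Dummy-phase handler sketch (hypothetical endpoint): production-shaped
    validation and responses, with the actual business logic stubbed out."""
    # Authentic I/O handling: validate input exactly as production code would.
    if not isinstance(payload, dict) or "email" not in payload:
        return {"status": 400, "error": "email is required"}
    if "@" not in payload["email"]:
        return {"status": 400, "error": "invalid email"}
    # Stub business logic: no database, always "creates" the same record ID.
    return {"status": 201, "user": {"id": "stub-001", "email": payload["email"]}}
```

This is enough to demonstrate the complete request/response flow to stakeholders before any real persistence exists.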
531
.wave/pipelines/recinq.yaml
Normal file
@@ -0,0 +1,531 @@
kind: WavePipeline
|
||||||
|
metadata:
|
||||||
|
name: recinq
|
||||||
|
description: "Rethink and simplify code using divergent-convergent thinking (Double Diamond)"
|
||||||
|
release: true
|
||||||
|
|
||||||
|
input:
|
||||||
|
source: cli
|
||||||
|
example: "internal/pipeline"
|
||||||
|
|
||||||
|
# Pipeline structure implements the Double Diamond:
|
||||||
|
#
|
||||||
|
# gather → diverge → converge → probe → distill → simplify → report
|
||||||
|
# ╰─ Diamond 1 ─╯ ╰─ Diamond 2 ─╯ ╰implement╯
|
||||||
|
# (discover) (define) (develop) (deliver)
|
||||||
|
#
|
||||||
|
# Each step gets its own context window and cognitive mode.
|
||||||
|
# Fresh memory at every boundary — no mode-switching within a step.
|
||||||
|
|
||||||
|
steps:
|
||||||
|
- id: gather
|
||||||
|
persona: github-analyst
|
||||||
|
workspace:
|
||||||
|
mount:
|
||||||
|
- source: ./
|
||||||
|
target: /project
|
||||||
|
mode: readonly
|
||||||
|
exec:
|
||||||
|
type: prompt
|
||||||
|
source: |
|
||||||
|
CONTEXT GATHERING — parse input and fetch GitHub context if applicable.
|
||||||
|
|
||||||
|
Input: {{ input }}
|
||||||
|
|
||||||
|
## Instructions
|
||||||
|
|
||||||
|
Determine what kind of input this is:
|
||||||
|
|
||||||
|
1. **GitHub Issue URL**: Contains `github.com` and `/issues/`
|
||||||
|
- Extract owner/repo and issue number from the URL
|
||||||
|
- Run: `gh issue view <number> --repo <owner/repo> --json title,body,labels`
|
||||||
|
- Extract a `focus_hint` summarizing what should be simplified
|
||||||
|
|
||||||
|
2. **GitHub PR URL**: Contains `github.com` and `/pull/`
|
||||||
|
- Extract owner/repo and PR number from the URL
|
||||||
|
- Run: `gh pr view <number> --repo <owner/repo> --json title,body,labels,files`
|
||||||
|
- Extract a `focus_hint` summarizing what the PR is about
|
||||||
|
|
||||||
|
3. **Local path or description**: Anything else
|
||||||
|
- Set `input_type` to `"local"`
|
||||||
|
- Pass through the original input as-is
|
||||||
|
|
||||||
|
## Output
|
||||||
|
|
||||||
|
IMPORTANT: The output MUST be valid JSON. Do NOT include markdown fencing.
|
||||||
|
output_artifacts:
|
||||||
|
- name: context
|
||||||
|
path: .wave/output/context.json
|
||||||
|
type: json
|
||||||
|
handover:
|
||||||
|
contract:
|
||||||
|
type: json_schema
|
||||||
|
source: .wave/output/context.json
|
||||||
|
schema_path: .wave/contracts/recinq-context.schema.json
|
||||||
|
must_pass: true
|
||||||
|
on_failure: retry
|
||||||
|
max_retries: 2
|
||||||
|
|
||||||
|
# ── Diamond 1: Discover (DIVERGENT) ──────────────────────────────────
|
||||||
|
- id: diverge
|
||||||
|
persona: provocateur
|
||||||
|
dependencies: [gather]
|
||||||
|
memory:
|
||||||
|
inject_artifacts:
|
||||||
|
- step: gather
|
||||||
|
artifact: context
|
||||||
|
as: context
|
||||||
|
workspace:
|
||||||
|
mount:
|
||||||
|
- source: ./
|
||||||
|
target: /project
|
||||||
|
mode: readonly
|
||||||
|
exec:
|
||||||
|
type: prompt
|
||||||
|
source: |
|
||||||
|
DIVERGENT THINKING — cast the widest net to find simplification opportunities.
|
||||||
|
|
||||||
|
Target: {{ input }}
|
||||||
|
|
||||||
|
## Starting Point
|
||||||
|
|
||||||
|
The context artifact contains input context.
|
||||||
|
If `input_type` is `"issue"` or `"pr"`, the `focus_hint` tells you WHERE to start looking —
|
||||||
|
but do NOT limit yourself to what the issue describes. Use it as a seed, then expand outward.
|
||||||
|
Follow dependency chains, trace callers, explore adjacent modules. The issue author doesn't
|
||||||
|
know what they don't know — that's YOUR job.
|
||||||
|
If `input_type` is `"local"`, use the `original_input` field as the target path.
|
||||||
|
|
||||||
|
If input is empty or "." — analyze the whole project.
|
||||||
|
If input is a path — focus on that module/directory but consider its connections.
|
||||||
|
|
||||||
|
## Your Mission
|
||||||
|
|
||||||
|
Challenge EVERYTHING. Question every assumption. Hunt complexity.
|
||||||
|
|
||||||
|
## What to Look For
|
||||||
|
|
||||||
|
1. **Premature abstractions**: Interfaces with one implementation. Generic code used once.
|
||||||
|
"What if we just inlined this?"
|
||||||
|
|
||||||
|
2. **Unnecessary indirection**: Layers that pass-through without adding value.
|
||||||
|
Wrappers around wrappers. "How many hops to get to the actual logic?"
|
||||||
|
|
||||||
|
3. **Overengineering**: Configuration for things that never change. Plugins with one plugin.
|
||||||
|
Feature flags for features that are always on. "Is this complexity earning its keep?"
|
||||||
|
|
||||||
|
4. **YAGNI violations**: Code written for hypothetical future needs that never arrived.
|
||||||
|
"When was this last changed? Does anyone actually use this path?"
|
||||||
|
|
||||||
|
5. **Accidental complexity**: Things that are hard because of how they're built, not because
|
||||||
|
the problem is hard. "Could this be 10x simpler if we started over?"
|
||||||
|
|
||||||
|
6. **Copy-paste drift**: Similar-but-slightly-different code that should be unified or
|
||||||
|
intentionally differentiated. "Are these differences meaningful or accidental?"
|
||||||
|
|
||||||
|
7. **Dead weight**: Unused exports, unreachable code, obsolete comments, stale TODOs.
|
||||||
|
`grep -r` for references. If nothing uses it, flag it.
|
||||||
|
|
||||||
|
8. **Naming lies**: Names that don't match what the code actually does.
|
||||||
|
"Does this 'manager' actually manage anything?"
|
||||||
|
|
||||||
|
9. **Dependency gravity**: Modules that pull in everything. Import graphs that are too dense.
|
||||||
|
"What's the blast radius of changing this?"
|
||||||
|
|
||||||
|
## Evidence Requirements
|
||||||
|
|
||||||
|
For EVERY finding, gather concrete metrics:
|
||||||
|
- `wc -l` for line counts
|
||||||
|
- `grep -r` for usage/reference counts
|
||||||
|
- `git log --oneline <file> | wc -l` for change frequency
|
||||||
|
- List the actual files involved
|
||||||
|
|
||||||
|
## Output
|
||||||
|
|
||||||
|
Each finding gets a unique ID: DVG-001, DVG-002, etc.
|
||||||
|
|
||||||
|
Be AGGRESSIVE — flag everything suspicious. The convergent phase will filter.
|
||||||
|
It's better to have 30 findings with 10 false positives than 5 findings that miss
|
||||||
|
the big opportunities.
|
||||||
|
|
||||||
|
Include a metrics_summary with totals by category and severity, plus hotspot files
|
||||||
|
that appear in multiple findings.
|
||||||
|
output_artifacts:
|
||||||
|
- name: findings
|
||||||
|
path: .wave/output/divergent-findings.json
|
||||||
|
type: json
|
||||||
|
handover:
|
||||||
|
contract:
|
||||||
|
type: json_schema
|
||||||
|
source: .wave/output/divergent-findings.json
|
||||||
|
schema_path: .wave/contracts/divergent-findings.schema.json
|
||||||
|
must_pass: true
|
||||||
|
on_failure: retry
|
||||||
|
max_retries: 2
|
||||||
|
|
||||||
|
# ── Diamond 1: Define (CONVERGENT) ───────────────────────────────────
|
||||||
|
- id: converge
|
||||||
|
persona: validator
|
||||||
|
dependencies: [diverge]
|
||||||
|
memory:
|
||||||
|
inject_artifacts:
|
||||||
|
- step: diverge
|
||||||
|
artifact: findings
|
||||||
|
as: divergent_findings
|
||||||
|
workspace:
|
||||||
|
mount:
|
||||||
|
- source: ./
|
||||||
|
target: /project
|
||||||
|
mode: readonly
|
||||||
|
exec:
|
||||||
|
type: prompt
|
||||||
|
source: |
|
||||||
|
CONVERGENT VALIDATION — separate signal from noise.
|
||||||
|
|
||||||
|
This is a purely CONVERGENT step. Your job is analytical, not creative.
|
||||||
|
Judge every finding on technical merit alone. No speculation, no new ideas.
|
||||||
|
|
||||||
|
Target: {{ input }}
|
||||||
|
|
||||||
|
## For Every DVG-xxx Finding
|
||||||
|
|
||||||
|
1. **Read the actual code** cited as evidence — don't trust the provocateur's summary
|
||||||
|
2. **Verify the metrics** — check reference counts, line counts, change frequency
|
||||||
|
3. **Assess**: Is this a real problem or a false positive?
|
||||||
|
- Does the "premature abstraction" actually have a second implementation planned?
|
||||||
|
- Is the "dead weight" actually used via reflection or codegen?
|
||||||
|
- Is the "unnecessary indirection" actually providing error handling or logging?
|
||||||
|
4. **Classify**:
|
||||||
|
- `CONFIRMED` — real problem, metrics check out, code supports the claim
|
||||||
|
- `PARTIALLY_CONFIRMED` — real concern but overstated, or scope is narrower than claimed
|
||||||
|
- `REJECTED` — false positive, justified complexity, or incorrect metrics
|
||||||
|
5. **Explain**: For every classification, write WHY. For rejections, explain what
|
||||||
|
the provocateur got wrong.
|
||||||
|
|
||||||
|
Be RIGOROUS. The provocateur was told to be aggressive — your job is to be skeptical.
|
||||||
|
A finding that survives your scrutiny is genuinely worth addressing.
|
||||||
|
output_artifacts:
|
||||||
|
- name: validated_findings
|
||||||
|
path: .wave/output/validated-findings.json
|
||||||
|
type: json
|
||||||
|
handover:
|
||||||
|
contract:
|
||||||
|
type: json_schema
|
||||||
|
source: .wave/output/validated-findings.json
|
||||||
|
schema_path: .wave/contracts/validated-findings.schema.json
|
||||||
|
must_pass: true
|
||||||
|
on_failure: retry
|
||||||
|
max_retries: 2
|
||||||
|
|
||||||
|
  # ── Diamond 2: Develop (DIVERGENT) ───────────────────────────────────
  - id: probe
    persona: provocateur
    dependencies: [converge]
    memory:
      inject_artifacts:
        - step: diverge
          artifact: findings
          as: divergent_findings
        - step: converge
          artifact: validated_findings
          as: validated_findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        SECOND DIVERGENCE — probe deeper into confirmed findings.

        The first pass cast a wide net. The validator filtered it down.
        Now YOU go deeper on what survived. This is DIVERGENT thinking again —
        expand, connect, discover what the first pass missed.

        Focus on findings with status CONFIRMED or PARTIALLY_CONFIRMED.

        Target: {{ input }}

        ## Your Mission

        For each confirmed finding, probe OUTWARD:

        1. **Trace the dependency graph**: What calls this code? What does it call?
           If we simplify X, what happens to its callers and callees?
        2. **Find second-order effects**: If we remove abstraction A, does layer B
           also become unnecessary? Do test helpers simplify? Do error paths collapse?
        3. **Spot patterns across findings**: Do three findings all stem from the same
           over-abstraction? Is there a root cause that would address multiple DVGs at once?
        4. **Discover what was MISSED**: With the validated findings as context, look for
           related opportunities the first pass didn't see. The confirmed findings reveal
           the codebase's real pressure points — what else lurks nearby?
        5. **Challenge the rejections**: Were any findings rejected too hastily?
           Read the validator's rationale — do you disagree?

        ## Evidence Requirements

        Same standard as the first diverge pass:
        - `wc -l` for line counts
        - `grep -r` for usage/reference counts
        - `git log --oneline <file> | wc -l` for change frequency
        - Concrete file paths and code references

        ## Output

        Go DEEP. The first pass was wide, this pass is deep. Follow every thread.
    output_artifacts:
      - name: probed_findings
        path: .wave/output/probed-findings.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/probed-findings.json
        schema_path: .wave/contracts/probed-findings.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2
  # ── Diamond 2: Deliver (CONVERGENT) ──────────────────────────────────
  - id: distill
    persona: synthesizer
    dependencies: [probe]
    memory:
      inject_artifacts:
        - step: gather
          artifact: context
          as: context
        - step: converge
          artifact: validated_findings
          as: validated_findings
        - step: probe
          artifact: probed_findings
          as: probed_findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        FINAL CONVERGENCE — synthesize all findings into actionable proposals.

        This is the last convergent step before implementation. You have:
        - Validated findings (what survived scrutiny)
        - Probed findings (deeper connections, patterns, new discoveries)
        - Optional issue/PR context (from the gather step)

        Your job: synthesize everything into prioritized, implementable proposals.

        Target: {{ input }}

        ## Synthesis

        Transform the validated and probed findings into prioritized proposals:

        1. **Group by pattern**: Use the `patterns` from the probe step. Findings that share
           a root cause become a single proposal addressing the root cause.
        2. **Incorporate second-order effects**: The probe step found connections and cascading
           simplifications. Factor these into impact estimates.
        3. **Include new discoveries**: The probe step may have found new findings (DVG-NEW-xxx).
           These are pre-validated by the provocateur's second pass — include them.
        4. **Apply issue/PR context (if present)**: If the context artifact shows
           `input_type` is `"issue"` or `"pr"`, use the `focus_hint` as ONE input
           when assigning tiers. But do not discard strong proposals just because they
           fall outside the issue's scope. The best simplifications are often the ones
           the issue author didn't think to ask for.
        5. **80/20 analysis**: Which 20% of proposals yield 80% of the simplification?
        6. **Dependency ordering**: What must be done first?

        ## Output

        Do NOT write a markdown summary. Write the complete JSON object with every proposal fully populated.
    output_artifacts:
      - name: proposals
        path: .wave/output/convergent-proposals.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/convergent-proposals.json
        schema_path: .wave/contracts/convergent-proposals.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2
  # ── Implementation ───────────────────────────────────────────────────
  - id: simplify
    persona: craftsman
    dependencies: [distill]
    memory:
      inject_artifacts:
        - step: converge
          artifact: validated_findings
          as: validated_findings
        - step: distill
          artifact: proposals
          as: proposals
    workspace:
      type: worktree
      branch: "refactor/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        IMPLEMENTATION — apply the best simplification proposals.

        ## Process

        Apply ONLY tier-1 proposals, in dependency order.

        For each proposal (SMP-xxx):

        1. **Announce**: Print which proposal you're applying and what it does
        2. **Apply**: Make the code changes
        3. **Build**: `go build ./...` — must succeed
        4. **Test**: `go test ./...` — must pass
        5. **Commit**: If build and tests pass:
           ```bash
           git add <specific-files>
           git commit -m "refactor: <proposal title>

           Applies SMP-xxx: <brief description>
           Source findings: <DVG-xxx list>"
           ```
        6. **Revert if failing**: If tests fail after applying, revert:
           ```bash
           git checkout -- .
           ```
           Log the failure and move to the next proposal.

        ## Final Verification

        After all tier-1 proposals are applied (or attempted):
        1. Run the full test suite: `go test -race ./...`
        2. Run the build: `go build ./...`
        3. Summarize what was applied, what was skipped, and net lines changed

        ## Important

        - Each proposal gets its own atomic commit
        - Never combine proposals in a single commit
        - If a proposal depends on a failed proposal, skip it too
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
        max_retries: 3
    output_artifacts:
      - name: result
        path: .wave/output/result.md
        type: markdown
  # ── Reporting ────────────────────────────────────────────────────────
  - id: report
    persona: navigator
    dependencies: [simplify]
    memory:
      inject_artifacts:
        - step: distill
          artifact: proposals
          as: proposals
        - step: simplify
          artifact: result
          as: result
    workspace:
      type: worktree
      branch: "refactor/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        REPORT — compose a summary of what recinq found and applied.

        Run `git log --oneline` to see the commits on this branch.

        ## Compose the Report

        Write a markdown report containing:
        - **Summary**: One-paragraph overview of what recinq found and applied
        - **Proposals**: List of all proposals with their tier, impact, and status (applied/skipped/failed)
        - **Changes Applied**: Summary of commits made, files changed, net lines removed
        - **Remaining Opportunities**: Tier-2 and tier-3 proposals for future consideration
    output_artifacts:
      - name: report
        path: .wave/output/report.md
        type: markdown
  # ── Publish ─────────────────────────────────────────────────────────
  - id: publish
    persona: craftsman
    dependencies: [report, gather]
    memory:
      inject_artifacts:
        - step: gather
          artifact: context
          as: context
        - step: report
          artifact: report
          as: report
    workspace:
      type: worktree
      branch: "refactor/{{ pipeline_id }}"
    exec:
      type: prompt
      source: |
        PUBLISH — push the branch and create a pull request.

        ## Steps

        1. Push the branch:
           ```bash
           git push -u origin HEAD
           ```

        2. Create a pull request using the report as the body:
           ```bash
           gh pr create --title "refactor: $(git log --format=%s -1)" --body-file .wave/artifacts/report
           ```

        3. If the context artifact shows `input_type` is `"issue"` or `"pr"`,
           post the PR URL as a comment on the source:
           ```bash
           gh issue comment <number> --repo <repo> --body "Refactoring PR: <pr-url>"
           ```
           or for PRs:
           ```bash
           gh pr comment <number> --repo <repo> --body "Refactoring PR: <pr-url>"
           ```

        4. Write the JSON status report to the output artifact path.

        If any `gh` command fails, log the error and continue.
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2
    outcomes:
      - type: pr
        extract_from: .wave/output/pr-result.json
        json_path: .pr_url
        label: "Pull Request"
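The `outcomes` block above extracts the PR URL from the publish artifact via `json_path: .pr_url`. A minimal sketch of that kind of jq-style dotted-path lookup, using a hypothetical artifact payload (illustrative only, not Wave's actual extractor):

```python
import json

def extract_json_path(doc, path):
    """Resolve a jq-style dotted path like '.pr_url' against parsed JSON."""
    value = doc
    for key in path.lstrip(".").split("."):
        value = value[key]
    return value

# Hypothetical payload the publish step might write to .wave/output/pr-result.json
artifact = json.loads('{"pr_url": "https://example.com/acme/wave/pull/42", "pushed": true}')
print(extract_json_path(artifact, ".pr_url"))
# → https://example.com/acme/wave/pull/42
```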
133
.wave/pipelines/refactor.yaml
Normal file
@@ -0,0 +1,133 @@
kind: WavePipeline
metadata:
  name: refactor
  description: "Safe refactoring with comprehensive test coverage"
  release: true

input:
  source: cli
  example: "extract workspace manager from executor into its own package"

steps:
  - id: analyze
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze refactoring scope for: {{ input }}

        1. Identify all code that will be affected
        2. Map all callers/consumers of the code being refactored
        3. Find existing test coverage
        4. Identify integration points
    output_artifacts:
      - name: analysis
        path: .wave/output/refactor-analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/refactor-analysis.json
        schema_path: .wave/contracts/refactor-analysis.schema.json
        on_failure: retry
        max_retries: 2

  - id: test-baseline
    persona: craftsman
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: analyze
          artifact: analysis
          as: scope
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Before refactoring, ensure test coverage:

        1. Run existing tests and record baseline
        2. Add characterization tests for uncovered code paths
        3. Add integration tests for affected callers
        4. Document current behavior for comparison

        All tests must pass before proceeding.
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
        max_retries: 2
    output_artifacts:
      - name: baseline
        path: .wave/output/test-baseline.md
        type: markdown

  - id: refactor
    persona: craftsman
    dependencies: [test-baseline]
    memory:
      inject_artifacts:
        - step: analyze
          artifact: analysis
          as: scope
        - step: test-baseline
          artifact: baseline
          as: tests
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Perform the refactoring: {{ input }}

        Guidelines:
        1. Make atomic, reviewable changes
        2. Preserve all existing behavior
        3. Run tests after each significant change
        4. Update affected callers as needed
        5. Keep commits small and focused

        Do NOT change behavior — this is refactoring only.
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry
        max_retries: 3

  - id: verify
    persona: auditor
    dependencies: [refactor]
    exec:
      type: prompt
      source: |
        Verify the refactoring:

        1. Compare before/after behavior — any changes?
        2. Check test coverage didn't decrease
        3. Verify all callers still work correctly
        4. Look for missed edge cases
        5. Assess code quality improvement

        Output: PASS (safe to merge) or FAIL (issues found)
    output_artifacts:
      - name: verification
        path: .wave/output/verification.md
        type: markdown
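Several handover contracts above combine `must_pass`, `on_failure: retry`, and `max_retries`; notably, the refactor step sets `must_pass: false`, so a red test suite is retried but does not block the pipeline, leaving the final judgment to the verify step. A rough sketch of how a runner might interpret those three fields (an assumption about semantics, not Wave's actual engine):

```python
import subprocess

def run_contract(command, must_pass, max_retries):
    """Run a test_suite contract command, retrying on failure up to max_retries times."""
    for attempt in range(1 + max_retries):
        result = subprocess.run(command, shell=True)
        if result.returncode == 0:
            return True  # suite passed on this attempt
    # All attempts failed: only fatal when the contract is marked must_pass
    return not must_pass

# 'false' always exits non-zero; with must_pass=False the step still proceeds
print(run_contract("false", must_pass=False, max_retries=2))
# → True
```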
147
.wave/pipelines/security-scan.yaml
Normal file
@@ -0,0 +1,147 @@
kind: WavePipeline
metadata:
  name: security-scan
  description: "Comprehensive security vulnerability audit"
  release: true

input:
  source: cli
  example: "audit the authentication module for vulnerabilities"

steps:
  - id: scan
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Perform a security scan of: {{ input }}

        ## Scan Process

        1. **Map attack surface**: Identify all entry points (HTTP handlers, CLI args,
           file parsers, IPC endpoints, database queries, external API calls)

        2. **Check OWASP Top 10**:
           - Injection (SQL, command, LDAP, XPath)
           - Broken authentication/authorization
           - Sensitive data exposure
           - XML external entities (XXE)
           - Broken access control
           - Security misconfiguration
           - Cross-site scripting (XSS)
           - Insecure deserialization
           - Using components with known vulnerabilities
           - Insufficient logging and monitoring

        3. **Scan for common Go vulnerabilities** (if Go project):
           - Unchecked errors on security-critical operations
           - Race conditions on shared state
           - Path traversal via unsanitized file paths
           - Template injection
           - Unsafe use of reflect or unsafe packages

        4. **Check secrets and configuration**:
           - Hardcoded credentials, API keys, tokens
           - Insecure default configurations
           - Missing TLS/encryption
           - Overly permissive file permissions

        5. **Review dependency usage**:
           - Known vulnerable patterns in dependency usage
           - Outdated security practices
    output_artifacts:
      - name: scan_results
        path: .wave/output/security-scan.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/security-scan.json
        schema_path: .wave/contracts/security-scan.schema.json
        on_failure: retry
        max_retries: 2

  - id: deep-dive
    persona: auditor
    dependencies: [scan]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: scan_findings
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Perform a deep security analysis based on the injected scan results.

        For each finding with severity HIGH or CRITICAL:

        1. **Verify the finding**: Read the actual source code at the reported location.
           Confirm the vulnerability exists (eliminate false positives).

        2. **Trace the data flow**: Follow untrusted input from entry point to sink.
           Identify all transformations and validation (or lack thereof).

        3. **Assess exploitability**: Could an attacker realistically exploit this?
           What preconditions are needed? What's the impact?

        4. **Check for related patterns**: Search for similar vulnerable patterns
           elsewhere in the codebase using Grep.

        5. **Propose remediation**: Specific, actionable fix with code examples.
           Prioritize by effort vs. impact.

        For MEDIUM and LOW findings, do a lighter review confirming they're real.

        Produce a markdown report with these sections:
        - Executive Summary
        - Confirmed Vulnerabilities (with severity badges)
        - False Positives Eliminated
        - Data Flow Analysis
        - Remediation Plan (ordered by priority)
        - Related Patterns Found
    output_artifacts:
      - name: deep_dive
        path: .wave/output/security-deep-dive.md
        type: markdown

  - id: report
    persona: summarizer
    dependencies: [deep-dive]
    memory:
      inject_artifacts:
        - step: scan
          artifact: scan_results
          as: scan_findings
        - step: deep-dive
          artifact: deep_dive
          as: analysis
    exec:
      type: prompt
      source: |
        Synthesize the injected scan findings and deep-dive analysis into a final report.

        Create a concise, actionable security report:

        1. **Risk Score**: Overall risk rating (CRITICAL/HIGH/MEDIUM/LOW) with justification
        2. **Top 3 Issues**: The most important findings to fix immediately
        3. **Quick Wins**: Low-effort fixes that improve security posture
        4. **Remediation Roadmap**: Ordered list of fixes by priority
        5. **What's Good**: Security practices already in place

        Format as a clean markdown report suitable for sharing with the team.
    output_artifacts:
      - name: report
        path: .wave/output/security-report.md
        type: markdown
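The report step above condenses individual findings into one overall risk rating. The pipeline leaves the exact aggregation rule to the summarizer persona; one plausible convention, shown here purely as an assumption, is to take the worst severity among findings the deep-dive confirmed:

```python
SEVERITY_ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def overall_risk(findings):
    """Overall rating = worst severity among confirmed findings, LOW if none."""
    confirmed = [f["severity"] for f in findings if f.get("confirmed")]
    if not confirmed:
        return "LOW"
    return max(confirmed, key=SEVERITY_ORDER.index)

# Hypothetical findings as the scan/deep-dive steps might report them
findings = [
    {"id": "SEC-001", "severity": "MEDIUM", "confirmed": True},
    {"id": "SEC-002", "severity": "CRITICAL", "confirmed": False},  # false positive
    {"id": "SEC-003", "severity": "HIGH", "confirmed": True},
]
print(overall_risk(findings))
# → HIGH
```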
57
.wave/pipelines/smoke-test.yaml
Normal file
@@ -0,0 +1,57 @@
kind: WavePipeline
metadata:
  name: smoke-test
  description: "Minimal pipeline for testing contracts and artifacts"
  release: false

input:
  source: cli
  example: "verify contract validation works"

steps:
  - id: analyze
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze the codebase for: {{ input }}

        Provide a structured analysis of your findings.
    output_artifacts:
      - name: analysis
        path: .wave/output/analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/analysis.json
        schema_path: .wave/contracts/smoke-test.schema.json
        on_failure: retry
        max_retries: 2

  - id: summarize
    persona: summarizer
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: analyze
          artifact: analysis
          as: analysis_data
    exec:
      type: prompt
      source: |
        Using the injected analysis data, write a brief markdown summary.

        Include:
        - What was analyzed
        - Key findings
        - Recommended next steps
    output_artifacts:
      - name: summary
        path: .wave/output/summary.md
        type: markdown
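The smoke-test exists purely to exercise the `json_schema` contract path. As a stand-in for full schema validation, here is a minimal check that an artifact parses and carries some hypothetical required top-level keys (the real contract uses the schema file referenced above):

```python
import json

def check_contract(artifact_text, required_keys):
    """Minimal json_schema-style check: artifact parses and has the required keys."""
    try:
        doc = json.loads(artifact_text)
    except json.JSONDecodeError:
        return False
    return all(key in doc for key in required_keys)

# Hypothetical analysis artifact and required keys
analysis = '{"target": "contract validation", "findings": [], "next_steps": []}'
print(check_contract(analysis, ["target", "findings", "next_steps"]))
# → True
print(check_contract("not json", ["target"]))
# → False
```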
234
.wave/pipelines/speckit-flow.yaml
Normal file
@@ -0,0 +1,234 @@
kind: WavePipeline
metadata:
  name: speckit-flow
  description: "Specification-driven feature development using the full speckit workflow"
  release: true

requires:
  skills:
    - speckit
  tools:
    - git
    - gh

input:
  source: cli
  example: "add user authentication with JWT tokens"
  schema:
    type: string
    description: "Natural language feature description to specify and implement"

steps:
  - id: specify
    persona: implementer
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
      base: main
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/specify.md
    output_artifacts:
      - name: spec-status
        path: .wave/output/specify-status.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/specify-status.json
        schema_path: .wave/contracts/specify-status.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  - id: clarify
    persona: implementer
    dependencies: [specify]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/clarify.md
    output_artifacts:
      - name: clarify-status
        path: .wave/output/clarify-status.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/clarify-status.json
        schema_path: .wave/contracts/clarify-status.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  - id: plan
    persona: implementer
    dependencies: [clarify]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/plan.md
    output_artifacts:
      - name: plan-status
        path: .wave/output/plan-status.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/plan-status.json
        schema_path: .wave/contracts/plan-status.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  - id: tasks
    persona: implementer
    dependencies: [plan]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/tasks.md
    output_artifacts:
      - name: tasks-status
        path: .wave/output/tasks-status.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/tasks-status.json
        schema_path: .wave/contracts/tasks-status.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  - id: checklist
    persona: implementer
    dependencies: [tasks]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/checklist.md
    output_artifacts:
      - name: checklist-status
        path: .wave/output/checklist-status.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/checklist-status.json
        schema_path: .wave/contracts/checklist-status.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  - id: analyze
    persona: implementer
    dependencies: [checklist]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/analyze.md
    output_artifacts:
      - name: analysis-report
        path: .wave/output/analysis-report.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/analysis-report.json
        schema_path: .wave/contracts/analysis-report.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2

  - id: implement
    persona: craftsman
    dependencies: [analyze]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/implement.md
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: true
        on_failure: retry
        max_retries: 3
    compaction:
      trigger: "token_limit_80%"
      persona: summarizer

  - id: create-pr
    persona: craftsman
    dependencies: [implement]
    memory:
      inject_artifacts:
        - step: specify
          artifact: spec-status
          as: spec_info
    workspace:
      type: worktree
      branch: "{{ pipeline_id }}"
    exec:
      type: prompt
      source_path: .wave/prompts/speckit-flow/create-pr.md
    output_artifacts:
      - name: pr-result
        path: .wave/output/pr-result.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/pr-result.json
        schema_path: .wave/contracts/pr-result.schema.json
        must_pass: true
        on_failure: retry
        max_retries: 2
    outcomes:
      - type: pr
        extract_from: .wave/output/pr-result.json
        json_path: .pr_url
        label: "Pull Request"
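The steps above form a strict chain, each declaring its predecessor in `dependencies`. A toy resolver showing how such declarations determine execution order (a sketch, not Wave's scheduler; it assumes the declared graph is acyclic):

```python
def run_order(steps):
    """Resolve step execution order from 'dependencies' lists (simple topo sort)."""
    ordered, done = [], set()
    while len(ordered) < len(steps):
        for step_id, deps in steps.items():
            if step_id not in done and all(d in done for d in deps):
                ordered.append(step_id)
                done.add(step_id)
    return ordered

# The speckit-flow step graph as declared above
steps = {
    "specify": [], "clarify": ["specify"], "plan": ["clarify"],
    "tasks": ["plan"], "checklist": ["tasks"], "analyze": ["checklist"],
    "implement": ["analyze"], "create-pr": ["implement"],
}
print(run_order(steps))
# → ['specify', 'clarify', 'plan', 'tasks', 'checklist', 'analyze', 'implement', 'create-pr']
```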
168
.wave/pipelines/supervise.yaml
Normal file
@@ -0,0 +1,168 @@
kind: WavePipeline
metadata:
  name: supervise
  description: "Review work quality and process quality, including claudit session transcripts"

input:
  source: cli
  example: "last pipeline run"

steps:
  - id: gather
    persona: supervisor
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Gather evidence for supervision of: {{ input }}

        ## Smart Input Detection

        Determine what to inspect based on the input:
        - **Empty or "last pipeline run"**: Find the most recent pipeline run via `.wave/workspaces/` timestamps and recent git activity
        - **"current pr" or "PR #N"**: Inspect the current or specified pull request (`git log`, `gh pr view`)
        - **Branch name**: Inspect all commits on that branch vs main
        - **Free-form description**: Use grep/git log to find relevant recent work

        ## Evidence Collection

        1. **Git history**: Recent commits with diffs (`git log --stat`, `git diff`)
        2. **Session transcripts**: Check for claudit git notes (`git notes show <commit>` for each relevant commit). Summarize what happened in each session — tool calls, approach taken, detours, errors
        3. **Pipeline artifacts**: Scan `.wave/workspaces/` for the relevant pipeline run. List all output artifacts and their contents
        4. **Test state**: Run `go test ./...` to capture current test status
        5. **Branch/PR context**: Branch name, ahead/behind status, PR state if applicable

        ## Output

        Produce a comprehensive evidence bundle as structured JSON. Include all raw
        evidence — the evaluation step will interpret it.

        Be thorough in transcript analysis — the process quality evaluation depends
        heavily on understanding what the agent actually did vs what it should have done.
    output_artifacts:
      - name: evidence
        path: .wave/output/supervision-evidence.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/supervision-evidence.json
        schema_path: .wave/contracts/supervision-evidence.schema.json
        on_failure: retry
        max_retries: 2

  - id: evaluate
    persona: supervisor
    dependencies: [gather]
    memory:
      inject_artifacts:
        - step: gather
          artifact: evidence
          as: evidence
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Evaluate the work quality based on gathered evidence.

        The gathered evidence has been injected into your workspace. Read it first.

        ## Output Quality Assessment

        For each dimension, score as excellent/good/adequate/poor with specific findings:

        1. **Correctness**: Does the code do what was intended? Check logic, edge cases, error handling
        2. **Completeness**: Are all requirements addressed? Any gaps or TODOs left?
        3. **Test coverage**: Are changes adequately tested? Run targeted tests if needed
        4. **Code quality**: Does it follow project conventions? Clean abstractions? Good naming?

        ## Process Quality Assessment

        Using the session transcripts from the evidence:

        1. **Efficiency**: Was the approach direct? Count unnecessary file reads, repeated searches, abandoned approaches visible in transcripts
        2. **Scope discipline**: Did the agent stay on task? Flag any scope creep — changes unrelated to the original goal
        3. **Tool usage**: Were the right tools used? (e.g., Read vs Bash cat, Glob vs find)
        4. **Token economy**: Was the work concise or bloated? Excessive context gathering? Redundant operations?

        ## Synthesis

        - Overall score (excellent/good/adequate/poor)
        - Key strengths (what went well)
        - Key concerns (what needs attention)

        Produce the evaluation as a structured JSON result.
    output_artifacts:
      - name: evaluation
        path: .wave/output/supervision-evaluation.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/supervision-evaluation.json
        schema_path: .wave/contracts/supervision-evaluation.schema.json
        on_failure: retry
        max_retries: 2

  - id: verdict
    persona: reviewer
    dependencies: [evaluate]
    memory:
      inject_artifacts:
        - step: gather
          artifact: evidence
          as: evidence
        - step: evaluate
          artifact: evaluation
          as: evaluation
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Synthesize a final supervision verdict.

        The gathered evidence and evaluation have been injected into your workspace.
        Read them both before proceeding.

        ## Independent Verification

        1. Run the test suite: `go test ./...`
        2. Cross-check evaluation claims against actual code
        3. Verify any specific concerns raised in the evaluation

        ## Verdict

        Issue one of:
        - **APPROVE**: Work is good quality, process was efficient. Ship it.
        - **PARTIAL_APPROVE**: Output is acceptable but process had issues worth noting for improvement.
        - **REWORK**: Significant issues found that need to be addressed before the work is acceptable.

        ## Action Items (if REWORK or PARTIAL_APPROVE)

        For each issue requiring action:
        - Specific file and line references
        - What needs to change and why
        - Priority (must-fix vs should-fix)

        ## Lessons Learned

        What should be done differently next time? Process improvements, common pitfalls observed.

        Produce the verdict as a markdown report with clear sections:
        ## Verdict, ## Output Quality, ## Process Quality, ## Action Items, ## Lessons Learned
    output_artifacts:
      - name: verdict
        path: .wave/output/supervision-verdict.md
        type: markdown
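The `gather` step's handover validates its bundle against `.wave/contracts/supervision-evidence.schema.json`, which is not shown here. As a sketch only, a bundle covering the five evidence-collection items might be shaped like the following, where every field name and value is a hypothetical illustration rather than the actual contract:

```json
{
  "target": "last pipeline run",
  "git_history": [
    { "commit": "abc1234", "summary": "example commit", "files_changed": 3 }
  ],
  "session_transcripts": [
    { "commit": "abc1234", "tool_calls": 42, "detours": ["abandoned search approach"] }
  ],
  "pipeline_artifacts": [".wave/output/example-artifact.json"],
  "test_state": { "command": "go test ./...", "passed": true },
  "branch_context": { "branch": "example-branch", "ahead": 4, "behind": 0 }
}
```

Whatever the real schema requires, keeping the bundle raw (as the prompt instructs) lets the `evaluate` and `verdict` steps interpret the same evidence independently.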
97
.wave/pipelines/test-gen.yaml
Normal file
@@ -0,0 +1,97 @@
kind: WavePipeline
metadata:
  name: test-gen
  description: "Generate comprehensive test coverage"
  release: true

input:
  source: cli
  example: "internal/pipeline"

steps:
  - id: analyze-coverage
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze test coverage for: {{ input }}

        1. Run coverage analysis using the project test command with coverage flags
        2. Identify uncovered functions and branches
        3. Find edge cases not tested
        4. Map dependencies that need mocking
    output_artifacts:
      - name: coverage
        path: .wave/output/coverage-analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/coverage-analysis.json
        schema_path: .wave/contracts/coverage-analysis.schema.json
        on_failure: retry
        max_retries: 2

  - id: generate-tests
    persona: craftsman
    dependencies: [analyze-coverage]
    memory:
      inject_artifacts:
        - step: analyze-coverage
          artifact: coverage
          as: gaps
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Generate tests to improve coverage for: {{ input }}

        Requirements:
        1. Write table-driven tests where appropriate
        2. Cover happy path, error cases, and edge cases
        3. Use descriptive test names (TestFunction_Condition_Expected)
        4. Add mocks for external dependencies
        5. Include benchmarks for performance-critical code

        Follow existing test patterns in the codebase.
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry
        max_retries: 3
    output_artifacts:
      - name: tests
        path: .wave/output/generated-tests.md
        type: markdown

  - id: verify-coverage
    persona: auditor
    dependencies: [generate-tests]
    exec:
      type: prompt
      source: |
        Verify the generated tests:

        1. Run coverage again — did it improve?
        2. Are tests meaningful (not just line coverage)?
        3. Do tests actually catch bugs?
        4. Are mocks appropriate and minimal?
        5. Is test code maintainable?

        Output: coverage delta and quality assessment
    output_artifacts:
      - name: verification
        path: .wave/output/coverage-verification.md
        type: markdown
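The `analyze-coverage` step's handover validates against `.wave/contracts/coverage-analysis.schema.json`, which this commit does not show. A hypothetical artifact matching the four analysis items in the prompt — every field name and value here is an assumption, not the real contract — could look like:

```json
{
  "package": "internal/pipeline",
  "coverage_percent": 61.5,
  "uncovered_functions": [
    {
      "name": "exampleFunc",
      "file": "internal/pipeline/example.go",
      "reason": "error branch never exercised"
    }
  ],
  "untested_edge_cases": ["empty input", "retry limit exceeded"],
  "mock_candidates": ["external API client", "filesystem access"]
}
```

Injected into `generate-tests` as `gaps`, a bundle like this would give the craftsman persona concrete targets, while the `test_suite` contract with `must_pass: false` lets imperfect generated tests through for the `verify-coverage` auditor to assess.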