librenotes/.wave/pipelines/recinq.yaml
Michael Czechowski fc24f9a8ab Add Wave general-purpose pipelines
ADR, changelog, code-review, debug, doc-sync, explain, feature,
hotfix, improve, onboard, plan, prototype, refactor, security-scan,
smoke-test, speckit-flow, supervise, test-gen, and more.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 17:02:36 +01:00


kind: WavePipeline
metadata:
name: recinq
description: "Rethink and simplify code using divergent-convergent thinking (Double Diamond)"
release: true
input:
source: cli
example: "internal/pipeline"
# Pipeline structure implements the Double Diamond:
#
# gather → diverge → converge → probe → distill → simplify → report
# ╰─ Diamond 1 ─╯ ╰─ Diamond 2 ─╯ ╰implement╯
# (discover) (define) (develop) (deliver)
#
# Each step gets its own context window and cognitive mode.
# Fresh memory at every boundary — no mode-switching within a step.
steps:
- id: gather
persona: github-analyst
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
CONTEXT GATHERING — parse input and fetch GitHub context if applicable.
Input: {{ input }}
## Instructions
Determine what kind of input this is:
1. **GitHub Issue URL**: Contains `github.com` and `/issues/`
- Extract owner/repo and issue number from the URL
- Run: `gh issue view <number> --repo <owner/repo> --json title,body,labels`
- Set `input_type` to `"issue"` and extract a `focus_hint` summarizing what should be simplified
2. **GitHub PR URL**: Contains `github.com` and `/pull/`
- Extract owner/repo and PR number from the URL
- Run: `gh pr view <number> --repo <owner/repo> --json title,body,labels,files`
- Set `input_type` to `"pr"` and extract a `focus_hint` summarizing what the PR is about
3. **Local path or description**: Anything else
- Set `input_type` to `"local"`
- Pass through the original input as-is in `original_input`
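The URL handling in cases 1 and 2 can be sketched in shell. The URL below is a hypothetical example, not pipeline input:
```bash
# Sketch: extract owner/repo and the issue/PR number from a GitHub URL.
# The URL is a hypothetical example.
url="https://github.com/acme/librenotes/issues/42"
repo=$(echo "$url" | sed -E 's#https?://github\.com/([^/]+/[^/]+)/.*#\1#')
number=$(echo "$url" | sed -E 's#.*/(issues|pull)/([0-9]+).*#\2#')
echo "$repo $number"
# Then, for example: gh issue view "$number" --repo "$repo" --json title,body,labels
```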
## Output
Write a single JSON object containing at least `input_type`, `original_input`, and `focus_hint` (the fields downstream steps read).
IMPORTANT: The output MUST be valid JSON. Do NOT include markdown fencing.
output_artifacts:
- name: context
path: .wave/output/context.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/context.json
schema_path: .wave/contracts/recinq-context.schema.json
must_pass: true
on_failure: retry
max_retries: 2
# ── Diamond 1: Discover (DIVERGENT) ──────────────────────────────────
- id: diverge
persona: provocateur
dependencies: [gather]
memory:
inject_artifacts:
- step: gather
artifact: context
as: context
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
DIVERGENT THINKING — cast the widest net to find simplification opportunities.
Target: {{ input }}
## Starting Point
The context artifact contains input context.
If `input_type` is `"issue"` or `"pr"`, the `focus_hint` tells you WHERE to start looking —
but do NOT limit yourself to what the issue describes. Use it as a seed, then expand outward.
Follow dependency chains, trace callers, explore adjacent modules. The issue author doesn't
know what they don't know — that's YOUR job.
If `input_type` is `"local"`, use the `original_input` field as the target path.
If input is empty or "." — analyze the whole project.
If input is a path — focus on that module/directory but consider its connections.
## Your Mission
Challenge EVERYTHING. Question every assumption. Hunt complexity.
## What to Look For
1. **Premature abstractions**: Interfaces with one implementation. Generic code used once.
"What if we just inlined this?"
2. **Unnecessary indirection**: Layers that pass-through without adding value.
Wrappers around wrappers. "How many hops to get to the actual logic?"
3. **Overengineering**: Configuration for things that never change. Plugins with one plugin.
Feature flags for features that are always on. "Is this complexity earning its keep?"
4. **YAGNI violations**: Code written for hypothetical future needs that never arrived.
"When was this last changed? Does anyone actually use this path?"
5. **Accidental complexity**: Things that are hard because of how they're built, not because
the problem is hard. "Could this be 10x simpler if we started over?"
6. **Copy-paste drift**: Similar-but-slightly-different code that should be unified or
intentionally differentiated. "Are these differences meaningful or accidental?"
7. **Dead weight**: Unused exports, unreachable code, obsolete comments, stale TODOs.
`grep -r` for references. If nothing uses it, flag it.
8. **Naming lies**: Names that don't match what the code actually does.
"Does this 'manager' actually manage anything?"
9. **Dependency gravity**: Modules that pull in everything. Import graphs that are too dense.
"What's the blast radius of changing this?"
## Evidence Requirements
For EVERY finding, gather concrete metrics:
- `wc -l` for line counts
- `grep -r` for usage/reference counts
- `git log --oneline <file> | wc -l` for change frequency
- List the actual files involved
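Taken together, the evidence commands look like this sketch, which uses a throwaway repo so it runs as-is; the file name and the symbol `unused` are hypothetical stand-ins for real project code:
```bash
# Throwaway repo so the evidence commands are runnable as-is.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf 'package main\n\nfunc unused() {}\n' > sample.go
git add sample.go
git -c user.email=a@b -c user.name=t commit -qm "add sample"
lines=$(wc -l < sample.go)                           # line count
refs=$(grep -r "unused" --include='*.go' . | wc -l)  # reference count
changes=$(git log --oneline -- sample.go | wc -l)    # change frequency
echo "$lines $refs $changes"
```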
## Output
Each finding gets a unique ID: DVG-001, DVG-002, etc.
Be AGGRESSIVE — flag everything suspicious. The convergent phase will filter.
It's better to have 30 findings with 10 false positives than 5 findings that miss
the big opportunities.
Include a metrics_summary with totals by category and severity, plus hotspot files
that appear in multiple findings.
output_artifacts:
- name: findings
path: .wave/output/divergent-findings.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/divergent-findings.json
schema_path: .wave/contracts/divergent-findings.schema.json
must_pass: true
on_failure: retry
max_retries: 2
# ── Diamond 1: Define (CONVERGENT) ───────────────────────────────────
- id: converge
persona: validator
dependencies: [diverge]
memory:
inject_artifacts:
- step: diverge
artifact: findings
as: divergent_findings
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
CONVERGENT VALIDATION — separate signal from noise.
This is a purely CONVERGENT step. Your job is analytical, not creative.
Judge every finding on technical merit alone. No speculation, no new ideas.
Target: {{ input }}
## For Every DVG-xxx Finding
1. **Read the actual code** cited as evidence — don't trust the provocateur's summary
2. **Verify the metrics** — check reference counts, line counts, change frequency
3. **Assess**: Is this a real problem or a false positive?
- Does the "premature abstraction" actually have a second implementation planned?
- Is the "dead weight" actually used via reflection or codegen?
- Is the "unnecessary indirection" actually providing error handling or logging?
4. **Classify**:
- `CONFIRMED` — real problem, metrics check out, code supports the claim
- `PARTIALLY_CONFIRMED` — real concern but overstated, or scope is narrower than claimed
- `REJECTED` — false positive, justified complexity, or incorrect metrics
5. **Explain**: For every classification, write WHY. For rejections, explain what
the provocateur got wrong.
Be RIGOROUS. The provocateur was told to be aggressive — your job is to be skeptical.
A finding that survives your scrutiny is genuinely worth addressing.
output_artifacts:
- name: validated_findings
path: .wave/output/validated-findings.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/validated-findings.json
schema_path: .wave/contracts/validated-findings.schema.json
must_pass: true
on_failure: retry
max_retries: 2
# ── Diamond 2: Develop (DIVERGENT) ───────────────────────────────────
- id: probe
persona: provocateur
dependencies: [converge]
memory:
inject_artifacts:
- step: diverge
artifact: findings
as: divergent_findings
- step: converge
artifact: validated_findings
as: validated_findings
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
SECOND DIVERGENCE — probe deeper into confirmed findings.
The first pass cast a wide net. The validator filtered it down.
Now YOU go deeper on what survived. This is DIVERGENT thinking again —
expand, connect, discover what the first pass missed.
Focus on findings with status CONFIRMED or PARTIALLY_CONFIRMED.
Target: {{ input }}
## Your Mission
For each confirmed finding, probe OUTWARD:
1. **Trace the dependency graph**: What calls this code? What does it call?
If we simplify X, what happens to its callers and callees?
2. **Find second-order effects**: If we remove abstraction A, does layer B
also become unnecessary? Do test helpers simplify? Do error paths collapse?
3. **Spot patterns across findings**: Do three findings all stem from the same
over-abstraction? Is there a root cause that would address multiple DVGs at once?
4. **Discover what was MISSED**: With the validated findings as context, look for
related opportunities the first pass didn't see. The confirmed findings reveal
the codebase's real pressure points — what else lurks nearby?
5. **Challenge the rejections**: Were any findings rejected too hastily?
Read the validator's rationale — do you disagree?
## Evidence Requirements
Same standard as the first diverge pass:
- `wc -l` for line counts
- `grep -r` for usage/reference counts
- `git log --oneline <file> | wc -l` for change frequency
- Concrete file paths and code references
## Output
Go DEEP. The first pass was wide; this pass is deep. Follow every thread.
output_artifacts:
- name: probed_findings
path: .wave/output/probed-findings.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/probed-findings.json
schema_path: .wave/contracts/probed-findings.schema.json
must_pass: true
on_failure: retry
max_retries: 2
# ── Diamond 2: Deliver (CONVERGENT) ──────────────────────────────────
- id: distill
persona: synthesizer
dependencies: [probe]
memory:
inject_artifacts:
- step: gather
artifact: context
as: context
- step: converge
artifact: validated_findings
as: validated_findings
- step: probe
artifact: probed_findings
as: probed_findings
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
FINAL CONVERGENCE — synthesize all findings into actionable proposals.
This is the last convergent step before implementation. You have:
- Validated findings (what survived scrutiny)
- Probed findings (deeper connections, patterns, new discoveries)
- Optional issue/PR context (from the gather step)
Your job: synthesize everything into prioritized, implementable proposals.
Target: {{ input }}
## Synthesis
Transform the validated and probed findings into prioritized proposals:
1. **Group by pattern**: Use the `patterns` from the probe step. Findings that share
a root cause become a single proposal addressing the root cause.
2. **Incorporate second-order effects**: The probe step found connections and cascading
simplifications. Factor these into impact estimates.
3. **Include new discoveries**: The probe step may have found new findings (DVG-NEW-xxx).
These are pre-validated by the provocateur's second pass — include them.
4. **Apply issue/PR context (if present)**: If the context artifact shows
`input_type` is `"issue"` or `"pr"`, use the `focus_hint` as ONE input
when assigning tiers. But do not discard strong proposals just because they
fall outside the issue's scope. The best simplifications are often the ones
the issue author didn't think to ask for.
5. **80/20 analysis**: Which 20% of proposals yield 80% of the simplification?
6. **Dependency ordering**: What must be done first?
## Output
Do NOT write a markdown summary. Write the complete JSON object with every proposal fully populated.
output_artifacts:
- name: proposals
path: .wave/output/convergent-proposals.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/convergent-proposals.json
schema_path: .wave/contracts/convergent-proposals.schema.json
must_pass: true
on_failure: retry
max_retries: 2
# ── Implementation ───────────────────────────────────────────────────
- id: simplify
persona: craftsman
dependencies: [distill]
memory:
inject_artifacts:
- step: converge
artifact: validated_findings
as: validated_findings
- step: distill
artifact: proposals
as: proposals
workspace:
type: worktree
branch: "refactor/{{ pipeline_id }}"
exec:
type: prompt
source: |
IMPLEMENTATION — apply the best simplification proposals.
## Process
Apply ONLY tier-1 proposals, in dependency order.
For each proposal (SMP-xxx):
1. **Announce**: Print which proposal you're applying and what it does
2. **Apply**: Make the code changes
3. **Build**: `go build ./...` — must succeed
4. **Test**: `go test ./...` — must pass
5. **Commit**: If build and tests pass:
```bash
git add <specific-files>
git commit -m "refactor: <proposal title>
Applies SMP-xxx: <brief description>
Source findings: <DVG-xxx list>"
```
6. **Revert if failing**: If tests fail after applying, revert:
```bash
git checkout -- .   # restore tracked files
git clean -fd       # remove any files the proposal newly created
```
Log the failure and move to the next proposal.
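A runnable sketch of this loop, in a throwaway repo: `apply_proposal`, `run_build`, and `run_tests` are hypothetical stand-ins for the real edits, `go build ./...`, and `go test ./...`, and the SMP IDs are examples:
```bash
# Throwaway repo; the helpers below are hypothetical stand-ins.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.email=a@b -c user.name=t commit -qm init --allow-empty
apply_proposal() { echo "edit for $1" >> changes.log; }
run_build() { true; }                   # stands in for: go build ./...
run_tests() { [ "$1" != "SMP-002" ]; }  # simulates a failure for SMP-002
for id in SMP-001 SMP-002; do           # hypothetical tier-1 IDs in order
  echo "Applying $id"
  apply_proposal "$id"
  if run_build && run_tests "$id"; then
    git add changes.log
    git -c user.email=a@b -c user.name=t commit -qm "refactor: apply $id"
  else
    git checkout -- .                   # revert, log, move on
    echo "Skipped $id: tests failed"
  fi
done
applied=$(git log --oneline | grep -c "refactor:")
echo "applied=$applied"
```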
## Final Verification
After all tier-1 proposals are applied (or attempted):
1. Run the full test suite: `go test -race ./...`
2. Run the build: `go build ./...`
3. Summarize what was applied, what was skipped, and net lines changed
## Important
- Each proposal gets its own atomic commit — never combine proposals in a single commit
- If a proposal depends on a failed proposal, skip it too
handover:
contract:
type: test_suite
command: "{{ project.test_command }}"
must_pass: true
on_failure: retry
max_retries: 3
output_artifacts:
- name: result
path: .wave/output/result.md
type: markdown
# ── Reporting ────────────────────────────────────────────────────────
- id: report
persona: navigator
dependencies: [simplify]
memory:
inject_artifacts:
- step: distill
artifact: proposals
as: proposals
- step: simplify
artifact: result
as: result
workspace:
type: worktree
branch: "refactor/{{ pipeline_id }}"
exec:
type: prompt
source: |
REPORT — compose a summary of what recinq found and applied.
Run `git log --oneline` to see the commits on this branch.
## Compose the Report
Write a markdown report containing:
- **Summary**: One-paragraph overview of what recinq found and applied
- **Proposals**: List of all proposals with their tier, impact, and status (applied/skipped/failed)
- **Changes Applied**: Summary of commits made, files changed, net lines removed
- **Remaining Opportunities**: Tier-2 and tier-3 proposals for future consideration
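The commit and line-change figures can be pulled with plain git. A runnable sketch in a throwaway repo (the branch names `main` and `refactor/demo` are examples; use the worktree's actual base branch):
```bash
# Throwaway repo standing in for the worktree; branch names are examples.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git -c user.email=a@b -c user.name=t commit -qm init --allow-empty
git branch -M main                      # normalize the base branch name
git checkout -qb refactor/demo
printf 'x\n' > f.txt
git add f.txt
git -c user.email=a@b -c user.name=t commit -qm "refactor: demo"
commits=$(git log --oneline main..HEAD | wc -l)  # commits on this branch
stat=$(git diff --shortstat main...HEAD)         # net lines changed
echo "$commits commit(s); $stat"
```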
output_artifacts:
- name: report
path: .wave/output/report.md
type: markdown
# ── Publish ─────────────────────────────────────────────────────────
- id: publish
persona: craftsman
dependencies: [report, gather]
memory:
inject_artifacts:
- step: gather
artifact: context
as: context
- step: report
artifact: report
as: report
workspace:
type: worktree
branch: "refactor/{{ pipeline_id }}"
exec:
type: prompt
source: |
PUBLISH — push the branch and create a pull request.
## Steps
1. Push the branch:
```bash
git push -u origin HEAD
```
2. Create a pull request using the report as the body:
```bash
gh pr create --title "$(git log --format=%s -1)" --body-file .wave/artifacts/report
```
3. If the context artifact shows `input_type` is `"issue"` or `"pr"`,
post the PR URL as a comment on the source:
```bash
gh issue comment <number> --repo <repo> --body "Refactoring PR: <pr-url>"
```
or for PRs:
```bash
gh pr comment <number> --repo <repo> --body "Refactoring PR: <pr-url>"
```
4. Write the JSON status report to the output artifact path.
If any `gh` command fails, log the error and continue.
output_artifacts:
- name: pr-result
path: .wave/output/pr-result.json
type: json
handover:
contract:
type: json_schema
source: .wave/output/pr-result.json
schema_path: .wave/contracts/pr-result.schema.json
must_pass: true
on_failure: retry
max_retries: 2
outcomes:
- type: pr
extract_from: .wave/output/pr-result.json
json_path: .pr_url
label: "Pull Request"