kind: WavePipeline
metadata:
name: impl-recinq
description: "Rethink and simplify code using divergent-convergent thinking (Double Diamond)"
release: true
skills:
- "{{ project.skill }}"
- software-design
input:
source: cli
example: "internal/pipeline"
# Pipeline structure implements the Double Diamond:
#
#   gather → diverge → converge → probe → distill → simplify → report
#            ╰─ Diamond 1 ────╯   ╰─ Diamond 2 ─╯   ╰implement╯
#            (discover) (define)  (develop) (deliver)
#
# Each step gets its own context window and cognitive mode.
# Fresh memory at every boundary — no mode-switching within a step.
steps:
- id: gather
persona: "gitea-analyst"
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
CONTEXT GATHERING — parse input and fetch GitHub context if applicable.
Input: {{ input }}
## Instructions
Determine what kind of input this is:
1. **GitHub Issue URL**: Contains `github.com` and `/issues/`
- Extract owner/repo and issue number from the URL
- Run: `{{ forge.cli_tool }} issue view <number> --repo <owner/repo> --json title,body,labels`
- Extract a `focus_hint` summarizing what should be simplified
2. **GitHub PR URL**: Contains `github.com` and `/pull/`
- Extract owner/repo and PR number from the URL
- Run: `{{ forge.cli_tool }} {{ forge.pr_command }} view <number> --repo <owner/repo> --json title,body,labels,files`
- Extract a `focus_hint` summarizing what the PR is about
3. **Local path or description**: Anything else
- Set `input_type` to `"local"`
- Pass through the original input as-is
## Output
IMPORTANT: The output MUST be valid JSON. Do NOT include markdown fencing.
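For orientation only (do not copy the fence into your output): the artifact might look roughly like the sketch below. Field names are inferred from how later steps read this artifact; the authoritative shape is `.wave/contracts/recinq-context.schema.json`, and the repo and issue values are placeholders.
```json
{
  "input_type": "issue",
  "original_input": "https://github.com/acme/widgets/issues/42",
  "focus_hint": "Collapse the adapter layers around the pipeline runner",
  "source": {
    "repo": "acme/widgets",
    "number": 42,
    "title": "Runner abstraction feels heavier than it needs to be"
  }
}
```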
output_artifacts:
- name: context
path: .wave/output/context.json
type: json
retry:
policy: patient
max_attempts: 2
handover:
contract:
type: json_schema
source: .wave/output/context.json
schema_path: .wave/contracts/recinq-context.schema.json
must_pass: true
on_failure: retry
# ── Diamond 1: Discover (DIVERGENT) ──────────────────────────────────
- id: diverge
persona: provocateur
model: claude-haiku
dependencies: [gather]
memory:
inject_artifacts:
- step: gather
artifact: context
as: context
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
DIVERGENT THINKING — cast the widest net to find simplification opportunities.
Target: {{ input }}
## Starting Point
The injected `context` artifact carries the parsed input from the gather step.
If `input_type` is `"issue"` or `"pr"`, the `focus_hint` tells you WHERE to start looking —
but do NOT limit yourself to what the issue describes. Use it as a seed, then expand outward.
Follow dependency chains, trace callers, explore adjacent modules. The issue author doesn't
know what they don't know — that's YOUR job.
If `input_type` is `"local"`, use the `original_input` field as the target path.
If input is empty or "." — analyze the whole project.
If input is a path — focus on that module/directory but consider its connections.
## Your Mission
Challenge EVERYTHING. Question every assumption. Hunt complexity.
## What to Look For
1. **Premature abstractions**: Interfaces with one implementation. Generic code used once.
"What if we just inlined this?"
2. **Unnecessary indirection**: Layers that pass-through without adding value.
Wrappers around wrappers. "How many hops to get to the actual logic?"
3. **Overengineering**: Configuration for things that never change. Plugins with one plugin.
Feature flags for features that are always on. "Is this complexity earning its keep?"
4. **YAGNI violations**: Code written for hypothetical future needs that never arrived.
"When was this last changed? Does anyone actually use this path?"
5. **Accidental complexity**: Things that are hard because of how they're built, not because
the problem is hard. "Could this be 10x simpler if we started over?"
6. **Copy-paste drift**: Similar-but-slightly-different code that should be unified or
intentionally differentiated. "Are these differences meaningful or accidental?"
7. **Dead weight**: Unused exports, unreachable code, obsolete comments, stale TODOs.
`grep -r` for references. If nothing uses it, flag it.
8. **Naming lies**: Names that don't match what the code actually does.
"Does this 'manager' actually manage anything?"
9. **Dependency gravity**: Modules that pull in everything. Import graphs that are too dense.
"What's the blast radius of changing this?"
## Evidence Requirements
For EVERY finding, gather concrete metrics:
- `wc -l` for line counts
- `grep -r` for usage/reference counts
- `git log --oneline <file> | wc -l` for change frequency
- List the actual files involved
## Output
Each finding gets a unique ID: DVG-001, DVG-002, etc.
Be AGGRESSIVE — flag everything suspicious. The convergent phase will filter.
It's better to have 30 findings with 10 false positives than 5 findings that miss
the big opportunities.
Include a metrics_summary with totals by category and severity, plus hotspot files
that appear in multiple findings.
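A minimal sketch of the expected shape, assuming illustrative field names and file paths; the contract at `.wave/contracts/divergent-findings.schema.json` is authoritative.
```json
{
  "findings": [
    {
      "id": "DVG-001",
      "category": "unnecessary_indirection",
      "severity": "high",
      "claim": "RunnerFactory wraps a single concrete runner and adds no behaviour",
      "files": ["internal/pipeline/runner_factory.go"],
      "evidence": {
        "line_count": 184,
        "reference_count": 3,
        "change_frequency": 2
      }
    }
  ],
  "metrics_summary": {
    "totals_by_category": {"unnecessary_indirection": 1},
    "totals_by_severity": {"high": 1},
    "hotspot_files": ["internal/pipeline/runner_factory.go"]
  }
}
```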
output_artifacts:
- name: findings
path: .wave/output/divergent-findings.json
type: json
retry:
policy: patient
max_attempts: 2
handover:
contract:
type: json_schema
source: .wave/output/divergent-findings.json
schema_path: .wave/contracts/divergent-findings.schema.json
must_pass: true
on_failure: retry
# ── Diamond 1: Define (CONVERGENT) ───────────────────────────────────
- id: converge
persona: validator
dependencies: [diverge]
memory:
inject_artifacts:
- step: diverge
artifact: findings
as: divergent_findings
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
CONVERGENT VALIDATION — separate signal from noise.
This is a purely CONVERGENT step. Your job is analytical, not creative.
Judge every finding on technical merit alone. No speculation, no new ideas.
Target: {{ input }}
## For Every DVG-xxx Finding
1. **Read the actual code** cited as evidence — don't trust the provocateur's summary
2. **Verify the metrics** — check reference counts, line counts, change frequency
3. **Assess**: Is this a real problem or a false positive?
- Does the "premature abstraction" actually have a second implementation planned?
- Is the "dead weight" actually used via reflection or codegen?
- Is the "unnecessary indirection" actually providing error handling or logging?
4. **Classify**:
- `CONFIRMED` — real problem, metrics check out, code supports the claim
- `PARTIALLY_CONFIRMED` — real concern but overstated, or scope is narrower than claimed
- `REJECTED` — false positive, justified complexity, or incorrect metrics
5. **Explain**: For every classification, write WHY. For rejections, explain what
the provocateur got wrong.
Be RIGOROUS. The provocateur was told to be aggressive — your job is to be skeptical.
A finding that survives your scrutiny is genuinely worth addressing.
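Illustrative shape only, with hypothetical field names and rationales; `.wave/contracts/validated-findings.schema.json` defines what is actually required.
```json
{
  "validated_findings": [
    {
      "finding_id": "DVG-001",
      "status": "PARTIALLY_CONFIRMED",
      "rationale": "The wrapper does add retry handling, but two of its four methods are pure pass-through."
    },
    {
      "finding_id": "DVG-002",
      "status": "REJECTED",
      "rationale": "The 'unused' export is referenced via code generation, so the reference count was misleading."
    }
  ]
}
```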
output_artifacts:
- name: validated_findings
path: .wave/output/validated-findings.json
type: json
retry:
policy: patient
max_attempts: 2
handover:
contract:
type: json_schema
source: .wave/output/validated-findings.json
schema_path: .wave/contracts/validated-findings.schema.json
must_pass: true
on_failure: retry
# ── Diamond 2: Develop (DIVERGENT) ───────────────────────────────────
- id: probe
persona: provocateur
dependencies: [converge]
memory:
inject_artifacts:
- step: diverge
artifact: findings
as: divergent_findings
- step: converge
artifact: validated_findings
as: validated_findings
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
SECOND DIVERGENCE — probe deeper into confirmed findings.
The first pass cast a wide net. The validator filtered it down.
Now YOU go deeper on what survived. This is DIVERGENT thinking again —
expand, connect, discover what the first pass missed.
Focus on findings with status CONFIRMED or PARTIALLY_CONFIRMED.
Target: {{ input }}
## Your Mission
For each confirmed finding, probe OUTWARD:
1. **Trace the dependency graph**: What calls this code? What does it call?
If we simplify X, what happens to its callers and callees?
2. **Find second-order effects**: If we remove abstraction A, does layer B
also become unnecessary? Do test helpers simplify? Do error paths collapse?
3. **Spot patterns across findings**: Do three findings all stem from the same
over-abstraction? Is there a root cause that would address multiple DVGs at once?
4. **Discover what was MISSED**: With the validated findings as context, look for
related opportunities the first pass didn't see. The confirmed findings reveal
the codebase's real pressure points — what else lurks nearby?
5. **Challenge the rejections**: Were any findings rejected too hastily?
Read the validator's rationale — do you disagree?
## Evidence Requirements
Same standard as the first diverge pass:
- `wc -l` for line counts
- `grep -r` for usage/reference counts
- `git log --oneline <file> | wc -l` for change frequency
- Concrete file paths and code references
## Output
Go DEEP. The first pass was wide; this pass is deep. Follow every thread.
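One possible shape, with assumed field names; `.wave/contracts/probed-findings.schema.json` decides what is actually required.
```json
{
  "probed_findings": [
    {
      "finding_id": "DVG-001",
      "second_order_effects": ["Removing the factory also retires its test double"],
      "related_findings": ["DVG-004"]
    }
  ],
  "new_findings": [
    {
      "id": "DVG-NEW-001",
      "claim": "The config loader repeats the same pass-through pattern",
      "files": ["internal/pipeline/config_loader.go"],
      "evidence": {"line_count": 96, "reference_count": 2}
    }
  ],
  "challenged_rejections": []
}
```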
output_artifacts:
- name: probed_findings
path: .wave/output/probed-findings.json
type: json
retry:
policy: patient
max_attempts: 2
handover:
contract:
type: json_schema
source: .wave/output/probed-findings.json
schema_path: .wave/contracts/probed-findings.schema.json
must_pass: true
on_failure: retry
# ── Diamond 2: Deliver (CONVERGENT) ──────────────────────────────────
- id: distill
persona: synthesizer
dependencies: [probe]
memory:
inject_artifacts:
- step: gather
artifact: context
as: context
- step: converge
artifact: validated_findings
as: validated_findings
- step: probe
artifact: probed_findings
as: probed_findings
workspace:
mount:
- source: ./
target: /project
mode: readonly
exec:
type: prompt
source: |
FINAL CONVERGENCE — write a JSON object to `.wave/output/convergent-proposals.json`.
Target: {{ input }}
Read ALL injected artifacts first:
- `.wave/artifacts/context` — issue/PR context from the gather step
- `.wave/artifacts/validated_findings` — findings that survived scrutiny
- `.wave/artifacts/probed_findings` — deeper connections, patterns, new discoveries
Then write a SINGLE JSON object (no markdown, no prose, no code fences) to
the output file using the Write tool. The file must start with `{` and end with `}`.
## How to populate each field
**`source_findings`**: Count how many findings you reviewed, confirmed, partially
confirmed, or rejected. Include rejection reasons.
**`validation_summary`**: One paragraph describing the converge→diverge→converge
validation process and what survived.
**`proposals`** array — for each proposal:
- `id`: SMP-001, SMP-002, etc.
- Group findings that share a root cause into a single proposal
- Incorporate second-order effects from the probe step into `impact` estimates
- Include DVG-NEW-xxx discoveries from the probe step (pre-validated)
- If context shows `input_type` is `"issue"` or `"pr"`, use `focus_hint` as ONE
input when assigning `tier`, but do not discard strong proposals outside scope
- `tier`: 1=do now, 2=do next, 3=consider later
- `files`: list actual file paths affected
- `dependencies`: SMP-xxx IDs that must be applied first
**`eighty_twenty_analysis`**: Which 20% of proposals yield 80% of the benefit?
**`timestamp`**: ISO 8601 datetime.
IMPORTANT: The Write tool call must contain ONLY the JSON object.
Contract validation will reject non-JSON output.
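A trimmed sketch using the field names above (the fence is for illustration here; the file itself must contain only the JSON object). The sub-structure and values are illustrative, and `.wave/contracts/convergent-proposals.schema.json` is authoritative.
```json
{
  "source_findings": {
    "reviewed": 24,
    "confirmed": 9,
    "partially_confirmed": 4,
    "rejected": 11,
    "rejection_reasons": ["used via code generation", "complexity justified by error handling"]
  },
  "validation_summary": "Nine findings survived both validation passes; most trace back to one over-general runner abstraction.",
  "proposals": [
    {
      "id": "SMP-001",
      "title": "Inline the runner factory into the pipeline runner",
      "tier": 1,
      "files": ["internal/pipeline/runner_factory.go", "internal/pipeline/runner.go"],
      "dependencies": [],
      "impact": "Removes one indirection layer and roughly 180 lines",
      "source_findings": ["DVG-001", "DVG-NEW-001"]
    }
  ],
  "eighty_twenty_analysis": "SMP-001 alone accounts for most of the expected line reduction.",
  "timestamp": "2026-01-01T00:00:00Z"
}
```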
output_artifacts:
- name: proposals
path: .wave/output/convergent-proposals.json
type: json
retry:
policy: standard
max_attempts: 2
handover:
contract:
type: json_schema
source: .wave/output/convergent-proposals.json
schema_path: .wave/contracts/convergent-proposals.schema.json
must_pass: true
on_failure: retry
# ── Implementation ───────────────────────────────────────────────────
- id: simplify
persona: craftsman
dependencies: [distill]
memory:
inject_artifacts:
- step: converge
artifact: validated_findings
as: validated_findings
- step: distill
artifact: proposals
as: proposals
workspace:
type: worktree
branch: "refactor/{{ pipeline_id }}"
exec:
type: prompt
source: |
IMPLEMENTATION — apply the best simplification proposals.
## Process
Apply ONLY tier-1 proposals, in dependency order.
For each proposal (SMP-xxx):
1. **Announce**: Print which proposal you're applying and what it does
2. **Apply**: Make the code changes
3. **Build**: Run the project's build command — must succeed
4. **Test**: Run the project's test suite — must pass
5. **Commit**: If build and tests pass:
```bash
git add <specific-files>
git commit -m "refactor: <proposal title>
Applies SMP-xxx: <brief description>
Source findings: <DVG-xxx list>"
```
6. **Revert if failing**: If tests fail after applying, revert:
```bash
git checkout -- .
```
Log the failure and move to the next proposal.
## Final Verification
After all tier-1 proposals are applied (or attempted):
1. Run the full test suite
2. Run the project's build command
3. Summarize what was applied, what was skipped, and net lines changed
## Important
- Each proposal gets its own atomic commit
- Never combine proposals in a single commit
- If a proposal depends on a failed proposal, skip it too
retry:
policy: standard
max_attempts: 3
handover:
contract:
type: test_suite
command: "{{ project.test_command }}"
must_pass: true
on_failure: retry
compaction:
trigger: "token_limit_80%"
persona: summarizer
output_artifacts:
- name: result
path: .wave/output/result.md
type: markdown
# ── Reporting ────────────────────────────────────────────────────────
- id: report
persona: navigator
dependencies: [simplify]
memory:
inject_artifacts:
- step: distill
artifact: proposals
as: proposals
- step: simplify
artifact: result
as: result
workspace:
type: worktree
branch: "refactor/{{ pipeline_id }}"
exec:
type: prompt
source: |
REPORT — compose a summary of what recinq found and applied.
Run `git log --oneline` to see the commits on this branch.
## Compose the Report
Write a markdown report containing:
- **Summary**: One-paragraph overview of what recinq found and applied
- **Proposals**: List of all proposals with their tier, impact, and status (applied/skipped/failed)
- **Changes Applied**: Summary of commits made, files changed, net lines removed
- **Remaining Opportunities**: Tier-2 and tier-3 proposals for future consideration
output_artifacts:
- name: report
path: .wave/output/report.md
type: markdown
handover:
contract:
type: non_empty_file
source: .wave/output/report.md
# ── Publish ─────────────────────────────────────────────────────────
- id: publish
persona: craftsman
dependencies: [report, gather]
memory:
inject_artifacts:
- step: gather
artifact: context
as: context
- step: report
artifact: report
as: report
workspace:
type: worktree
branch: "refactor/{{ pipeline_id }}"
exec:
type: prompt
source: |
PUBLISH — push the branch and create a pull request.
## Steps
1. Push the branch:
```bash
git push -u origin HEAD
```
2. Create a pull request using the report as the body:
```bash
COMMIT_SUBJECT=$(git log --format=%s -1)
{{ forge.cli_tool }} {{ forge.pr_command }} create --title "refactor: $COMMIT_SUBJECT" --body-file .wave/artifacts/report
```
3. If the context artifact shows `input_type` is `"issue"` or `"pr"`,
post the PR URL as a comment on the source:
```bash
echo "Refactoring PR: <pr-url>" > /tmp/recinq-comment.md
{{ forge.cli_tool }} issue comment <number> --repo <repo> --body-file /tmp/recinq-comment.md
```
or for PRs:
```bash
echo "Refactoring PR: <pr-url>" > /tmp/recinq-comment.md
{{ forge.cli_tool }} {{ forge.pr_command }} comment <number> --repo <repo> --body-file /tmp/recinq-comment.md
```
4. Write the JSON status report to the output artifact path.
If any `{{ forge.cli_tool }}` command fails, log the error and continue.
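For example (only `pr_url` is read downstream by the outcomes block; the other fields and values are illustrative):
```json
{
  "pr_url": "https://github.com/acme/widgets/pull/101",
  "branch": "refactor/<pipeline_id>",
  "pushed": true,
  "comment_posted": true,
  "errors": []
}
```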
output_artifacts:
- name: pr-result
path: .wave/output/pr-result.json
type: json
retry:
policy: aggressive
max_attempts: 2
handover:
contract:
type: json_schema
source: .wave/output/pr-result.json
schema_path: .wave/contracts/pr-result.schema.json
must_pass: true
on_failure: retry
outcomes:
- type: pr
extract_from: .wave/output/pr-result.json
json_path: .pr_url
label: "Pull Request"