code-crispies/.wave/pipelines/test-gen.yaml

kind: WavePipeline
metadata:
  name: test-gen
  description: "Generate comprehensive test coverage"
release: true
skills:
  - "{{ project.skill }}"
input:
  source: cli
  example: "generate tests for internal/pipeline to improve coverage"
steps:
  - id: analyze-coverage
    persona: navigator
    model: claude-haiku
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze test coverage for: {{ input }}
        1. Run coverage analysis using the project test command with coverage flags
        2. Identify uncovered functions and branches
        3. Find edge cases not tested
        4. Map dependencies that need mocking
    output_artifacts:
      - name: coverage
        path: .wave/output/coverage-analysis.json
        type: json
    retry:
      policy: patient
      max_attempts: 2
    handover:
      contract:
        type: json_schema
        source: .wave/output/coverage-analysis.json
        schema_path: .wave/contracts/coverage-analysis.schema.json
        on_failure: retry

  - id: generate-tests
    persona: craftsman
    dependencies: [analyze-coverage]
    thread: test-gen
    max_visits: 3
    memory:
      inject_artifacts:
        - step: analyze-coverage
          artifact: coverage
          as: gaps
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Generate tests to improve coverage for: {{ input }}
        Requirements:
        1. Write table-driven tests where appropriate
        2. Cover happy path, error cases, and edge cases
        3. Use descriptive test names (TestFunction_Condition_Expected)
        4. Add mocks for external dependencies
        5. Include benchmarks for performance-critical code
        Follow existing test patterns in the codebase.
    retry:
      policy: standard
      max_attempts: 3
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
        on_failure: retry
    output_artifacts:
      - name: tests
        path: .wave/output/generated-tests.md
        type: markdown

  - id: run-tests
    type: command
    dependencies: [generate-tests]
    script: "{{ project.contract_test_command }}"

  - id: check-quality
    type: conditional
    dependencies: [run-tests]
    edges:
      - target: verify-coverage
        condition: "outcome=success"
      - target: generate-tests

  - id: verify-coverage
    persona: reviewer
    model: claude-haiku
    dependencies: [check-quality]
    exec:
      type: prompt
      source: |
        Verify the generated tests:
        1. Run coverage again — did it improve?
        2. Are tests meaningful (not just line coverage)?
        3. Do tests actually catch bugs?
        4. Are mocks appropriate and minimal?
        5. Is test code maintainable?
        Output: coverage delta and quality assessment
    output_artifacts:
      - name: verification
        path: .wave/output/coverage-verification.md
        type: markdown
    handover:
      contract:
        type: non_empty_file
        source: .wave/output/coverage-verification.md
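
The `json_schema` handover contract on `analyze-coverage` gates the pipeline on the JSON artifact matching `.wave/contracts/coverage-analysis.schema.json`. As a rough sketch of what such a gate might enforce (the field names below are hypothetical illustrations, not taken from the actual schema file):

```python
import json

# Hypothetical required shape for .wave/output/coverage-analysis.json;
# the real contract lives in .wave/contracts/coverage-analysis.schema.json.
REQUIRED_FIELDS = {
    "uncovered_functions": list,   # e.g. ["pipeline.Run", "pipeline.Close"]
    "untested_edge_cases": list,
    "mock_targets": list,
    "coverage_percent": float,
}

def validate_coverage_artifact(raw: str) -> list[str]:
    """Return a list of contract violations; an empty list means the handover passes."""
    try:
        doc = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"artifact is not valid JSON: {exc}"]
    errors = []
    for field, expected in REQUIRED_FIELDS.items():
        if field not in doc:
            errors.append(f"missing required field: {field}")
        elif not isinstance(doc[field], expected):
            errors.append(f"field {field!r} should be {expected.__name__}")
    return errors

sample = json.dumps({
    "uncovered_functions": ["pipeline.Run"],
    "untested_edge_cases": ["empty input"],
    "mock_targets": ["http.Client"],
    "coverage_percent": 61.4,
})
print(validate_coverage_artifact(sample))  # []
```

With `on_failure: retry` and `max_attempts: 2`, a non-empty violation list would send the step back for one more attempt before the pipeline fails.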