Add Wave general-purpose pipelines
ADR, changelog, code-review, debug, doc-sync, explain, feature, hotfix, improve, onboard, plan, prototype, refactor, security-scan, smoke-test, speckit-flow, supervise, test-gen, and more.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This commit is contained in:
97
.wave/pipelines/test-gen.yaml
Normal file
@@ -0,0 +1,97 @@
kind: WavePipeline
metadata:
  name: test-gen
  description: "Generate comprehensive test coverage"
  release: true

input:
  source: cli
  example: "internal/pipeline"

steps:
  - id: analyze-coverage
    persona: navigator
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readonly
    exec:
      type: prompt
      source: |
        Analyze test coverage for: {{ input }}

        1. Run coverage analysis using the project test command with coverage flags
        2. Identify uncovered functions and branches
        3. Find edge cases not tested
        4. Map dependencies that need mocking
    output_artifacts:
      - name: coverage
        path: .wave/output/coverage-analysis.json
        type: json
    handover:
      contract:
        type: json_schema
        source: .wave/output/coverage-analysis.json
        schema_path: .wave/contracts/coverage-analysis.schema.json
      on_failure: retry
      max_retries: 2

  - id: generate-tests
    persona: craftsman
    dependencies: [analyze-coverage]
    memory:
      inject_artifacts:
        - step: analyze-coverage
          artifact: coverage
          as: gaps
    workspace:
      mount:
        - source: ./
          target: /project
          mode: readwrite
    exec:
      type: prompt
      source: |
        Generate tests to improve coverage for: {{ input }}

        Requirements:
        1. Write table-driven tests where appropriate
        2. Cover happy path, error cases, and edge cases
        3. Use descriptive test names (TestFunction_Condition_Expected)
        4. Add mocks for external dependencies
        5. Include benchmarks for performance-critical code

        Follow existing test patterns in the codebase.
    handover:
      contract:
        type: test_suite
        command: "{{ project.test_command }}"
        must_pass: false
      on_failure: retry
      max_retries: 3
    output_artifacts:
      - name: tests
        path: .wave/output/generated-tests.md
        type: markdown

  - id: verify-coverage
    persona: auditor
    dependencies: [generate-tests]
    exec:
      type: prompt
      source: |
        Verify the generated tests:

        1. Run coverage again — did it improve?
        2. Are tests meaningful (not just line coverage)?
        3. Do tests actually catch bugs?
        4. Are mocks appropriate and minimal?
        5. Is test code maintainable?

        Output: coverage delta and quality assessment
    output_artifacts:
      - name: verification
        path: .wave/output/coverage-verification.md
        type: markdown
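The analyze-coverage step's handover validates its JSON artifact against a schema contract before the next step runs. As a minimal sketch of what that gate amounts to — the field names below are hypothetical, since the actual schema in .wave/contracts/coverage-analysis.schema.json is not part of this commit:

```python
# Hypothetical shape of .wave/output/coverage-analysis.json (illustrative only;
# the real contract is defined by coverage-analysis.schema.json).
artifact = {
    "package": "internal/pipeline",
    "coverage_percent": 62.5,
    "uncovered_functions": ["Run", "validateStep"],
    "edge_cases": ["empty step list", "cyclic dependencies"],
    "mock_targets": ["filesystem", "subprocess runner"],
}

def check_contract(doc: dict) -> list[str]:
    """Hand-rolled stand-in for a json_schema contract check:
    verify required fields exist with the expected types."""
    errors = []
    for field, kind in [
        ("package", str),
        ("coverage_percent", (int, float)),
        ("uncovered_functions", list),
    ]:
        if not isinstance(doc.get(field), kind):
            errors.append(f"missing or wrong type: {field}")
    return errors

print(check_contract(artifact))  # prints []
```

With `on_failure: retry` and `max_retries: 2`, a non-empty error list would send the step back to the navigator persona up to twice before the pipeline fails.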