Add GitHub issue pipelines and prompts using gh CLI

gh-issue-impl, gh-issue-research, gh-issue-rewrite, gh-issue-update
pipelines with corresponding prompts for fetch-assess, plan,
implement, and create-pr steps.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-25 17:02:42 +01:00
parent fc24f9a8ab
commit 22370827ee
16 changed files with 1453 additions and 0 deletions

@@ -0,0 +1,47 @@
You are performing a cross-artifact consistency and quality analysis across the
specification, plan, and tasks before implementation begins.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.analyze` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks`
to find FEATURE_DIR and locate spec.md, plan.md, tasks.md
3. Load all three artifacts and build semantic models:
- Requirements inventory from spec.md
- User story/action inventory with acceptance criteria
- Task coverage mapping from tasks.md
- Constitution rule set from `.specify/memory/constitution.md`
4. Run detection passes (limit to 50 findings total):
- **Duplication**: Near-duplicate requirements across artifacts
- **Ambiguity**: Vague adjectives, unresolved placeholders
- **Underspecification**: Requirements missing outcomes, tasks missing file paths
- **Constitution alignment**: Conflicts with MUST principles
- **Coverage gaps**: Requirements with no tasks, tasks with no requirements
- **Inconsistency**: Terminology drift, data entity mismatches, ordering contradictions
5. Assign severity: CRITICAL / HIGH / MEDIUM / LOW
6. Produce a compact analysis report (do NOT modify files — read-only analysis)
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT use WebSearch — all information is in the spec artifacts
- This is a READ-ONLY analysis — do NOT modify any files
## Output
Produce a JSON analysis report matching the injected output schema.
IMPORTANT: If CRITICAL issues are found, document them clearly but do NOT block
the pipeline. The implement step will handle resolution.
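The JSON from step 2 can be consumed along these lines (a sketch: the field names are assumptions about the script's output schema, and `jq` would be the idiomatic parser if available):

```shell
# Hypothetical JSON shape from check-prerequisites.sh --json; the
# field names here are assumptions, not the script's real schema.
json='{"FEATURE_DIR":"specs/001-demo","SPEC_FILE":"specs/001-demo/spec.md"}'

# Dependency-free extraction of one field; jq would be cleaner.
FEATURE_DIR=$(printf '%s' "$json" | sed -n 's/.*"FEATURE_DIR":"\([^"]*\)".*/\1/p')
echo "$FEATURE_DIR"
```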

@@ -0,0 +1,40 @@
You are generating quality checklists to validate requirement completeness before
implementation.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.checklist` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/check-prerequisites.sh --json` to get FEATURE_DIR
3. Load feature context: spec.md, plan.md, tasks.md
4. Generate focused checklists as "unit tests for requirements":
- Each item tests the QUALITY of requirements, not the implementation
- Use format: `- [ ] CHK### - Question about requirement quality [Dimension]`
- Group by quality dimensions: Completeness, Clarity, Consistency, Coverage
5. Create the following checklist files in `FEATURE_DIR/checklists/`:
- `review.md` — overall requirements quality validation
- Additional domain-specific checklists as warranted by the feature
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT use WebSearch — all information is in the spec artifacts
## Checklist Anti-Patterns (AVOID)
- WRONG: "Verify the button clicks correctly" (tests implementation)
- RIGHT: "Are interaction requirements defined for all clickable elements?" (tests requirements)
## Output
Produce a JSON status report matching the injected output schema.
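A `review.md` in the prescribed format might look like this (a sketch: the CHK IDs and questions are illustrative, written to a temp dir for demonstration):

```shell
dir=$(mktemp -d)
mkdir -p "$dir/checklists"
# Illustrative entries only; real items are derived from the loaded spec.
cat > "$dir/checklists/review.md" <<'EOF'
- [ ] CHK001 - Are acceptance criteria defined for every user story? [Completeness]
- [ ] CHK002 - Is every vague adjective replaced by a measurable target? [Clarity]
- [ ] CHK003 - Do spec and plan use the same name for each entity? [Consistency]
EOF
grep -c '^- \[ \] CHK' "$dir/checklists/review.md"
```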

@@ -0,0 +1,42 @@
You are refining a feature specification by identifying and resolving ambiguities.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.clarify` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` to confirm paths
3. Load the current spec and perform a focused ambiguity scan across:
- Functional scope and domain model
- Integration points and edge cases
- Terminology consistency
4. Generate up to 5 clarification questions (prioritized)
5. For each question, select the best option based on codebase context
6. Integrate each resolution directly into the spec file
7. Save the updated spec
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT use WebSearch — all clarifications should be resolved from codebase
context and the existing spec. The specify step already did the research.
- Keep the scope tight: only fix genuine ambiguities, don't redesign the spec
## Non-Interactive Mode
Since this runs in a pipeline, resolve all clarifications autonomously:
- Select the recommended option based on codebase patterns and existing architecture
- Document the rationale for each choice in the Clarifications section
- Err on the side of commonly-accepted industry standards
## Output
Produce a JSON status report matching the injected output schema.
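The rationale documentation required in non-interactive mode could take this shape (a sketch: the question and answer are illustrative, appended to a temp file standing in for the spec):

```shell
spec=$(mktemp)
# Append one illustrative resolution; real entries come from the ambiguity scan.
cat >> "$spec" <<'EOF'
## Clarifications
- Q: How should issue lists be paginated?
  A: Cursor-based, matching the existing API endpoints. Chosen from
     codebase context; introduces no new dependency.
EOF
grep -c '^- Q:' "$spec"
```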

@@ -0,0 +1,53 @@
You are creating a pull request for the implemented feature and requesting a review.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
1. Find the branch name and feature directory from the spec info artifact
2. **Verify implementation**: Run `go test -race ./...` one final time to confirm
all tests pass. If tests fail, fix them before proceeding.
3. **Stage changes**: Review all modified and new files with `git status` and `git diff`.
Stage relevant files — exclude any sensitive files (.env, credentials).
4. **Commit**: Create a well-structured commit (or multiple commits if logical):
- Use conventional commit prefixes: `feat:`, `fix:`, `refactor:`, `test:`, `docs:`
- Write concise commit messages focused on the "why"
- Do NOT include Co-Authored-By or AI attribution lines
5. **Push**: Push the branch to the remote repository:
```bash
git push -u origin HEAD
```
6. **Create Pull Request**: Use `gh pr create` with a descriptive summary:
```bash
gh pr create --title "<concise title>" --body "<PR body with summary and test plan>"
```
The PR body should include:
- Summary of changes (3-5 bullet points)
- Link to the spec file in the specs/ directory
- Test plan describing how changes were validated
- Any known limitations or follow-up work needed
7. **Request Copilot Review**: After the PR is created, request a review from Copilot:
```bash
gh pr edit --add-reviewer "copilot"
```
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
## Output
Produce a JSON status report matching the injected output schema.
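Step 3's "stage relevant files, exclude sensitive ones" can be sketched as follows (illustrative file names, run in a throwaway repo):

```shell
dir=$(mktemp -d) && cd "$dir"
git init -q
printf 'SECRET=1\n' > .env
printf 'package main\n' > main.go
git add -A
git rm --cached -q .env          # keep credentials out of the index
git diff --cached --name-only    # confirm only intended files are staged
```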

@@ -0,0 +1,49 @@
You are implementing a feature according to the specification, plan, and task breakdown.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.implement` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks`
to find FEATURE_DIR, load tasks.md, plan.md, and all available artifacts
3. Check the status of all checklists — if any are incomplete, note them but proceed
4. Parse tasks.md and extract phase structure, dependencies, and execution order
5. Execute implementation phase-by-phase:
**Setup first**: Initialize project structure, dependencies, configuration
**Tests before code**: Write tests for contracts and entities (TDD approach)
**Core development**: Implement models, services, CLI commands, endpoints
**Integration**: Database connections, middleware, logging, external services
**Polish**: Unit tests, performance optimization, documentation
6. For each completed task, mark it as `[X]` in tasks.md
7. Run `go test -race ./...` after each phase to catch regressions early
8. Final validation: verify all tasks complete, tests pass, spec requirements met
## Agent Usage — USE UP TO 6 AGENTS
Maximize parallelism with up to 6 Task agents for independent work:
- Agents 1-2: Setup and foundational tasks (Phase 1-2)
- Agents 3-4: Core implementation tasks (parallelizable [P] tasks)
- Agent 5: Test writing and validation
- Agent 6: Integration and polish tasks
Coordinate agents to respect task dependencies:
- Sequential tasks (no [P] marker) must complete before dependents start
- Parallel tasks [P] affecting different files can run simultaneously
- Run test validation between phases
## Error Handling
- If a task fails, halt dependent tasks but continue independent ones
- Provide clear error context for debugging
- If tests fail, fix the issue before proceeding to the next phase
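The checkbox update in step 6 can be done mechanically (a sketch: the task line is illustrative and GNU sed's in-place flag is assumed):

```shell
dir=$(mktemp -d)
printf -- '- [ ] T001 [P] Create Issue model in internal/models/issue.go\n' > "$dir/tasks.md"
# Mark T001 complete once its implementation passes tests.
sed -i 's/^- \[ \] T001/- [X] T001/' "$dir/tasks.md"
cat "$dir/tasks.md"
```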

@@ -0,0 +1,41 @@
You are creating an implementation plan for a feature specification.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.plan` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/setup-plan.sh --json` to get FEATURE_SPEC, IMPL_PLAN,
SPECS_DIR, and BRANCH paths
3. Load the feature spec and `.specify/memory/constitution.md`
4. Follow the plan template phases:
**Phase 0 — Outline & Research**:
- Extract unknowns from the spec (NEEDS CLARIFICATION markers, tech decisions)
- Research best practices for each technology choice
- Consolidate findings into `research.md` with Decision/Rationale/Alternatives
**Phase 1 — Design & Contracts**:
- Extract entities from spec → write `data-model.md`
- Generate API contracts from functional requirements → `/contracts/`
- Run `.specify/scripts/bash/update-agent-context.sh claude`
5. Evaluate constitution compliance at each phase gate
6. Stop after Phase 1 — report branch, plan path, and generated artifacts
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT use WebSearch — all information is in the spec and codebase
## Output
Produce a JSON status report matching the injected output schema.
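A `research.md` entry in the Decision/Rationale/Alternatives shape might look like this (illustrative content, written to a temp dir):

```shell
dir=$(mktemp -d)
cat > "$dir/research.md" <<'EOF'
## Decision: Use the gh CLI for all GitHub access
- Rationale: already a prerequisite for these pipelines; avoids adding an API client dependency
- Alternatives considered: go-github (heavier), raw REST calls (more auth handling)
EOF
grep -c '^## Decision' "$dir/research.md"
```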

@@ -0,0 +1,50 @@
You are creating a feature specification for the following request:
{{ input }}
## Working Directory
You are running in an **isolated git worktree** checked out at `main` (detached HEAD).
Your working directory IS the project root. All git operations here are isolated
from the main working tree and will not affect it.
Use `create-new-feature.sh` to create the feature branch from this clean starting point.
## Instructions
Follow the `/speckit.specify` workflow to generate a complete feature specification:
1. Generate a concise short name (2-4 words) for the feature branch
2. Check existing branches to determine the next available number:
```bash
git fetch --all --prune
git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-'
git branch | grep -E '^[* ]*[0-9]+-'
```
3. Run the feature creation script:
```bash
.specify/scripts/bash/create-new-feature.sh --json --number <N> --short-name "<name>" "{{ input }}"
```
4. Load `.specify/templates/spec-template.md` for the required structure
5. Write the specification to the SPEC_FILE returned by the script
6. Create the quality checklist at `FEATURE_DIR/checklists/requirements.md`
7. Run self-validation against the checklist (up to 3 iterations)
## Agent Usage
Use 1-3 Task agents to parallelize independent work:
- Agent 1: Analyze the codebase to understand existing patterns and architecture
- Agent 2: Research domain-specific best practices for the feature
- Agent 3: Draft specification sections in parallel
## Quality Standards
- Focus on WHAT and WHY, not HOW (no implementation details)
- Every requirement must be testable and unambiguous
- Maximum 3 `[NEEDS CLARIFICATION]` markers — make informed guesses for the rest
- Include user stories with acceptance criteria, data model, edge cases
- Success criteria must be measurable and technology-agnostic
## Output
Produce a JSON status report matching the injected output schema.
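The "next available number" in step 2 can be derived from the combined branch listings like this (a sketch with illustrative branch names):

```shell
# Combined output of the remote and local branch listings (illustrative).
branches='003-user-auth
001-initial-setup
002-api-contracts'
# Highest existing prefix; base 10 is forced so leading zeros are safe.
last=$(printf '%s\n' "$branches" | grep -oE '^[0-9]+' | sort -n | tail -n1)
printf '%03d\n' "$((10#$last + 1))"
```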

@@ -0,0 +1,52 @@
You are generating an actionable, dependency-ordered task breakdown for implementation.
Feature context: {{ input }}
## Working Directory
You are running in an **isolated git worktree** shared with previous pipeline steps.
Your working directory IS the project root. The feature branch was created by a
previous step and is already checked out.
## Instructions
Follow the `/speckit.tasks` workflow:
1. Find the feature directory and spec file path from the spec info artifact
2. Run `.specify/scripts/bash/check-prerequisites.sh --json` to get FEATURE_DIR
and AVAILABLE_DOCS
3. Load from FEATURE_DIR:
- **Required**: plan.md (tech stack, structure), spec.md (user stories, priorities)
- **Optional**: data-model.md, contracts/, research.md, quickstart.md
4. Execute task generation:
- Extract user stories with priorities (P1, P2, P3) from spec.md
- Map entities and endpoints to user stories
- Generate tasks organized by user story
5. Write `tasks.md` following the strict checklist format:
```
- [ ] [TaskID] [P?] [Story?] Description with file path
```
6. Organize into phases:
- Phase 1: Setup (project initialization)
- Phase 2: Foundational (blocking prerequisites)
- Phase 3+: One phase per user story (priority order)
- Final: Polish & cross-cutting concerns
## CONSTRAINTS
- Do NOT spawn Task subagents — work directly in the main context
- Do NOT use WebSearch — all information is in the spec artifacts
- Keep the scope tight: generate tasks from existing artifacts only
## Quality Requirements
- Every task must have a unique ID (T001, T002...), description, and file path
- Mark parallelizable tasks with [P]
- Each user story phase must be independently testable
- Tasks must be specific enough for an LLM to complete without additional context
## Output
Produce a JSON status report matching the injected output schema.
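A user-story phase in the generated tasks.md would follow the strict checklist format like this (illustrative tasks, written to a temp dir):

```shell
dir=$(mktemp -d)
cat > "$dir/tasks.md" <<'EOF'
## Phase 3: User Story 1 (P1)
- [ ] T010 [P] [US1] Add Issue struct in internal/models/issue.go
- [ ] T011 [US1] Wire the fetch command in cmd/fetch.go
EOF
grep -cE '^- \[ \] T[0-9]{3}' "$dir/tasks.md"
```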