Decision-Linked Development (DLD) — a workflow for recording, linking, and maintaining development decisions alongside code. Skills for planning, recording, implementing, auditing, and documenting decisions via @decision annotations.
You are helping the developer bootstrap DLD in an existing codebase by generating decision records from what the code already does. The goal is not 100% decision coverage — it's to create enough scaffolding that the DLD workflow feels natural for future development.
Use the AskUserQuestion tool for all questions and prompts. This provides a structured input experience for the user rather than waiting for freeform replies.
Shared scripts:
- `../dld-common/scripts/next-id.sh`
- `../dld-common/scripts/regenerate-index.sh`
- `../dld-decide/scripts/create-decision.sh`

Before starting:
- Check that `dld.config.yaml` exists at the repo root. If not, tell the user to run `/dld-init` first and stop.
- This skill retrofits decisions onto existing code; to record new decisions, use `/dld-decide` or `/dld-plan` instead.
- Read `dld.config.yaml` for project structure (flat vs namespaced, decisions directory).
- Read `decisions/PRACTICES.md` if it exists.

Perform a broad analysis of the codebase.
Present a brief summary to the user:
Codebase analysis:
- [Tech stack summary]
- [N] main components/modules identified: [list]
- [Notable patterns or architectural choices observed]
Ask the user:
Would you like to retrofit the entire codebase, or focus on a specific area?
If a specific area, ask them to identify it (a directory, module, or domain concept). Narrow the analysis scope accordingly.
Explain the two modes and ask the user to choose:
Broad mode — Creates high-level decisions covering major features and components. Produces file-level references only (e.g., `path: src/billing/service.ts`). Good for getting started quickly with a general decision scaffold.

Detailed mode — Creates finer-grained decisions covering specific behaviors and design choices. Produces method-level references and annotations (e.g., `symbol: calculateVAT`). Takes longer but creates richer traceability from the start.

Which mode would you like to use?
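The difference between the modes shows up in each record's references. Here is a minimal sketch of what a decision record might look like — the file name, frontmatter field names, and paths are illustrative assumptions, not the actual DLD schema:

```shell
# Hypothetical decision record showing both reference granularities.
# File name and frontmatter layout are assumed for illustration only.
cat > DL-042-vat-calculation.md <<'EOF'
---
id: DL-042
title: VAT calculation strategy
status: accepted
references:
  - path: src/billing/service.ts   # broad mode: file-level reference only
    symbol: calculateVAT           # detailed mode adds a symbol-level reference
---
## Context
...
EOF
```

In broad mode the `symbol` line would simply be omitted, so the record points at the file as a whole.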
Before diving into implementation details, identify foundational decisions that frame the system itself. These establish the context that all other decisions hang on.
These framing decisions may not map to a single function — they often reference the application entry point, main service interface, or core domain model files. Without them, the projected OVERVIEW.md would read as a collection of implementation details with no narrative anchor.
Aim for 1-3 framing decisions depending on the system's complexity. Ask the user to help articulate these — they capture the kind of high-level context that's hardest to infer from code alone.
Then identify decisions at the implementation level, scoped by the chosen mode:
Broad mode: Aim for 1-2 decisions per major component or feature area. Each decision should cover a significant chunk of functionality.
Detailed mode: Aim for decisions wherever there's a meaningful "why" behind the code. Still don't try to cover everything — focus on the decisions a future developer (or AI agent) would most benefit from knowing about.
Ask when unsure: If you encounter code that looks like a deliberate design choice but you can't confidently infer the rationale, ask the user. These are often the most valuable decisions to capture — the ones where the "why" isn't obvious from the code alone. For example:
I see the order service retries with a 7-second delay and max 3 attempts. Is there a specific reason for these values, or is it a general resilience pattern?
Don't ask about everything — focus on cases where the implementation seems intentionally specific and the rationale would be lost without human input.
Present the proposed decisions as a numbered list, with framing decisions first:
Proposed decisions:

System framing:
1. [System purpose/identity] — [what this system is and why it exists]
   - Affects: `src/Application.ts` (or equivalent entry point)
2. [Core domain model] — [key concepts and relationships]
   - Affects: `src/models/Account.ts`, `src/models/Identity.ts`

Implementation decisions:
3. [Title] — [one-line summary]
   - Affects: `src/path/to/code.ts`
4. [Title] — [one-line summary]
   - Affects: `src/path/to/other.ts`
...

Want me to proceed with all of these, remove any, or adjust?
Let the user review and adjust the list before proceeding.
For each approved decision:
```shell
ID=$(bash ../dld-common/scripts/next-id.sh)
printf "## Context\n\n...\n\n## Decision\n\n...\n\n## Rationale\n\n..." | bash ../dld-decide/scripts/create-decision.sh \
  --id "$ID" \
  --title "Title" \
  --tags "tag1, tag2" \
  --body-stdin
```

For each decision, add `@decision(DL-NNN)` annotations to the relevant code:
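As a sketch of what that looks like in practice — the comment style, file contents, and decision ID below are illustrative assumptions (the real annotation convention comes from your project's PRACTICES.md), and the grep shows one way to list every annotated location afterwards:

```shell
# Illustrative only: an annotated source file, plus a grep to find
# all @decision annotations when filling in the references frontmatter.
mkdir -p src/billing
cat > src/billing/service.ts <<'EOF'
// @decision(DL-042) VAT is computed per line item, not per order total
export function calculateVAT(amount: number): number {
  return amount * 0.2;
}
EOF
grep -rn "@decision(" src/
# prints: src/billing/service.ts:1:// @decision(DL-042) VAT is computed ...
```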
Then update each decision record's references field directly in the YAML frontmatter with the annotated code paths and symbols.
Since the code already exists, these decisions go directly to accepted status:
```shell
bash ../dld-common/scripts/update-status.sh DL-NNN accepted
```

Do this for each generated decision.
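When the retrofit generated a contiguous run of IDs, the updates can be batched. A sketch, assuming a hypothetical range DL-042 through DL-045 — this version only prints the commands, so you can review them before piping the output to `bash`:

```shell
# Sketch: batch-accept a run of generated decisions.
# The ID range 42..45 is hypothetical; adjust to your generated IDs.
for n in 42 43 44 45; do
  printf 'bash ../dld-common/scripts/update-status.sh DL-%03d accepted\n' "$n"
done
```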
```shell
bash ../dld-common/scripts/regenerate-index.sh
```

Retrofit complete:
- Created N decisions (DL-XXX through DL-YYY)
- Added M `@decision` annotations across the codebase
- Mode: [broad/detailed]
- Scope: [entire codebase / specific area]
These decisions capture the current state of the codebase — not every design choice, but enough to establish a working decision scaffold.
Next steps:
- Review the generated decisions for accuracy — especially the inferred rationale
- `/dld-snapshot` — generate SNAPSHOT.md and OVERVIEW.md from the new decisions
- `/dld-decide` — record new decisions as you make changes going forward
- `/dld-retrofit` — run again on other areas if you scoped to a specific component