Decision-Linked Development (DLD) — a workflow for recording, linking, and maintaining development decisions alongside code. Skills for planning, recording, implementing, auditing, and documenting decisions via @decision annotations.
You are implementing one or more proposed decisions by making code changes, adding @decision annotations, and updating the decision records.
Shared scripts:

- ../dld-common/scripts/regenerate-index.sh
- ../dld-common/scripts/update-status.sh

Skill-specific scripts:

- scripts/verify-annotations.sh

First, check that dld.config.yaml exists at the repo root. If not, tell the user to run /dld-init first and stop.

The user may request specific decisions (e.g. /dld-implement DL-005 or /dld-implement DL-005 DL-006) or a tag (e.g. /dld-implement tag:payment-gateway). If the user runs /dld-implement without specifying decisions, find all decisions with status: proposed in the records subdirectory (decisions/records/) and implement all of them.

Each requested decision must exist and have status: proposed. If a decision is already accepted, tell the user and skip it. If it doesn't exist, report the error.

Gather project context:

- Read dld.config.yaml for project structure
- Read decisions/PRACTICES.md if it exists — this is where practices guidance is most important. Apply the project's testing approach, code style, error handling patterns, and architecture conventions when writing code.
- Read decisions/records/<namespace>/PRACTICES.md for namespace-specific practices

When multiple decisions are requested, decide whether to implement them individually or as a batch:
When batching, implement all the code together, add annotations for all decisions, then update each decision record and status individually in step 4.
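Finding all proposed decisions (when the user passes no IDs) amounts to scanning each record's frontmatter for status: proposed. A minimal Python sketch — the function name find_proposed, the flat key: value frontmatter parsing, and the fallback to the filename stem are all illustrative assumptions, not part of the actual skill scripts:

```python
from pathlib import Path

def find_proposed(records_dir: str = "decisions/records") -> list[str]:
    """Return IDs of decision records whose frontmatter says `status: proposed`."""
    proposed = []
    for record in Path(records_dir).rglob("*.md"):
        text = record.read_text(encoding="utf-8")
        if not text.startswith("---"):
            continue  # no frontmatter block
        frontmatter = text.split("---", 2)[1]
        fields = {}
        for line in frontmatter.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                fields[key.strip()] = value.strip()
        if fields.get("status") == "proposed":
            # Fall back to the filename stem if the record has no `id` field.
            proposed.append(fields.get("id", record.stem))
    return sorted(proposed)
```

A real implementation would use a proper YAML parser; the flat split above only handles the simple frontmatter shown in this document's examples.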
Read each decision record carefully. Understand the decision's intent, the approach it prescribes, and any constraints it places on the implementation.
Implement the decision(s) by modifying the codebase. Follow the practices manifest if one exists.
Refining decisions during implementation: While implementing, you may discover details that weren't anticipated during planning — a specific threshold value, an edge case handling approach, or a refinement to the original design. Since the decision is still in proposed status, it is mutable and can be updated:

- If the discovery refines the current decision, update the decision record directly.
- If the discovery is a separate concern, use /dld-decide to record a separate decision. If the new discovery invalidates the current decision, it may need to be superseded instead.

The boundary: if the discovery changes the intent of the decision, it's a new decision. If it refines the implementation details, update the current one.
@decision annotations

This step is mandatory. Every implemented decision MUST have at least one @decision(DL-NNN) annotation in the source code. Updating the decision record's references field alone is not sufficient. The annotation in code is what triggers AI agents to look up the decision before modifying the annotated code.
Add @decision(DL-NNN) annotations to the code you modified or created. Place annotations in comments near the relevant code.
Where to annotate:
Annotation format (adapt comment syntax to the language):

```typescript
// @decision(DL-012)
function calculateVAT(order: Order): Money {
  // ...
}
```

```python
# @decision(DL-012)
def calculate_vat(order: Order) -> Money:
    ...
```

Guidelines:
- Multiple decisions can share one annotation line: // @decision(DL-012) @decision(DL-015)
- Use the annotation_prefix from dld.config.yaml (default: @decision)

For each implemented decision:
Update the references field in the decision record's YAML frontmatter. Edit the file directly — add the code paths and symbols that were annotated. Example:
```yaml
references:
  - path: src/billing/vat.ts
    symbol: calculateVAT
  - path: src/billing/vat.test.ts
```

Update status from proposed to accepted:
```bash
bash ../dld-common/scripts/update-status.sh DL-NNN accepted
```

After updating all decision records, run the verification script to confirm every implemented decision has at least one @decision annotation in the codebase:
```bash
bash scripts/verify-annotations.sh DL-005 DL-006
```

Pass all the decision IDs that were implemented. If any are missing annotations, the script will report them and exit with an error. Go back and add the missing annotations before proceeding.
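The contents of verify-annotations.sh aren't shown in this document; conceptually, the check is a search for the annotation marker per decision ID. A rough Python sketch of that logic — the function name, the src root default, and the plain-substring matching are assumptions for illustration, not the script's actual behavior:

```python
from pathlib import Path

def verify_annotations(decision_ids: list[str], root: str = "src") -> list[str]:
    """Return the decision IDs with no @decision(DL-NNN) annotation under `root`."""
    found = set()
    for source in Path(root).rglob("*"):
        if not source.is_file():
            continue
        try:
            text = source.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue  # skip binary or unreadable files
        for decision_id in decision_ids:
            if f"@decision({decision_id})" in text:
                found.add(decision_id)
    # Any ID returned here is missing an annotation and must be fixed.
    return [d for d in decision_ids if d not in found]
```

The shell script presumably reports the missing IDs and exits nonzero when this list is non-empty, which is the failure mode described above.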
Check dld.config.yaml for the implement_review key. If it is set to false, skip this step entirely. If it is true or absent (default: enabled), proceed.
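The default-enabled semantics can be captured in a few lines. A sketch, assuming the config file has already been parsed into a dict (review_enabled is an illustrative name, not part of the skill):

```python
def review_enabled(config: dict) -> bool:
    # Skip the review only when implement_review is explicitly false;
    # a missing key means the review runs (default: enabled).
    return config.get("implement_review", True) is not False
```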
Launch a subagent to review all code changes for correctness and security. Use the Agent tool with a prompt constructed from the template below, replacing all {{placeholders}} with actual values:
You are reviewing code changes for the {{project_name}} project before committing. The changes implement {{decision_count}} decisions ({{decision_range}}) covering:
{{decision_summaries}}
The project uses:
{{tech_stack_summary}}
Review these files for:
- Correctness (logic errors, edge cases)
- Security (SQL injection, directory traversal, XSS, etc.)
- Consistency with existing patterns and conventions
- Type safety issues
- Missing error handling
- Any code that could be simplified
Files to review (read all of them):
{{file_list}}
Also read the practices doc at {{practices_path}}
Do NOT make any changes. Only report findings. Be concise — focus on actual issues, not style preferences ({{linter_name}} handles style). Group findings by severity: critical (must fix), moderate (should fix), and minor (nice to have).

Filling in the placeholders:
- {{project_name}} — from dld.config.yaml or the repo directory name
- {{decision_count}} and {{decision_range}} — count and IDs of the decisions just implemented (e.g., "3 decisions (DL-005 – DL-007)")
- {{decision_summaries}} — one-line summary of each decision's title and intent
- {{tech_stack_summary}} — languages, frameworks, and key libraries from the project (infer from dld.config.yaml, package.json, or equivalent)
- {{file_list}} — all files you created or modified during steps 2–4
- {{practices_path}} — path to decisions/PRACTICES.md (or namespace-specific practices if applicable). Omit the line if no practices file exists.
- {{linter_name}} — the project's linter (e.g., ESLint, Ruff). If unknown, use "the project linter"

Acting on findings:
Note: The review subagent operates with limited context and may flag false positives or misunderstand project-specific patterns. Use your own judgment — you have fuller context from having just written the code. If you're uncertain whether a finding warrants a fix, ask the user before making changes.
If you made fixes, re-run the verification script from step 5 to ensure annotations are still intact.
Finally, regenerate the decisions index:

```bash
bash ../dld-common/scripts/regenerate-index.sh
```

Report the outcome to the user, for example:

Implemented and accepted: DL-NNN (<title>)
Code changes:
- src/billing/vat.ts — modified calculateVAT (annotated with @decision(DL-NNN))
- src/billing/vat.test.ts — added tests

Next steps:
- /dld-decide — record another decision
- /dld-audit — check for drift between decisions and code