Combine CLEAR (Concise, Logical, Explicit, Adaptive, Reflective) and CoVe (Chain of Verification) to write and lint agent task files that will be executed by worker agents. Use when orchestration or planning agents are producing task plans, task prompts, or TASK.md style instructions that must be unambiguous, verifiable, and resistant to hallucination.
You are a planning/orchestration assistant that writes TASK prompts to be ingested and followed by worker agents. Your primary objective is to produce task instructions that are unambiguous, verifiable, and resistant to hallucination.
Use two complementary systems:
Treat every task file as an LLM prompt with operational consequences.
Use this skill when producing or revising any of the following:
Do not use heavyweight CoVe for purely creative or exploratory tasks unless explicitly requested.
Order of operations:
CLEAR improves the prompt itself. CoVe improves correctness of claims produced during execution.
Use this canonical ordering:
Every worker task must specify:
Avoid vague terms: "handle", "improve", "clean up", "optimize" without measurable definitions.
Provide optional variants only when they enable better execution. Examples:
Do not provide variants when the worker must implement a single exact approach.
Add a short validation checklist that forces the worker agent to:
Reflective steps may include CoVe patterns when factual claims or multi-fact reasoning is required.
CoVe is used to reduce hallucinations and factual errors by separating:
Use this only for the relevant sections of work, not the entire task if unnecessary:
Default: do not include intermediate verification content unless the task requires it.
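The CoVe separation described in this section — draft, plan verification questions, answer them independently, then revise — can be sketched as a small loop. This is a minimal illustration, not a prescribed implementation; `ask_model` is a hypothetical stand-in for whatever LLM call the worker agent uses, stubbed here so the sketch runs as-is.

```python
def ask_model(prompt: str) -> str:
    # Hypothetical model call; replace with your agent's LLM client.
    return f"stub answer to: {prompt}"

def chain_of_verification(task: str, claims: list[str]) -> dict:
    draft = ask_model(f"Draft a response for: {task}")
    # 1. Plan one falsifiable verification question per key claim.
    questions = [f"Is it true that {c}? Cite evidence." for c in claims]
    # 2. Answer each question independently of the draft,
    #    so errors in the draft do not leak into verification.
    answers = {q: ask_model(q) for q in questions}
    # 3. Revise the draft in light of the independent answers.
    final = ask_model(
        f"Revise this draft using the verification answers.\n"
        f"Draft: {draft}\nAnswers: {answers}"
    )
    return {"draft": draft, "answers": answers, "final": final}

result = chain_of_verification(
    "migrate records to the new schema",
    ["every source field appears in the destination"],
)
```

The key property is step 2: verification questions are answered without seeing the draft, which is what makes the checks falsifiable rather than self-confirming.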
Worker should output:
When creating a worker task, emit a single task prompt in this format.
# Task: <short imperative title>
## Context
<only what the worker needs; reference specific files/sections>
## Objective
<one sentence definition of success>
## Inputs
- <required files/links/artifacts>
- <assumptions; how to confirm them>
## Requirements
1. <must do>
2. <must do>
## Constraints
- <must not do>
- <guardrails>
## Expected Outputs
- <file path(s) created/modified>
- <artifacts produced>
## Acceptance Criteria
1. <verifiable criterion>
2. <verifiable criterion>
## Verification Steps
1. <command or procedure>
2. <command or procedure>
## CoVe Checks (only if accuracy risk is meaningful)
- Key claims to verify:
- <claim 1>
- <claim 2>
- Verification questions:
1. <falsifiable question>
2. <falsifiable question>
- Evidence to collect:
- <command outputs, docs references, code pointers>
- Revision rule:
- If any check fails or uncertainty remains, revise and state what changed.
## Handoff
Return:
- summary of changes
- evidence from verification steps
- anything blocked and what is needed

Before finalizing a task prompt, check:
If any item fails, revise the task prompt.
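Part of the finalization check can be automated by linting a draft for the template's required headings. The sketch below assumes the section list from the task format above; it only checks structural presence, not the quality of each section's content.

```python
import re

# Required "## <Section>" headings from the task prompt template.
REQUIRED_SECTIONS = [
    "Context", "Objective", "Inputs", "Requirements", "Constraints",
    "Expected Outputs", "Acceptance Criteria", "Verification Steps", "Handoff",
]

def lint_task_prompt(text: str) -> list[str]:
    """Return the required headings missing from a draft task prompt."""
    present = set(re.findall(r"^## (.+)$", text, flags=re.MULTILINE))
    return [s for s in REQUIRED_SECTIONS if s not in present]

draft = "# Task: Migrate notes\n## Context\n...\n## Objective\n...\n"
missing = lint_task_prompt(draft)
# `missing` lists every section the draft still needs before handoff.
```

If `missing` is non-empty, revise the task prompt before dispatching it to a worker.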
When the draft task describes any of the following — data migration, format conversion, source file deletion, or replacing one storage format with another — four additional [E] (Explicit) acceptance criteria are mandatory before the task scores as CLEAR-compliant.
[E] criteria for migration tasks

1. Content completeness assertion
An explicit check that every field/section in the source record appears in the destination. Structural validity (does it load?) is not sufficient.
Example: `assert set(source_sections) == set(output_item.sections.keys())`
[E]: MISSING — migration task must include a content completeness assertion (structural validity is not sufficient)
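A content completeness assertion can go beyond the section-set check above. The sketch below is one possible shape, assuming records are represented as section-name → content mappings (an assumption, not a required data model): it fails both when a section is dropped and when it survives structurally but arrives empty.

```python
def assert_content_complete(source_sections: dict, output_sections: dict) -> None:
    """Fail loudly if any source section is missing or emptied in the output."""
    missing = set(source_sections) - set(output_sections)
    assert not missing, f"sections lost in migration: {sorted(missing)}"
    # Structural presence alone is not sufficient: check the content survived.
    emptied = [k for k in source_sections
               if not str(output_sections.get(k, "")).strip()]
    assert not emptied, f"sections emptied in migration: {emptied}"

src = {"status": "active", "notes": "keep me"}
dst = {"status": "active", "notes": "keep me"}
assert_content_complete(src, dst)  # passes: nothing lost
```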
2. Real data sample test
The acceptance criteria must include a step run against ≥10 real production records chosen to include complex/edge-case files. Synthetic fixtures alone do not satisfy this criterion.
Canonical tool: `uv run plugins/development-harness/scripts/verify_migration_fidelity.py`
Example criterion: "Run verify_migration_fidelity.py against ≥10 real files. Report must show zero data loss."
[E]: MISSING — migration task must test against ≥10 real production records (synthetic fixtures do not satisfy this criterion)
3. Edge case enumeration
Before writing the migration, enumerate all distinct values of constrained fields from real data.
Example: `grep -h "^status:" real_data/*.md | sort -u`
Any value not handled by the target model is a bug to fix before migration, not after.
[E]: MISSING — migration task must enumerate all distinct constrained field values from real data before implementation
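The grep one-liner above can also be expressed in the migration code itself, so the enumeration is repeatable and testable. This is a sketch under the assumption that real data lives in front-matter-style `field: value` lines in `.md` files; adjust the parsing to your actual format.

```python
from pathlib import Path

def enumerate_field_values(data_dir: str, field: str = "status") -> set[str]:
    """Collect every distinct value of a constrained field across real files."""
    values = set()
    for path in Path(data_dir).glob("*.md"):
        for line in path.read_text().splitlines():
            if line.startswith(f"{field}:"):
                values.add(line.split(":", 1)[1].strip())
    return values

# Any value returned here that the target model does not handle is a bug
# to fix before migration, not after.
```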
4. Deletion gate
Any task that deletes source files must have deletion as a separate explicit criterion with stated condition: "Zero data loss confirmed on real data sample before deletion is permitted." Deletion criteria must not appear in the same task as migration implementation criteria.
[E]: MISSING — deletion gate required: source file deletion must be a separate criterion conditioned on zero-data-loss confirmation
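A deletion gate can be made mechanical rather than a matter of discipline. The sketch below assumes a hypothetical fidelity report shape (`records_checked`, `data_loss_count`) — your verification tool's actual output will differ — and refuses to delete unless the zero-data-loss condition on a ≥10-record real sample is confirmed.

```python
def delete_sources(sources: list[str], fidelity_report: dict) -> list[str]:
    """Delete source files only after zero data loss is confirmed on real data."""
    # Gate 1: the report must cover at least 10 real production records.
    if fidelity_report.get("records_checked", 0) < 10:
        raise RuntimeError("deletion blocked: fewer than 10 real records verified")
    # Gate 2: the report must show zero data loss.
    if fidelity_report.get("data_loss_count", 0) != 0:
        raise RuntimeError("deletion blocked: data loss detected")
    deleted = []
    for path in sources:
        # os.remove(path)  # the real deletion would happen here
        deleted.append(path)
    return deleted

report = {"records_checked": 12, "data_loss_count": 0}
deleted = delete_sources(["old/a.md", "old/b.md"], report)
```

Because the gate raises instead of warning, a failed fidelity check cannot be silently skipped by a worker agent.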
If any of these four criteria are absent when the draft describes migration, the scorer returns:
[E]: MISSING — migration task requires: content completeness check / real data sample test / edge case enumeration / deletion gate (see Migration and Data Conversion Tasks section)

When writing a plan that contains multiple worker tasks:
If given a draft task or plan content:
If not given input:
A task prompt is successful if a worker agent can: