Validates Conductor project artifacts for completeness, consistency, and correctness. Use after setup, when diagnosing issues, or before implementation to verify project context.
Optimize this skill with Tessl:
`npx tessl skill review --optimize ./skills/conductor-validator/SKILL.md`

Quality
Discovery
67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has a solid structure with explicit 'what' and 'when' clauses, which is its strongest aspect. However, it lacks specificity about what concrete validation actions are performed and what 'Conductor project artifacts' actually entails. The trigger terms are somewhat generic and could benefit from more natural user language and specific artifact types.
Suggestions
- Add specific concrete actions being validated, e.g., 'Checks configuration files, verifies dependency declarations, validates schema definitions' instead of the abstract 'completeness, consistency, and correctness'.
- Include more natural trigger terms users might say, such as 'check my Conductor config', 'verify project setup', 'debug missing files', or list specific artifact types like '.conductor files' or 'pipeline definitions'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain ('Conductor project artifacts') and describes general actions ('validates for completeness, consistency, and correctness'), but doesn't list specific concrete actions like checking config files, verifying dependencies, or validating schemas. | 2 / 3 |
| Completeness | Clearly answers both what ('Validates Conductor project artifacts for completeness, consistency, and correctness') and when ('Use after setup, when diagnosing issues, or before implementation to verify project context') with explicit trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'validate', 'diagnosing issues', 'project context', and 'setup', but 'Conductor project artifacts' is fairly niche jargon. Missing natural user phrases like 'check my project', 'something is wrong', 'verify configuration', or 'debug setup'. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of 'Conductor project artifacts' provides some specificity, but 'validates for completeness, consistency, and correctness' is quite generic and could overlap with linting, testing, or other validation skills. The trigger scenarios ('after setup', 'diagnosing issues') are also broad enough to conflict with other diagnostic skills. | 2 / 3 |
| Total | | 9 / 12 Passed |
Implementation
22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill suffers from a mix of auto-generated boilerplate and insufficient actionable content. While it provides some useful concrete elements (shell commands, pattern matching syntax), it lacks a coherent validation workflow with clear pass/fail criteria and error recovery steps. The generic instructions and limitations sections waste tokens without adding value.
Suggestions
- Replace the generic Instructions section with a concrete, sequenced validation workflow: (1) check directory exists, (2) verify required files, (3) validate file contents against expected patterns, (4) report results with specific pass/fail criteria.
- Add explicit validation checkpoints and error recovery — e.g., 'If conductor/index.md is missing, check if the project has been initialized with [specific command]'.
- Remove the boilerplate 'Use this skill when' / 'Do not use this skill when' sections that repeat 'check if conductor directory exists' — these add no information.
- Add concrete expected outputs for the validation commands so Claude knows what a healthy vs unhealthy project looks like.
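The sequenced workflow these suggestions describe could be sketched in POSIX shell roughly as follows. This is a hypothetical illustration, not the skill's actual logic: the `conductor/` root and `index.md` requirement are assumptions drawn from the review text, and a real version would check the skill's full file list.

```shell
# Hypothetical sketch: validate a Conductor project directory with
# explicit pass/fail criteria. The required-file list is an assumption.
validate_conductor() {
  root="$1"
  fail=0

  # (1) Check the directory exists; point at a recovery step if not.
  if [ ! -d "$root" ]; then
    echo "FAIL: $root is missing; initialize the project first"
    return 1
  fi

  # (2) Verify required files are present.
  for f in "$root/index.md"; do
    if [ ! -f "$f" ]; then
      echo "FAIL: required file $f is missing"
      fail=1
    fi
  done

  # (3) Validate file contents against an expected pattern
  #     (here: index.md must contain a top-level heading).
  if [ -f "$root/index.md" ] && ! grep -q '^# ' "$root/index.md"; then
    echo "FAIL: $root/index.md lacks a top-level heading"
    fail=1
  fi

  # (4) Report an explicit result the agent can branch on.
  if [ "$fail" -eq 0 ]; then
    echo "PASS: all validation checks succeeded"
  fi
  return "$fail"
}
```

Run as `validate_conductor conductor` from the project root, this prints either a single PASS line or an itemized FAIL list and returns a matching exit status, giving the agent the concrete healthy-vs-unhealthy signal the last suggestion asks for.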
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill contains significant filler and boilerplate that adds no value. The 'Use this skill when' and 'Do not use this skill when' sections repeat 'check if conductor directory exists' verbatim in a way that seems auto-generated and unhelpful. The Instructions section is generic advice ('Clarify goals, constraints') that Claude already knows. The Limitations section is also generic boilerplate. | 1 / 3 |
| Actionability | The initial shell commands for checking directory structure are concrete and executable, and the pattern matching section provides specific, useful marker formats. However, the Instructions section is vague ('Apply relevant best practices and validate outcomes') and doesn't provide concrete validation logic or what to do when checks fail. | 2 / 3 |
| Workflow Clarity | There is no clear sequenced workflow for validation. The skill lists some ls commands at the top but doesn't define what constitutes a pass/fail, what to do when files are missing, or how to sequence the validation steps. For a validation skill, missing feedback loops and error handling is a significant gap. | 1 / 3 |
| Progressive Disclosure | There is one reference to `resources/implementation-playbook.md` which is appropriate, and the content is organized into sections. However, the sections are poorly organized — the shell commands appear before any heading context, and the relationship between sections is unclear. | 2 / 3 |
| Total | | 6 / 12 Passed |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |