
project-development

This skill should be used when the user asks to "start an LLM project", "design batch pipeline", "evaluate task-model fit", "structure agent project", or mentions pipeline architecture, agent-assisted development, cost estimation, or choosing between LLM and traditional approaches.


Quality: 61% - Does it follow best practices?

Impact: Pending - No eval scenarios have been run.

Security (by Snyk): Advisory - Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/project-development/SKILL.md

Quality

Discovery: 37%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is inverted - it provides excellent trigger term coverage but completely omits what the skill actually does. A description should lead with capabilities ('Designs LLM project architectures, estimates costs, evaluates task-model fit...') and then specify when to use it. Currently, Claude would know when to select this skill but not what it accomplishes.

Suggestions

Add concrete capability statements at the beginning describing what the skill does (e.g., 'Guides LLM project architecture decisions, designs batch processing pipelines, estimates API costs, and evaluates whether tasks suit LLM vs traditional approaches').

Restructure to follow the pattern: '[What it does]. Use when [triggers].' rather than leading with triggers only.

Include specific outputs or deliverables the skill produces (e.g., 'produces architecture diagrams, cost breakdowns, and implementation roadmaps').
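Applying the suggested '[What it does]. Use when [triggers].' pattern, a revised frontmatter description could look like the sketch below. The wording is illustrative only, not the skill's actual metadata:

```yaml
---
name: project-development
description: >
  Guides LLM project architecture decisions: designs batch processing
  pipelines, estimates API costs, and evaluates whether a task suits an
  LLM or a traditional approach. Use when the user asks to "start an
  LLM project", "design batch pipeline", "evaluate task-model fit", or
  mentions pipeline architecture, agent-assisted development, or cost
  estimation.
---
```

This keeps the existing trigger-term coverage intact while leading with concrete capabilities, which addresses both the Specificity and Completeness findings.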

Specificity: 1 / 3

The description contains no concrete actions - only trigger phrases. It never explains what the skill actually does, only when to use it. Phrases like 'start an LLM project' and 'design batch pipeline' are triggers, not capabilities.

Completeness: 1 / 3

The description only answers 'when' (extensively) but completely fails to answer 'what does this do'. There is no explanation of the skill's capabilities, outputs, or actions - only trigger conditions.

Trigger Term Quality: 3 / 3

Excellent coverage of natural trigger terms users would say: 'start an LLM project', 'design batch pipeline', 'evaluate task-model fit', 'structure agent project', 'pipeline architecture', 'agent-assisted development', 'cost estimation', 'choosing between LLM and traditional approaches'.

Distinctiveness / Conflict Risk: 2 / 3

The trigger terms are fairly specific to LLM project planning, but without knowing what the skill actually does, it's unclear how it would differentiate from other LLM-related skills. Terms like 'agent-assisted development' could overlap with agent implementation skills.

Total: 7 / 12 (Passed)

Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, well-structured skill that provides actionable methodology for LLM project development. The workflow clarity and progressive disclosure are excellent, with clear sequencing and appropriate content organization. The main weakness is verbosity - the rationale explanations and some table content could be trimmed to respect token budget while maintaining the same actionable value.

Suggestions

Remove or condense the 'Rationale' columns in the task-model fit tables - Claude can infer why synthesis tasks suit LLMs without explicit explanation

Trim explanatory sentences like 'Do this because...' throughout - the instructions themselves are sufficient without justifying each recommendation


Conciseness: 2 / 3

The skill is comprehensive but includes some unnecessary explanation (e.g., explaining why file systems work for state management, rationale columns in tables that Claude could infer). The tables and structured content help, but the document could be tightened by 20-30% without losing actionable value.

Actionability: 3 / 3

Provides concrete, executable guidance throughout: specific pipeline structure (acquire -> prepare -> process -> parse -> render), file system patterns with exact directory structures, cost estimation formulas, and real examples with specific metrics ($58 cost, 15 workers, etc.). The project planning template is step-by-step actionable.
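The skill's own cost-estimation formulas are not reproduced in this review, but the pattern it describes can be sketched as a simple pre-run estimate. The function name and the prices in the example are assumptions for illustration, not the skill's actual figures:

```python
def estimate_batch_cost(
    num_docs: int,
    input_tokens_per_doc: int,
    output_tokens_per_doc: int,
    price_in_per_mtok: float,   # USD per million input tokens (assumed)
    price_out_per_mtok: float,  # USD per million output tokens (assumed)
) -> float:
    """Rough pre-run cost estimate for a batch LLM pipeline."""
    input_cost = num_docs * input_tokens_per_doc * price_in_per_mtok / 1_000_000
    output_cost = num_docs * output_tokens_per_doc * price_out_per_mtok / 1_000_000
    return input_cost + output_cost

# Example: 10,000 docs, ~4k input / ~1k output tokens each, at assumed rates
cost = estimate_batch_cost(10_000, 4_000, 1_000, 3.0, 15.0)
print(f"${cost:,.2f}")
```

Running an estimate like this before committing to a batch run is exactly the kind of validation checkpoint the review credits the skill for.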

Workflow Clarity: 3 / 3

Multi-step processes are clearly sequenced with explicit validation checkpoints. The manual prototype step serves as validation before automation, the pipeline stages have clear boundaries, and the project planning template enforces ordered execution with validation at each step. The file system state pattern provides natural checkpointing.
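The "file system state pattern" mentioned above can be sketched minimally: each stage writes one output file per input, and the presence of an output file acts as the checkpoint. The directory layout and function name here are illustrative assumptions, not the skill's exact structure:

```python
from pathlib import Path

def process_stage(in_dir: Path, out_dir: Path, transform) -> int:
    """Run one pipeline stage, using output files as checkpoints.

    Items whose output already exists are skipped, so an interrupted
    run can simply be restarted without redoing completed work.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    processed = 0
    for src in sorted(in_dir.glob("*.txt")):
        dst = out_dir / src.name
        if dst.exists():  # checkpoint hit: this item is already done
            continue
        dst.write_text(transform(src.read_text()))
        processed += 1
    return processed
```

Rerunning the same stage after a crash processes only the remaining items, which is the natural checkpointing the review refers to.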

Progressive Disclosure: 3 / 3

Well-structured with clear overview sections, detailed topics appropriately separated, and one-level-deep references to related skills and external resources. The References section clearly signals when to read each linked document. Content is appropriately split between inline guidance and referenced materials.

Total: 11 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: muratcankoylan/Agent-Skills-for-Context-Engineering (Reviewed)

