
project-development

This skill should be used when the user asks to "start an LLM project", "design batch pipeline", "evaluate task-model fit", "structure agent project", or mentions pipeline architecture, agent-assisted development, cost estimation, or choosing between LLM and traditional approaches.

Install with Tessl CLI

```shell
npx tessl i github:muratcankoylan/Agent-Skills-for-Context-Engineering --skill project-development
```

Overall score: 64%

Does it follow best practices?


Discovery: 37%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is essentially a list of trigger conditions without any explanation of what the skill actually does. While it excels at providing natural keywords users might say, it completely fails to describe the skill's capabilities, making it impossible for Claude to understand what actions this skill enables.

Suggestions

- Add a clear 'what' statement at the beginning describing concrete actions (e.g., 'Guides LLM project architecture decisions, designs batch processing pipelines, evaluates task-model fit, and estimates API costs.').
- Restructure to follow the pattern '[Concrete capabilities]. Use when [trigger conditions]' rather than leading with triggers only.
- Include specific deliverables or outputs the skill produces (e.g., 'generates architecture diagrams', 'produces cost breakdowns', 'recommends model selections').

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description contains no concrete actions, only trigger phrases. It never explains what the skill actually does (e.g., 'designs pipelines', 'estimates costs', 'evaluates models'). The capabilities are only implied through the trigger terms. | 1 / 3 |
| Completeness | The description only addresses 'when' (trigger conditions) but completely omits 'what' the skill does. There is no explanation of the actual capabilities or actions this skill performs. | 1 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'start an LLM project', 'design batch pipeline', 'evaluate task-model fit', 'structure agent project', plus domain keywords like 'pipeline architecture', 'agent-assisted development', 'cost estimation'. | 3 / 3 |
| Distinctiveness / Conflict Risk | The LLM/agent project focus provides some distinctiveness, but terms like 'cost estimation' and 'pipeline architecture' could overlap with general project planning or infrastructure skills. The lack of concrete actions makes it harder to distinguish from related skills. | 2 / 3 |

Total: 7 / 12 (Passed)

Implementation: 70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive methodology skill with excellent structure and workflow clarity, but suffers from verbosity and lack of executable code examples. The conceptual framework is solid and well-organized, but the skill explains many concepts Claude already understands and provides templates/checklists rather than copy-paste ready implementations.

Suggestions

- Reduce explanatory text about LLM characteristics (the tables explaining why tasks do or don't fit LLMs); Claude knows this, so focus on the decision framework instead.
- Add executable Python code for at least one complete pipeline stage (e.g., the file system state machine pattern with an actual implementation).
- Replace the abstract 'Example prompt structure' with a complete, real-world prompt example that could be used directly.
- Condense the 'Core Concepts' section by removing explanations of why patterns work and focusing on how to implement them.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill contains useful information but is verbose in places, explaining concepts Claude likely knows (e.g., what makes tasks suited for LLMs, basic pipeline concepts). Tables and lists add structure, but some sections could be significantly tightened. | 2 / 3 |
| Actionability | Provides conceptual guidance and some concrete examples (file structure, prompt format), but lacks executable code. The pipeline stages are described abstractly rather than with copy-paste-ready implementations. The 'Project Planning Template' is a checklist rather than actionable commands. | 2 / 3 |
| Workflow Clarity | The 5-stage pipeline (acquire → prepare → process → parse → render) is clearly sequenced with explicit rationale for each stage. The file system state machine pattern provides clear validation checkpoints (file existence gates execution), and the guidance on re-running stages is explicit. | 3 / 3 |
| Progressive Disclosure | Well-structured with clear sections, appropriate references to related skills (tool-design, multi-agent-patterns, evaluation), and external references at the end. Links to detailed case studies and pipeline patterns are clearly signaled and one level deep. | 3 / 3 |

Total: 10 / 12 (Passed)
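The file system state machine pattern praised above (file existence gates execution) can be sketched roughly as follows. This is a minimal illustration of the idea, not the skill's actual code; the stage names follow the review's acquire → prepare → process → parse → render sequence, while the file names and `run_stage` body are hypothetical:

```python
from pathlib import Path
import tempfile

# Each stage reads its predecessor's output file and writes its own.
# File existence is the state: a stage runs only when its output file
# is missing, so re-running the pipeline resumes where it left off,
# and deleting a stage's output forces just that stage to run again.
STAGES = [
    ("acquire", None, "raw.txt"),
    ("prepare", "raw.txt", "prepared.txt"),
    ("process", "prepared.txt", "processed.txt"),
    ("parse", "processed.txt", "parsed.txt"),
    ("render", "parsed.txt", "report.txt"),
]

def run_stage(name: str, text: str) -> str:
    """Placeholder for real stage work (e.g., an LLM call per stage)."""
    return f"{name}({text})"

def run_pipeline(workdir: Path) -> None:
    for name, inp, out in STAGES:
        out_path = workdir / out
        if out_path.exists():
            continue  # output present: stage already complete, skip it
        text = (workdir / inp).read_text() if inp else "input"
        out_path.write_text(run_stage(name, text))

workdir = Path(tempfile.mkdtemp())
run_pipeline(workdir)
print((workdir / "report.txt").read_text())
```

The design choice the review highlights is that the file system itself is the checkpoint log: there is no in-memory pipeline state to lose, and each intermediate file doubles as a validation artifact an agent (or a human) can inspect between stages.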

Validation: 87%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Checks passed: 14 / 16

Validation for skill structure

| Criterion | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |

Total: 14 / 16 (Passed)
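Both warnings point at the SKILL.md frontmatter. A plausible fix might look like the sketch below, assuming the validator expects `metadata` to be a mapping carrying a `version` and `license` to hold an SPDX identifier; the exact field shapes are an assumption on my part, not something the report confirms:

```yaml
---
name: project-development
license: MIT        # SPDX identifier; addresses the license_field warning
metadata:           # a mapping rather than a scalar; addresses metadata_version
  version: "1.0.0"
---
```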


