project-development

tessl i github:muratcankoylan/Agent-Skills-for-Context-Engineering --skill project-development
github.com/muratcankoylan/Agent-Skills-for-Context-Engineering

This skill should be used when the user asks to "start an LLM project", "design batch pipeline", "evaluate task-model fit", "structure agent project", or mentions pipeline architecture, agent-assisted development, cost estimation, or choosing between LLM and traditional approaches.

Review Score: 64%
Validation Score: 14/16
Implementation Score: 70%
Activation Score: 37%

Validation

Total: 14/16
Score: Passed

Criteria:
  • metadata_version: 'metadata' field is not a dictionary
  • license_field: 'license' field is missing

Implementation

Suggestions: 4
Score: 70%

Overall Assessment

This is a comprehensive methodology skill with excellent structure and workflow clarity, but it suffers from verbosity and a lack of executable code examples. The conceptual framework is solid and well organized, but the skill explains many concepts Claude already understands and provides templates and checklists rather than copy-paste-ready implementations.

Suggestions

  • Reduce the explanatory text about LLM characteristics (the tables explaining why tasks do or do not fit LLMs); Claude already knows this, so focus on the decision framework instead
  • Add executable Python code for at least one complete pipeline stage (e.g., the file system state machine pattern with an actual implementation; a hedged sketch follows this list)
  • Replace the abstract 'Example prompt structure' with a complete, real-world prompt example that could be used directly
  • Condense the 'Core Concepts' section by removing explanations of why patterns work and focusing on how to implement them
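
To make the second and third suggestions concrete, here is a minimal sketch of what one 'process' stage could look like under the file system state machine pattern. Everything in it is an assumption for illustration: the directory layout, the triage prompt, and the call_model() placeholder are not taken from the skill.

```python
"""Minimal sketch of a single pipeline stage gated by the file system:
the stage runs only for inputs whose output file does not exist yet, so
re-running the pipeline resumes where it left off. All paths, the prompt,
and call_model() are hypothetical placeholders, not taken from the skill."""
import json
from pathlib import Path

PREPARED_DIR = Path("data/prepared")    # outputs of the 'prepare' stage
PROCESSED_DIR = Path("data/processed")  # outputs of this 'process' stage

# A concrete prompt rather than an abstract 'Example prompt structure'.
PROMPT_TEMPLATE = (
    "You are a support-ticket triage assistant.\n"
    "Classify the ticket below into exactly one of: billing, bug, "
    "feature_request, other.\n"
    "Return JSON with keys 'category' and 'reason'.\n\n"
    "Ticket:\n{ticket_text}"
)


def call_model(prompt: str) -> str:
    """Placeholder for the real LLM API call; wire up your own client here."""
    raise NotImplementedError


def process_stage() -> None:
    PROCESSED_DIR.mkdir(parents=True, exist_ok=True)
    for input_path in sorted(PREPARED_DIR.glob("*.json")):
        output_path = PROCESSED_DIR / input_path.name
        if output_path.exists():   # file existence gates execution
            continue               # already processed on a previous run
        ticket = json.loads(input_path.read_text())
        prompt = PROMPT_TEMPLATE.format(ticket_text=ticket["text"])
        raw_response = call_model(prompt)
        # Write via a temp file so a crash never leaves a half-written output.
        tmp_path = output_path.with_suffix(".tmp")
        tmp_path.write_text(raw_response)
        tmp_path.rename(output_path)


if __name__ == "__main__":
    process_stage()
```
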
Dimension scores:

Conciseness (2/3): The skill contains useful information but is verbose in places, explaining concepts Claude likely knows (e.g., what makes tasks suited for LLMs, basic pipeline concepts). Tables and lists add structure, but some sections could be significantly tightened.

Actionability (2/3): Provides conceptual guidance and some concrete examples (file structure, prompt format), but lacks executable code. The pipeline stages are described abstractly rather than with copy-paste-ready implementations. The 'Project Planning Template' is a checklist rather than actionable commands.

Workflow Clarity (3/3): The 5-stage pipeline (acquire → prepare → process → parse → render) is clearly sequenced with explicit rationale for each stage. The file system state machine pattern provides clear validation checkpoints (file existence gates execution), and the guidance on re-running stages is explicit.
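
As a hedged illustration of that sequencing, the sketch below chains the five stages so that a non-empty output directory acts as the checkpoint that lets a stage be skipped on re-runs. The stage names come from the review above; the directory layout and the empty stage bodies are assumptions, not the skill's own code.

```python
"""Sketch of the acquire -> prepare -> process -> parse -> render sequence
with file-existence checkpoints between stages. Stage names follow the
review; directories and the (empty) stage bodies are assumptions."""
from pathlib import Path
from typing import Callable


def run_stage(name: str, out_dir: Path, stage_fn: Callable[[Path], None]) -> None:
    """Run a stage only if its output directory is missing or empty."""
    if out_dir.exists() and any(out_dir.iterdir()):
        print(f"skip {name}: outputs already present in {out_dir}")
        return
    out_dir.mkdir(parents=True, exist_ok=True)
    stage_fn(out_dir)


def acquire(out: Path) -> None: ...  # collect raw inputs
def prepare(out: Path) -> None: ...  # clean and chunk inputs
def process(out: Path) -> None: ...  # LLM calls over prepared items
def parse(out: Path) -> None: ...    # validate and structure raw responses
def render(out: Path) -> None: ...   # produce the final output/report


PIPELINE = [
    ("acquire", Path("data/raw"), acquire),
    ("prepare", Path("data/prepared"), prepare),
    ("process", Path("data/processed"), process),
    ("parse", Path("data/parsed"), parse),
    ("render", Path("out"), render),
]

if __name__ == "__main__":
    for name, out_dir, fn in PIPELINE:
        run_stage(name, out_dir, fn)
```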

Progressive Disclosure (3/3): Well structured with clear sections, appropriate references to related skills (tool-design, multi-agent-patterns, evaluation), and external references at the end. Links to detailed case studies and pipeline patterns are clearly signaled and one level deep.

Activation

Suggestions: 3
Score: 37%

Overall Assessment

This description is essentially a list of trigger conditions without any explanation of what the skill actually does. While it excels at providing natural keywords users might say, it completely fails to describe the skill's capabilities, making it impossible for Claude to understand what actions this skill enables.

Suggestions

  • Add a clear 'what' statement at the beginning describing concrete actions (e.g., 'Guides LLM project architecture decisions, designs batch processing pipelines, evaluates task-model fit, and estimates API costs.')
  • Restructure to follow the pattern: '[Concrete capabilities]. Use when [trigger conditions]' rather than leading with triggers only.
  • Include specific deliverables or outputs the skill produces (e.g., 'generates architecture diagrams', 'produces cost breakdowns', 'recommends model selections').

Dimension scores:

Specificity (1/3): The description contains no concrete actions, only trigger phrases. It never explains what the skill actually does (e.g., 'designs pipelines', 'estimates costs', 'evaluates models'). The capabilities are only implied through the trigger terms.

Completeness (1/3): The description only addresses 'when' (trigger conditions) but completely omits 'what' the skill does. There is no explanation of the actual capabilities or actions this skill performs.

Trigger Term Quality (3/3): Excellent coverage of natural trigger terms users would say: 'start an LLM project', 'design batch pipeline', 'evaluate task-model fit', 'structure agent project', plus domain keywords like 'pipeline architecture', 'agent-assisted development', 'cost estimation'.

Distinctiveness / Conflict Risk (2/3): The LLM/agent project focus provides some distinctiveness, but terms like 'cost estimation' and 'pipeline architecture' could overlap with general project planning or infrastructure skills. The lack of concrete actions makes it harder to distinguish from related skills.