
incremental-python-programmer

Takes a Python repository and natural language feature description as input, implements the feature with proper code placement, generates comprehensive tests, and ensures all tests pass. Use when Claude needs to: (1) Add new features to existing Python projects, (2) Implement functions, classes, or modules based on requirements, (3) Modify existing code to add functionality, (4) Generate unit and integration tests for new code, (5) Fix failing tests after implementation, (6) Ensure code follows existing patterns and conventions.

69

Quality: 56%
Does it follow best practices?

Impact: 90% (1.05x)
Average score across 3 eval scenarios

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/incremental-python-programmer/SKILL.md

Quality

Discovery: 77%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured description with strong specificity and completeness, featuring an explicit 'Use when' clause with detailed trigger scenarios. The main weaknesses are moderate trigger term coverage (missing common user phrasings) and potential overlap with other coding-related skills due to the broad scope of capabilities described.

Suggestions

Add more natural user trigger terms like 'add code', 'write tests', 'pytest', 'implement feature', 'TDD', '.py files' to improve discoverability

Consider narrowing the scope or adding distinguishing qualifiers to reduce potential conflicts with general Python coding or testing skills
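As an illustrative sketch of the first suggestion, the skill's frontmatter description could fold in the reviewer's missing trigger terms. The fragment below is hypothetical, not the skill's actual metadata:

```markdown
---
name: incremental-python-programmer
description: >
  Takes a Python repository and a natural language feature description,
  implements the feature in the project's .py files with proper code
  placement, writes pytest unit and integration tests, and fixes failing
  tests until the suite passes. Use when the user wants to add code,
  implement a feature, write tests, follow a TDD workflow, or extend an
  existing Python project.
---
```

Every added trigger term ('add code', 'write tests', 'pytest', 'implement', 'TDD', '.py files') comes directly from the reviewer's own list above.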

Specificity: 3 / 3
Lists multiple specific concrete actions: 'implements the feature with proper code placement', 'generates comprehensive tests', 'ensures all tests pass', plus a detailed numbered list of capabilities including 'Add new features', 'Implement functions, classes, or modules', 'Modify existing code', 'Generate unit and integration tests', and 'Fix failing tests'.

Completeness: 3 / 3
Clearly answers both what ('Takes a Python repository and natural language feature description as input, implements the feature...') and when, with an explicit 'Use when Claude needs to:' clause followed by six specific trigger scenarios.

Trigger Term Quality: 2 / 3
Includes relevant terms like 'Python repository', 'feature', 'tests', 'functions', 'classes', and 'modules', but misses common user variations like 'add code', 'write tests', 'pytest', '.py files', 'implement', and 'coding'. Technical, but not comprehensive, coverage of natural user language.

Distinctiveness / Conflict Risk: 2 / 3
While Python-specific, the broad scope of 'Add new features', 'Modify existing code', and 'Generate tests' could overlap with general coding skills, Python testing skills, or code review skills. The combination is somewhat distinctive, but individual triggers could conflict.

Total: 10 / 12

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive but severely over-engineered for Claude's capabilities. It explains basic Python concepts (docstrings, type hints, class structure) that Claude already knows well, wasting significant token budget. The workflow is present but lacks tight validation loops, and much content should be moved to reference files.

Suggestions

Reduce content by 70%+ by removing explanations of basic Python concepts (docstrings, type hints, class structure, pytest basics) that Claude already knows

Move 'Common Scenarios', 'Troubleshooting', and 'Best Practices' sections to reference files, keeping only the core 8-step workflow in SKILL.md

Add explicit validation gates: 'STOP: Do not proceed to step 6 until implementation compiles without errors' and 'STOP: Do not proceed to step 8 until all tests pass'
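The suggested gates could look like the following SKILL.md fragment. The step numbers mirror the review's wording; the surrounding steps are otherwise hypothetical:

```markdown
5. Implement the feature.

   STOP: Do not proceed to step 6 until the implementation compiles
   without errors (e.g. `python -m py_compile` on each changed file).

6. Generate tests.
7. Run the test suite.

   STOP: Do not proceed to step 8 until all tests pass. If any test
   fails, fix the implementation and re-run the suite.

8. Summarize the change.
```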

Replace generic code templates with guidance on how to discover and follow the specific project's existing patterns
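One way to act on the last suggestion: instead of shipping generic templates, the skill could instruct the agent to detect conventions from the repository itself. A minimal sketch, where the function name and heuristic are illustrative rather than part of the skill:

```python
from pathlib import Path


def detect_test_convention(repo: Path) -> str:
    """Guess the project's test-file naming convention by counting
    existing files, rather than imposing a generic template."""
    prefix_style = len(list(repo.rglob("test_*.py")))
    suffix_style = len(list(repo.rglob("*_test.py")))
    # Prefer whichever style the project already uses most.
    if prefix_style >= suffix_style:
        return "test_<module>.py"
    return "<module>_test.py"
```

The same idea extends to import style, docstring format, or fixture placement: discover the dominant pattern, then follow it.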

Conciseness: 1 / 3
Extremely verbose, with extensive explanations of concepts Claude already knows (how to write docstrings, basic Python patterns, what type hints are). The document is over 400 lines when the core workflow could be expressed in under 100 lines.

Actionability: 2 / 3
Provides concrete code examples and executable patterns, but many examples are generic templates rather than project-specific guidance. The bash command references a script that may not exist, and examples are illustrative rather than copy-paste ready for real scenarios.

Workflow Clarity: 2 / 3
Steps are clearly numbered and sequenced, but validation checkpoints are weak. The 'Run Tests' step lacks explicit feedback loops for fixing failures before proceeding, and there is no clear 'stop and verify' gate between implementation and test generation.
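The feedback loop the review finds missing can be sketched as a generic gate. Here `run_tests` and `apply_fix` are hypothetical placeholders for the skill's actual test and repair steps:

```python
from typing import Callable, Tuple


def iterate_until_green(
    run_tests: Callable[[], Tuple[bool, str]],
    apply_fix: Callable[[str], None],
    max_attempts: int = 5,
) -> bool:
    """Run tests, feed failure output back into a fix step, and
    report success only once the whole suite passes."""
    for _ in range(max_attempts):
        passed, output = run_tests()
        if passed:
            return True       # gate cleared: safe to proceed to the next step
        apply_fix(output)     # otherwise, repair using the failure output
    return False              # escalate after exhausting attempts
```

Wiring this into the workflow gives the 'stop and verify' gate the reviewer asks for: the agent cannot move past the test step while `iterate_until_green` returns False.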

Progressive Disclosure: 2 / 3
References external files (implementation-patterns.md, testing-strategies.md) appropriately, but the main document contains too much inline content that should be in those reference files. The 'Common Scenarios' and 'Troubleshooting' sections bloat the main skill file.

Total: 7 / 12

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

skill_md_line_count: Warning
SKILL.md is long (547 lines); consider splitting into references/ and linking.

Total: 10 / 11

Passed

Repository: ArabelaTso/Skills-4-SE (Reviewed)

