Takes a Python repository and natural language feature description as input, implements the feature with proper code placement, generates comprehensive tests, and ensures all tests pass. Use when Claude needs to: (1) Add new features to existing Python projects, (2) Implement functions, classes, or modules based on requirements, (3) Modify existing code to add functionality, (4) Generate unit and integration tests for new code, (5) Fix failing tests after implementation, (6) Ensure code follows existing patterns and conventions.
Install with Tessl CLI
npx tessl i github:ArabelaTso/Skills-4-SE --skill incremental-python-programmer79
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
77%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-structured description with strong specificity and completeness, featuring an explicit 'Use when' clause with detailed trigger scenarios. The main weaknesses are moderate trigger term coverage (missing common user phrasings) and potential overlap with other coding/testing skills due to its broad scope.
Suggestions
Add more natural user trigger terms like 'add code', 'write tests', 'pytest', 'implement feature', 'TDD', '.py' to improve discoverability
Consider adding distinguishing phrases that differentiate this from general Python coding skills, such as 'end-to-end feature implementation' or 'full-cycle development with testing'
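Taken together, the two suggestions above might look like the following revised SKILL.md frontmatter. This is a hypothetical sketch only — the name and wording are illustrative, not the skill's actual metadata:

```markdown
---
name: incremental-python-programmer
description: >
  End-to-end feature implementation for Python repositories: add code,
  implement a feature from a natural language description, write tests
  with pytest, and run the suite until it passes. Use when asked to
  "implement a feature", "add code", "write tests", follow TDD, or
  modify .py files with full-cycle development and testing.
---
```

Note how the sketch folds in both the missing trigger terms ('add code', 'write tests', 'pytest', 'TDD', '.py') and a distinguishing phrase ('end-to-end', 'full-cycle development and testing').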
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'implements the feature with proper code placement', 'generates comprehensive tests', 'ensures all tests pass', plus a detailed numbered list of capabilities including 'Add new features', 'Implement functions, classes, or modules', 'Modify existing code', 'Generate unit and integration tests', 'Fix failing tests'. | 3 / 3 |
| Completeness | Clearly answers both what ('Takes a Python repository and natural language feature description as input, implements the feature...') AND when, with an explicit 'Use when Claude needs to:' clause followed by six specific trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Includes relevant terms like 'Python repository', 'feature', 'tests', 'functions', 'classes', 'modules', but missing common user variations like 'add code', 'write tests', 'pytest', '.py files', 'implement', 'coding'. Technical but not comprehensive coverage of natural user language. | 2 / 3 |
| Distinctiveness / Conflict Risk | While Python-specific, the broad scope ('Add new features', 'Modify existing code', 'Generate tests') could overlap with general coding skills, test-writing skills, or Python-specific utilities. The combination of feature implementation and testing is somewhat distinctive, but the individual triggers are common. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation
72%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides comprehensive, actionable guidance for implementing Python features with good code examples and clear structure. However, it's somewhat verbose for Claude's capabilities and could benefit from tighter validation loops in the workflow. The progressive disclosure is well-handled with appropriate references to detailed materials.
Suggestions
Reduce verbosity by removing explanations of basic Python concepts (docstrings, type hints, pytest basics) that Claude already knows
Integrate validation checkpoints directly into the implementation workflow (e.g., 'After Step 4, run tests immediately; only proceed to Step 5 when tests pass')
Consolidate the 'Common Scenarios' section into the main workflow or move to a reference file, as it largely repeats the main content
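The second suggestion — integrating validation checkpoints into the workflow — could be sketched as follows. Step numbers, file paths, and commands below are illustrative assumptions, not taken from the skill's actual SKILL.md:

```markdown
4. Implement the feature in the module identified in Step 2.
5. **Checkpoint:** run `pytest tests/test_feature.py -x` immediately.
   - If any test fails, fix the implementation and re-run.
   - Do not proceed to Step 6 until every test passes.
6. Run the full suite (`pytest`) to catch regressions before finishing.
```

This folds the separate 'Fix Failing Tests' step into the workflow as a gate, so the agent validates after each implementation step instead of deferring all fixes to the end.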
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is moderately verbose with some unnecessary explanations (e.g., explaining what docstrings are — basic Python concepts Claude knows). The examples are helpful but could be more condensed, and sections like 'Common Scenarios' repeat patterns already shown. | 2 / 3 |
| Actionability | Provides fully executable code examples throughout, including complete function implementations, test classes with pytest fixtures, and specific bash commands. Examples are copy-paste ready with proper imports and type hints. | 3 / 3 |
| Workflow Clarity | Steps are clearly numbered and sequenced, but validation checkpoints are weak. The workflow mentions 'run tests' but lacks explicit feedback loops for fixing issues before proceeding. Step 7 (Fix Failing Tests) is separate rather than integrated as a validation checkpoint. | 2 / 3 |
| Progressive Disclosure | Good structure with clear references to external files (implementation-patterns.md, testing-strategies.md) that are one level deep and well-signaled. Content is appropriately organized with sections for different concerns. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation: 10 / 11 passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (547 lines); consider splitting into references/ and linking | Warning |
| Total | 10 / 11 Passed | |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.