Takes a Python repository and a natural-language feature description as input, implements the feature with proper code placement, generates comprehensive tests, and ensures all tests pass. Use when Claude needs to: (1) Add new features to existing Python projects, (2) Implement functions, classes, or modules based on requirements, (3) Modify existing code to add functionality, (4) Generate unit and integration tests for new code, (5) Fix failing tests after implementation, (6) Ensure code follows existing patterns and conventions.
Install with Tessl CLI:

```bash
npx tessl i github:ArabelaTso/Skills-4-SE --skill incremental-python-programmer79
```
Implement new features in Python repositories with automated testing and validation.
Parse the request:
Clarify if needed:
Automated analysis:

```bash
python scripts/analyze_repo_structure.py <repo_path>
```

Manual analysis:
Key questions:
See implementation-patterns.md for common patterns.
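The bundled `scripts/analyze_repo_structure.py` is not reproduced here; as a rough sketch of what such an analysis typically does (all names hypothetical, not the script's actual contents), it walks the tree and classifies Python files into packages, modules, and tests:

```python
import os

def analyze_repo_structure(repo_path: str) -> dict:
    """Collect a rough map of packages, source modules, and test files."""
    packages, modules, tests = [], [], []
    for root, dirs, files in os.walk(repo_path):
        # Skip common non-source directories in place
        dirs[:] = [d for d in dirs if d not in {".git", "__pycache__", ".venv"}]
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(root, name)
            if name == "__init__.py":
                packages.append(root)
            elif name.startswith("test_") or name.endswith("_test.py"):
                tests.append(path)
            else:
                modules.append(path)
    return {"packages": packages, "modules": modules, "tests": tests}
```

The returned map is enough to answer the placement questions above: which package a new feature belongs in, and where its tests should live.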
Determine implementation approach:
For new functions:
For new classes:
For new modules:
For modifications:
Follow this order:
Step 1: Add necessary imports
```python
# Standard library
import os
from typing import List, Dict, Optional

# Third-party (add to requirements.txt if new)
import requests

# Local imports
from .existing_module import helper_function
```

Step 2: Implement core functionality

Step 3: Integrate with existing code

Update `__init__.py` if needed.

Example: Adding a new function
```python
def new_feature_function(param1: str, param2: int = 10) -> dict:
    """
    Brief description of what the function does.

    Args:
        param1: Description of param1
        param2: Description of param2 (default: 10)

    Returns:
        Dictionary containing results

    Raises:
        ValueError: If param1 is empty
        TypeError: If param2 is not an integer
    """
    # Validate inputs
    if not param1:
        raise ValueError("param1 cannot be empty")

    # Implementation
    result = {
        "input": param1,
        "multiplier": param2,
        "output": len(param1) * param2,
    }
    return result
```

Example: Adding a new class
```python
class NewFeatureClass:
    """
    Class for handling new feature functionality.

    Attributes:
        attribute1: Description of attribute1
        attribute2: Description of attribute2
    """

    def __init__(self, param1: str, param2: int = 10):
        """
        Initialize NewFeatureClass.

        Args:
            param1: Description
            param2: Description (default: 10)
        """
        self.attribute1 = param1
        self.attribute2 = param2

    def method1(self) -> str:
        """Method description."""
        return f"{self.attribute1}_{self.attribute2}"

    def method2(self, value: int) -> int:
        """Method description."""
        return value * self.attribute2
```

Example: Modifying existing code
```python
# Before
def existing_function(param: str) -> str:
    return param.upper()

# After - adding new parameter with default
def existing_function(param: str, enable_new_feature: bool = False) -> str:
    result = param.upper()
    if enable_new_feature:
        result = apply_new_transformation(result)
    return result

def apply_new_transformation(text: str) -> str:
    """New feature logic."""
    return f"[TRANSFORMED] {text}"
```

See implementation-patterns.md for detailed patterns.
Identify test requirements:
Generate unit tests:
```python
import pytest
from module import new_feature_function, NewFeatureClass

class TestNewFeatureFunction:
    """Test suite for new_feature_function."""

    def test_basic_functionality(self):
        """Test basic functionality."""
        result = new_feature_function("test", 5)
        assert result["input"] == "test"
        assert result["multiplier"] == 5
        assert result["output"] == 20

    def test_default_parameter(self):
        """Test with default parameter."""
        result = new_feature_function("hello")
        assert result["multiplier"] == 10
        assert result["output"] == 50

    def test_empty_string_raises_error(self):
        """Test that empty string raises ValueError."""
        with pytest.raises(ValueError, match="cannot be empty"):
            new_feature_function("", 5)

    @pytest.mark.parametrize("input_str,multiplier,expected", [
        ("a", 1, 1),
        ("ab", 2, 4),
        ("abc", 3, 9),
    ])
    def test_various_inputs(self, input_str, multiplier, expected):
        """Test with various inputs."""
        result = new_feature_function(input_str, multiplier)
        assert result["output"] == expected

class TestNewFeatureClass:
    """Test suite for NewFeatureClass."""

    @pytest.fixture
    def instance(self):
        """Create instance for testing."""
        return NewFeatureClass("test", 5)

    def test_initialization(self, instance):
        """Test class initialization."""
        assert instance.attribute1 == "test"
        assert instance.attribute2 == 5

    def test_method1(self, instance):
        """Test method1."""
        result = instance.method1()
        assert result == "test_5"

    def test_method2(self, instance):
        """Test method2."""
        result = instance.method2(3)
        assert result == 15
```

Generate integration tests if needed:
```python
def test_integration_with_existing_code():
    """Test that new feature integrates with existing code."""
    # Setup
    data = prepare_test_data()

    # Execute workflow using new feature
    result = existing_workflow(data, use_new_feature=True)

    # Verify
    assert result["status"] == "success"
    assert "new_feature_output" in result
```

See testing-strategies.md for comprehensive testing patterns.
Execute test suite:
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=module --cov-report=term-missing

# Run specific test file
pytest tests/test_new_feature.py

# Run with verbose output
pytest -v
```

Check results:
If tests fail, diagnose and fix:
Common issues:
1. Assertion failures
2. Import errors (check `__init__.py` exports)
3. Type errors
4. Logic errors (step through with `pytest --pdb`)

Example fix:
```python
# Failing test
def test_calculation():
    result = calculate(5, 3)
    assert result == 15  # AssertionError: assert 8 == 15

# Diagnosis: Expected value is wrong
# Fix: Update test expectation
def test_calculation():
    result = calculate(5, 3)
    assert result == 8  # Corrected
```

See testing-strategies.md for test fixing strategies.
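The opposite case also occurs: the test encodes the real requirement and the implementation is wrong. If `calculate` was actually specified to multiply (a hypothetical variant of the example above), the fix belongs in the implementation, not the test:

```python
# Before: implementation added the operands by mistake
def calculate(a: int, b: int) -> int:
    return a + b  # returns 8 for (5, 3)

# After: fix the implementation to match the specified behavior
def calculate(a: int, b: int) -> int:
    return a * b

def test_calculation():
    result = calculate(5, 3)
    assert result == 15  # now passes
```

Deciding which side to fix requires checking the original requirement, never just making the assertion match the current output.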
Final verification:
Documentation checklist:
Summary to provide:
Files modified/created:
Implementation summary:
Tests added:
Test results:
Functions:
Classes:
Modules:
`__init__.py` for public API

Follow existing conventions:
Type hints:
Use the `typing` module.

Error handling:
Test coverage:
Test organization:
Test quality:
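The conventions above, type hints from `typing` plus explicit error handling, combined in one small sketch (names hypothetical):

```python
from typing import Dict, Optional

def find_user_email(users: Dict[str, str], name: str) -> Optional[str]:
    """Return the email for name, or None if the user is unknown.

    Raises:
        ValueError: If name is empty.
    """
    if not name:
        raise ValueError("name cannot be empty")
    return users.get(name)
```

Returning `Optional[str]` for the "not found" case while raising for invalid input keeps the two failure modes distinct for callers.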
Request: "Add a function to validate email addresses"
Steps:
1. Choose a location (e.g., `validators.py`)
2. Implement `validate_email()` function
3. Add `test_validate_email()` with various cases

Request: "Create a UserManager class to handle user operations"
Steps:
1. Implement `UserManager` class with methods
2. Add `TestUserManager` class with method tests

Request: "Add optional caching to the data_loader function"
Steps:
Request: "Add a reporting module with PDF generation"
Steps:
1. Create `reporting.py` with functions/classes
2. Create `test_reporting.py` with comprehensive tests
3. Update `__init__.py`

Problem: Don't know where to place code
Problem: Unclear how to integrate with existing code
Problem: Missing dependencies
Problem: Tests fail after implementation
Problem: Low test coverage
Problem: Tests are flaky
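The solutions for these problems are not preserved in this extract. For the flaky-tests case, one common cause is unseeded randomness; pinning the seed makes the result reproducible (a hedged illustration with a hypothetical helper):

```python
import random

def sample_even_ratio(population, k: int) -> float:
    """Hypothetical helper: fraction of k sampled items that are even."""
    picked = random.sample(list(population), k)
    return sum(1 for x in picked if x % 2 == 0) / k

# Flaky: the value changes from run to run, so a fixed expectation
# like `assert sample_even_ratio(range(100), 10) == 0.5` fails randomly.

# Deterministic: seed the RNG before each call in the test
random.seed(0)
first = sample_even_ratio(range(100), 10)
random.seed(0)
assert first == sample_even_ratio(range(100), 10)  # same seed, same result
```

Other usual suspects are real time, test ordering, and shared global state; the same principle applies: make the nondeterministic input explicit and control it in the test.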