Automatically generate clear, actionable issue reports from failing tests and repository analysis. Analyze test failures to understand expected vs. actual behavior, identify affected code components, and produce well-structured Markdown reports suitable for GitHub Issues or similar trackers. Use when a test fails, when debugging issues, or when the user asks to create an issue report, generate a bug report, or document a test failure.
Install with Tessl CLI
```
npx tessl i github:ArabelaTso/Skills-4-SE --skill issue-report-generator
```
Generate comprehensive, developer-friendly issue reports from failing tests. Analyze test failures, identify affected code, infer root causes when possible, and produce structured Markdown reports ready for issue tracking systems.
Understand what the test is checking and why it fails:
- Identify the test
- Understand test intent
- Analyze the failure
- Extract key information
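The extraction step can be sketched as a small parser. This is a minimal illustration, assuming a JUnit-style one-line failure summary; the format and field names are hypothetical:

```python
import re

# Hypothetical one-line failure format:
#   TestClass.testMethod: ErrorType at File.ext:line
FAILURE_RE = re.compile(r"(\w+)\.(\w+):\s*(\w+)\s+at\s+([\w./]+):(\d+)")

def extract_failure_info(failure_line):
    """Pull the test name, error type, and location out of a failure line.

    Returns None when the line does not match the assumed format.
    """
    match = FAILURE_RE.match(failure_line)
    if not match:
        return None
    test_class, test_method, error, file, line = match.groups()
    return {
        "test": f"{test_class}.{test_method}",
        "error": error,
        "file": file,
        "line": int(line),
    }

info = extract_failure_info(
    "UserServiceTest.testAuthenticateNonExistentUser: "
    "NullPointerException at UserService.java:45"
)
```

Real test runners emit richer, framework-specific output; in practice you would parse the runner's structured report (e.g. JUnit XML) rather than free text.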
Locate the code related to the failure:
- From the stack trace
- From the test code
- Code analysis
- Record locations
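Locating the code from a stack trace can be sketched the same way, here for Java-style `at package.Class.method(File.java:NN)` frames (a simplifying assumption; other runtimes format frames differently):

```python
import re

FRAME_RE = re.compile(r"at\s+([\w.]+)\.(\w+)\(([\w.]+):(\d+)\)")

def top_frame(stack_trace):
    """Return the first (topmost) frame of a Java-style stack trace,
    which usually points at the failing line of application code."""
    match = FRAME_RE.search(stack_trace)
    if not match:
        return None
    qualified_class, method, file, line = match.groups()
    return {
        "class": qualified_class,
        "method": method,
        "file": file,
        "line": int(line),
    }

frame = top_frame(
    "java.lang.NullPointerException\n"
    "    at com.example.UserService.authenticate(UserService.java:45)\n"
    "    at com.example.AuthController.login(AuthController.java:23)"
)
```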
Determine why the failure occurs (when possible):
- For exceptions
- For assertion failures
- For timeouts
- For integration failures
- State uncertainty
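State uncertainty explicitly rather than asserting an unverified cause. A tiny helper (hypothetical, but matching the "Suspected cause" phrasing used in the examples below) makes the distinction mechanical:

```python
def cause_line(cause, confirmed=False):
    """Format the root-cause line for the Analysis section.

    Unverified hypotheses are labelled 'Suspected cause' so readers
    know the report states a guess, not a finding.
    """
    prefix = "Cause" if confirmed else "Suspected cause"
    return f"**{prefix}:** {cause}"

line = cause_line("Missing null check after user lookup")
```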
Create the issue report with required sections:
- Title: e.g. `[Exception/Issue] in [Component].[Method]` or `[Feature] returns incorrect [result]`
- Description
- Steps to Reproduce
- Expected Behavior
- Actual Behavior
- Affected Code
- Analysis (optional)
- Additional Context
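Assembling the sections can be as simple as a dictionary-to-Markdown renderer. A minimal sketch (the dict keys are hypothetical; optional sections are skipped when absent):

```python
def render_report(report):
    """Render an issue-report dict into a Markdown body with the
    required sections, omitting optional ones that are missing."""
    lines = [f"# {report['title']}", ""]
    sections = [
        ("Description", "description"),
        ("Steps to Reproduce", "steps"),
        ("Expected Behavior", "expected"),
        ("Actual Behavior", "actual"),
        ("Affected Code", "affected_code"),
        ("Analysis", "analysis"),            # optional
        ("Additional Context", "context"),   # optional
    ]
    for heading, key in sections:
        value = report.get(key)
        if value:
            lines += [f"## {heading}", "", value, ""]
    return "\n".join(lines).rstrip() + "\n"

md = render_report({
    "title": "Calculator.divide() returns integer instead of decimal result",
    "description": "divide(5, 2) returns 2.0 instead of the expected 2.5.",
    "steps": "1. Run `pytest tests/test_calculator.py::test_division`",
    "expected": "`2.5`",
    "actual": "`2.0`",
    "affected_code": "`src/calculator.py`, `Calculator.divide`",
})
```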
Produce the final Markdown report:
Use proper Markdown:
Be precise:
Be clear:
Be actionable:
For detailed templates and patterns, see report_patterns.md.
| Failure Type | Template | Key Focus |
|---|---|---|
| Exception | Exception-Based Bug | Exception type, location, null/invalid variable |
| Assertion failure | Assertion Failure Bug | Expected vs. actual, wrong value source |
| Timeout | Performance/Timeout Bug | Slow operation, bottleneck |
| Integration error | Integration Failure Bug | External system, error message, config |
| Regression | Regression Bug | Breaking commit, what changed |
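Template selection can be driven by simple keyword heuristics over the error text. A sketch follows; the keyword rules are illustrative, and regressions in particular need history (e.g. `git bisect`) rather than text matching:

```python
# Mirrors the failure-type/template table above.
TEMPLATES = {
    "exception": "Exception-Based Bug",
    "assertion": "Assertion Failure Bug",
    "timeout": "Performance/Timeout Bug",
    "integration": "Integration Failure Bug",
    "regression": "Regression Bug",
}

def classify_failure(error_text):
    """Guess the failure type from the error text; heuristic only.

    A regression cannot be detected from one message alone -- it
    needs history (e.g. bisecting) -- so it is the fallback here.
    """
    text = error_text.lower()
    if "timeout" in text or "timed out" in text:
        return "timeout"
    if "connection" in text or "status" in text:
        return "integration"
    if "assertionerror" in text or ("expected" in text and "got" in text):
        return "assertion"
    if "exception" in text or "error" in text:
        return "exception"
    return "regression"

template = TEMPLATES[classify_failure("AssertionError: expected 2.5 but got 2.0")]
```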
### Example 1: Exception

**Input:**
- Test: `UserServiceTest.testAuthenticateNonExistentUser`
- Error: `NullPointerException at UserService.java:45`

**Generated Report:**
# NullPointerException in UserService.authenticate() when user not found
## Description
A `NullPointerException` is thrown in `UserService.authenticate()` when attempting to authenticate a user that doesn't exist in the database. This causes the authentication endpoint to return a 500 error instead of properly handling the missing user case.
## Steps to Reproduce
1. Run test: `mvn test -Dtest=UserServiceTest#testAuthenticateNonExistentUser`
2. Or call: `userService.authenticate("nonexistent@example.com", "password")`
## Expected Behavior
The method should return an authentication failure result (e.g., `AuthResult.FAILED`) or throw a specific `UserNotFoundException`, not a `NullPointerException`.
## Actual Behavior

```
java.lang.NullPointerException
    at com.example.UserService.authenticate(UserService.java:45)
    at com.example.AuthController.login(AuthController.java:23)
    at com.example.UserServiceTest.testAuthenticateNonExistentUser(UserServiceTest.java:67)
```
## Affected Code
- **File:** `src/main/java/com/example/UserService.java`
- **Method:** `UserService.authenticate`
- **Line:** 45
**Code at line 45:**
```java
String hashedPassword = user.getPassword(); // user is null here
```

## Analysis

The exception occurs because the `user` object is null when the user lookup fails (line 42). The code attempts to call `user.getPassword()` without checking if the user exists.

**Suspected cause:** Missing null check after user lookup.

## Test Details
- Test file: `src/test/java/com/example/UserServiceTest.java`
- Test method: `testAuthenticateNonExistentUser`

### Example 2: Assertion Failure
**Input:**
- Test: `CalculatorTest.testDivision`
- Error: `AssertionError: expected 2.5 but got 2.0`
**Generated Report:**
# Calculator.divide() returns integer instead of decimal result
## Description
The `Calculator.divide()` method returns `2.0` instead of the expected `2.5` when dividing 5 by 2. This indicates the method is performing integer division instead of floating-point division.
## Steps to Reproduce
1. Run test: `pytest tests/test_calculator.py::CalculatorTest::test_division`
2. Or execute:
```python
calc = Calculator()
result = calc.divide(5, 2)
# Returns 2.0, expected 2.5
assert calc.divide(5, 2) == 2.5
```

## Expected Behavior

`calc.divide(5, 2)` should return `2.5`.

## Actual Behavior

```
AssertionError: assert 2.0 == 2.5
Expected: 2.5
Actual: 2.0
```

## Affected Code
- **File:** `src/calculator.py`
- **Method:** `Calculator.divide`

**Current implementation:**
```python
def divide(self, a, b):
    return a / b  # Using integer division
```

## Analysis

The method performs integer division when both operands are integers. In Python 2, or when using the `//` operator, this truncates the decimal part.

**Suspected cause:** Missing float conversion or using the wrong division operator.

**Suggested fix:**
```python
def divide(self, a, b):
    return float(a) / float(b)
```

## Test Details
- Test file: `tests/test_calculator.py`
- Test method: `test_division`

### Example 3: Timeout
**Input:**
- Test: `DataProcessorTest.testLargeDataset`
- Error: `Test timeout after 30s`
**Generated Report:**
# Performance issue: processLargeDataset() exceeds timeout
## Description
The `DataProcessor.processLargeDataset()` method takes longer than 30 seconds when processing 10,000 items, causing the test to timeout.
## Steps to Reproduce
1. Run test: `npm test -- DataProcessorTest.testLargeDataset`
2. Test processes 10,000 items
## Expected Behavior
Processing should complete within 30 seconds.
## Actual Behavior
Test times out after 30 seconds. Processing is incomplete.
## Affected Code
- **File:** `src/data_processor.js`
- **Method:** `DataProcessor.processLargeDataset`
- **Lines:** 45-60
**Suspected bottleneck (lines 50-55):**
```javascript
for (let i = 0; i < items.length; i++) {
for (let j = 0; j < items.length; j++) { // O(n²) nested loop
if (items[i].id === items[j].relatedId) {
// Process relationship
}
}
}
```

## Analysis

The performance issue appears to be caused by a nested loop with O(n²) complexity. With 10,000 items, this results in 100 million iterations.

**Suspected cause:** Inefficient algorithm using nested loops.

**Suggested optimization:** Use a hash map for O(n) lookup:
```javascript
const itemMap = new Map(items.map(item => [item.id, item]));
for (const item of items) {
  const related = itemMap.get(item.relatedId);
  if (related) {
    // Process relationship
  }
}
```

## Test Details
- Test file: `tests/data_processor.test.js`
- Test method: `testLargeDataset`

### Example 4: Integration Failure
**Input:**
- Test: `ApiTest.testGetUserEndpoint`
- Error: `Expected status 200, got 500`
- Response: `{"error": "Database connection failed"}`
**Generated Report:**
# Database connection failure in GET /api/users endpoint
## Description
The `/api/users` endpoint returns a 500 error with message "Database connection failed" instead of returning user data.
## Steps to Reproduce
1. Run test: `pytest tests/test_api.py::ApiTest::test_get_user_endpoint`
2. Or make request: `GET http://localhost:8000/api/users`
## Expected Behavior

```
Status: 200 OK
Body: [{"id": 1, "name": "John"}, ...]
```

## Actual Behavior

```
Status: 500 Internal Server Error
Body: {"error": "Database connection failed"}
```

Stack trace:
```
at DatabaseConnection.connect (db.js:23)
at UserRepository.findAll (user_repository.js:15)
at UserController.getUsers (user_controller.js:42)
```
## Affected Code
- **File:** `src/db.js`
- **Method:** `DatabaseConnection.connect`
- **Line:** 23
## Analysis
The database connection fails, likely due to:
1. Database server not running
2. Incorrect connection configuration
3. Missing environment variables
**Suspected cause:** Missing or incorrect `DATABASE_URL` environment variable.
## Environment
- Database: PostgreSQL
- Required env var: `DATABASE_URL`
- Expected format: `postgresql://user:pass@host:port/dbname`
## Test Details
- Test file: `tests/test_api.py`
- Test method: `test_get_user_endpoint`

Report-writing guidelines:
- Evidence-based reporting
- Precise language
- Clear uncertainty
- Complete information
- Don't invent
- Don't be vague
- Don't be judgmental
Before finalizing a report, check it against the guidelines above.