Automatically performs git bisect to identify the first bad commit that introduced a bug or failure. Use when debugging regressions, tracking down when a test started failing, or identifying which commit broke functionality. Handles flaky tests with retry logic and provides comprehensive reports with bisect logs and confidence levels.
Install with the Tessl CLI:

```shell
npx tessl i github:ArabelaTso/Skills-4-SE --skill git-bisect-assistant83
```
Automates the git bisect process to efficiently identify the first bad commit responsible for a bug or test failure.
Basic usage pattern:

```shell
python scripts/git_bisect_runner.py \
  --good <known-good-commit> \
  --bad <known-bad-commit> \
  --test "<test-command>"
```

Example:

```shell
python scripts/git_bisect_runner.py \
  --good v1.0.0 \
  --bad HEAD \
  --test "pytest tests/test_feature.py::test_specific_case"
```

Workflow:

1. **Gather Information**: identify a known good revision, a known bad revision, and a test command.
2. **Run Bisect**: run the `git_bisect_runner.py` script with appropriate parameters.
3. **Review Results**: inspect the generated report to confirm the first bad commit.
Options:

- `--good`: Known good revision (commit hash, tag, or branch name)
- `--bad`: Known bad revision (default: `HEAD`)
- `--test`: Shell command to test each commit. Exit code 0 = good, non-zero = bad
- `--repo`: Repository path (default: current directory)
- `--retries`: Number of test runs per commit for flaky tests (default: 1)
- `--timeout`: Test execution timeout in seconds (default: no timeout)

For non-deterministic tests, use `--retries` to run the test multiple times per commit:
```shell
python scripts/git_bisect_runner.py \
  --good abc123 \
  --bad HEAD \
  --test "npm test" \
  --retries 3
```

The script uses majority voting: if a test passes 2 out of 3 times, the commit is marked as good.
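A minimal sketch of how such majority voting might be implemented (`classify_commit` is a hypothetical helper; the actual internals of `git_bisect_runner.py` may differ):

```python
import subprocess

def classify_commit(test_cmd: str, retries: int = 3) -> str:
    """Run the test `retries` times; a strict majority of passes marks the commit good."""
    passes = sum(
        subprocess.run(test_cmd, shell=True).returncode == 0
        for _ in range(retries)
    )
    return "good" if passes * 2 > retries else "bad"
```

With `retries=3`, a test that passes twice satisfies `2 * 2 > 3`, so the commit is classified good, matching the 2-out-of-3 rule above.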
The test command should:

- Exit with code 0 for a good commit and non-zero for a bad commit
- Be deterministic, or be paired with `--retries` for flaky tests
- Complete in bounded time, or be guarded with `--timeout`

Examples:
```shell
# Python test
--test "pytest tests/test_auth.py -v"

# Shell script
--test "./scripts/verify_build.sh"

# Compilation check
--test "make && ./bin/app --version"

# Multiple commands
--test "npm install && npm test"
```

The script generates a comprehensive report, including the bisect log, the identified first bad commit, and a confidence level.
User: "The integration tests started failing sometime in the last 20 commits"

```shell
python scripts/git_bisect_runner.py \
  --good HEAD~20 \
  --bad HEAD \
  --test "pytest tests/integration/"
```

User: "Feature X worked in v2.1.0 but is broken now"

```shell
python scripts/git_bisect_runner.py \
  --good v2.1.0 \
  --bad HEAD \
  --test "python -c 'import app; assert app.feature_x() == expected'"
```

User: "A test fails intermittently, need to find when it started"

```shell
python scripts/git_bisect_runner.py \
  --good main \
  --bad feature-branch \
  --test "pytest tests/test_flaky.py" \
  --retries 5 \
  --timeout 30
```

Troubleshooting:

- **Bisect fails to start**: verify that the good and bad revisions exist and are valid git references
- **Test command fails unexpectedly**: test the command manually on a known good/bad commit first
- **Inconsistent results**: increase `--retries` or check for environmental factors affecting tests
- **Timeout too short**: increase `--timeout` or optimize the test command