Compare behavior across multiple versions of programs or repositories. Use when you need to analyze how functionality changes between versions, identify regressions, compare outputs and exceptions, or validate upgrades. The skill compares execution behavior, test results, outputs, exceptions, and observable states across versions, generating detailed reports showing behavioral divergences, potential regressions, added/removed functionality, and areas requiring validation. Supports multiple programming languages and can work with test suites or execution traces.
This skill compares behavior across multiple versions of programs, identifying functional changes, regressions, and behavioral divergences to guide safe upgrades and validation.
```shell
# Compare two versions
python scripts/compare.py v1.0/ v2.0/

# Compare multiple versions with a test suite
python scripts/compare.py v1.0/ v2.0/ v3.0/ --tests tests/

# Generate a detailed report
python scripts/compare.py old/ new/ --output report.json
```

The `--output` flag writes the comparison results as a JSON report.
```shell
python scripts/compare.py <version1> <version2> [version3...] [--tests <test_dir>] [--output <report.json>]
```
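The core idea behind this kind of comparison can be sketched in a few lines: run the same command inside each version directory, capture the observable behavior (stdout, stderr, exit code), and diff each version against a baseline. This is a minimal illustration of the technique, not the actual implementation of `scripts/compare.py`; the function names and report fields here are hypothetical.

```python
import subprocess

def run_version(version_dir, command):
    """Run `command` inside a version directory and capture observable behavior."""
    result = subprocess.run(command, cwd=version_dir, capture_output=True, text=True)
    return {"stdout": result.stdout, "stderr": result.stderr, "exit_code": result.returncode}

def compare_versions(dirs, command):
    """Compare each version against the first directory, treated as the baseline.

    Returns a list of divergence records; an empty list means no observed
    behavioral difference for this command.
    """
    baseline = run_version(dirs[0], command)
    divergences = []
    for d in dirs[1:]:
        current = run_version(d, command)
        for field in ("stdout", "stderr", "exit_code"):
            if current[field] != baseline[field]:
                divergences.append({
                    "version": str(d),
                    "field": field,
                    "baseline": baseline[field],
                    "observed": current[field],
                })
    return divergences
```

A real comparator layers more on top of this (test-suite results, exception traces, multi-language support), but the baseline-and-diff loop is the essential shape of the report it generates.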