Statistical models library for Python. Use when you need specific model classes (OLS, GLM, mixed models, ARIMA) with detailed diagnostics, residuals, and inference. Best for econometrics, time series, rigorous inference with coefficient tables. For guided statistical test selection with APA reporting use statistical-analysis.
Score: 85
Quality: 75% (does it follow best practices?)
Impact: 93%, 1.09x average score across 6 eval scenarios
Validation: Passed, no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./scientific-skills/statsmodels/SKILL.md`

Quality
Discovery
100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly specifies concrete capabilities (OLS, GLM, mixed models, ARIMA), includes rich natural trigger terms users would actually use, and explicitly addresses both what the skill does and when to use it. The explicit disambiguation from the related 'statistical-analysis' skill is a notable strength that reduces conflict risk.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions and model classes: OLS, GLM, mixed models, ARIMA, along with specific outputs like detailed diagnostics, residuals, inference, and coefficient tables. | 3 / 3 |
| Completeness | Clearly answers both 'what' (statistical models library with specific model classes and diagnostics) and 'when' ('Use when you need specific model classes... with detailed diagnostics'). Also includes explicit disambiguation guidance ('For guided statistical test selection with APA reporting use statistical-analysis'). | 3 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'OLS', 'GLM', 'mixed models', 'ARIMA', 'econometrics', 'time series', 'inference', 'coefficient tables', 'residuals', 'diagnostics'. These are terms a user working in statistics would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly distinguishes itself from a related 'statistical-analysis' skill by specifying its niche (specific model classes, econometrics, time series, rigorous inference) and explicitly directing users to the other skill for APA reporting and guided test selection. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
50%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill provides excellent actionable code examples that are executable and well-structured, but is severely bloated with encyclopedic listings of model types, features, and best practices that Claude already knows. The content would benefit enormously from moving the catalog-style sections into the referenced files and keeping only the quick-start patterns and key gotchas (like add_constant) in the main SKILL.md. Workflows are present but lack the explicit validation checkpoints needed for rigorous statistical analysis.
Suggestions
Cut the 'Core Statistical Modeling Capabilities' section drastically — move the detailed model/feature listings into the referenced files and keep only a brief table or 2-3 line summary per category with links to references.
Remove or heavily condense the 'When to Use This Skill' section, 'Best Practices', and 'Common Pitfalls' — most of these are general statistical knowledge Claude already possesses. Keep only statsmodels-specific gotchas (e.g., add_constant, formula API quirks).
Add explicit validation checkpoints to the Common Workflows — e.g., 'If Breusch-Pagan p < 0.05, switch to robust SEs or WLS before proceeding' rather than just 'Test for heteroskedasticity'.
Provide the actual bundle reference files or remove references to them — currently the skill promises five reference files that don't exist, which undermines the progressive disclosure structure.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at 400+ lines. Massive lists of model types, features, and capabilities that Claude already knows. The 'Core Statistical Modeling Capabilities' section is essentially a reformatted documentation index. The 'When to Use This Skill' bullet list, best practices, common pitfalls (15 items), and workflow descriptions are largely things Claude already understands about statistics. Much of this could be cut by 60-70%. | 1 / 3 |
| Actionability | The Quick Start Guide provides fully executable, copy-paste-ready code examples for OLS, Logistic Regression, ARIMA, and GLM. The Formula API section, model comparison code, and cross-validation examples are all concrete and runnable. Key patterns like sm.add_constant() are demonstrated clearly. | 3 / 3 |
| Workflow Clarity | The four common workflows (Linear Regression, Binary Classification, Count Data, Time Series) list clear sequences of steps, but they lack explicit validation checkpoints and feedback loops. Steps like 'Check residual diagnostics' don't specify what to do if diagnostics fail. The GLM example does show an overdispersion check with a conditional branch, which is good, but the workflow sections themselves are abstract numbered lists without validation gates. | 2 / 3 |
| Progressive Disclosure | References to five separate reference files (linear_models.md, glm.md, discrete_choice.md, time_series.md, stats_diagnostics.md) are well-signaled and one level deep. However, no bundle files were provided, so these references may be broken. More importantly, the SKILL.md itself contains enormous amounts of content that should be in those reference files rather than inline — the 'Core Statistical Modeling Capabilities' section alone is a wall of categorized lists that belongs in references. | 2 / 3 |
| Total | | 8 / 12 Passed |
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (613 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 9 / 11 Passed |