Automated LLM-driven hypothesis generation and testing on tabular datasets. Use when you want to systematically explore hypotheses about patterns in empirical data (e.g., deception detection, content analysis). Combines literature insights with data-driven hypothesis testing. For manual hypothesis formulation use hypothesis-generation; for creative ideation use scientific-brainstorming.
Score: 69
Does it follow best practices? 62%
Impact: 74%
1.19x average score across 3 eval scenarios
Advisory: suggest reviewing before use
Optimize this skill with Tessl:

    npx tessl skill review --optimize ./scientific-skills/hypogenic/SKILL.md

Dataset format and task configuration
  uv installation                          0%    100%
  Train file naming                        0%    100%
  Val file naming                          0%    100%
  Test file naming                         0%    100%
  Feature key names                        0%      0%
  Label key name                         100%    100%
  Consistent list lengths                100%    100%
  Prompt template placeholder syntax     100%    100%
  num_hypotheses placeholder             100%    100%
  Required prompt templates              100%    100%
  Observations template                  100%    100%
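Several of the dataset-format criteria above (split file naming, feature/label keys, consistent list lengths) can be checked mechanically. A minimal sketch, assuming the `<task>_train.json` / `<task>_val.json` / `<task>_test.json` naming and the JSON-of-lists layout the criteria describe; the helper names here are hypothetical, not part of the library:

```python
import json
from pathlib import Path

def validate_split(path: Path, label_key: str = "label") -> None:
    """Check one data split: the label key must be present, every key
    must map to a list, and all lists must have the same length."""
    data = json.loads(path.read_text())
    if label_key not in data:
        raise ValueError(f"{path.name}: missing label key {label_key!r}")
    lengths = {k: len(v) for k, v in data.items()}
    if len(set(lengths.values())) != 1:
        raise ValueError(f"{path.name}: inconsistent list lengths {lengths}")

def validate_task_dir(task_dir: str, task: str) -> None:
    # Expected naming: <task>_train.json, <task>_val.json, <task>_test.json
    for split in ("train", "val", "test"):
        validate_split(Path(task_dir) / f"{task}_{split}.json")
```

Running this before handing the data to the skill surfaces the naming and length errors the eval scenarios penalize.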
Python API usage and label extraction
Correct import
0%
0%
config_path argument
100%
0%
extract_label argument
100%
100%
generate_hypotheses method
0%
0%
method parameter
0%
0%
num_hypotheses parameter
0%
0%
output_path parameter
0%
0%
inference hypothesis_bank
30%
0%
inference test_data
25%
0%
extract_label implementation
100%
100%
extract_label label consistency
100%
100%
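The two criteria the skill passes consistently, `extract_label implementation` and `extract_label label consistency`, concern the user-supplied function that parses an LLM response into a label. A sketch under stated assumptions: the label set and the "Final answer:" prompt convention are illustrative, and the commented call pattern at the end only echoes the parameter names from the criteria above, not a verified API:

```python
import re

# Illustrative label set for a deception-detection task. "Label
# consistency" means extract_label must return values drawn from the
# same set used in the dataset's "label" lists.
VALID_LABELS = {"deceptive", "truthful"}

def extract_label(llm_output: str) -> str:
    """Pull the final answer out of an LLM response.
    Assumes the prompt asks the model to end with 'Final answer: <label>'."""
    match = re.search(r"final answer:\s*(\w+)", llm_output, re.IGNORECASE)
    if match:
        label = match.group(1).lower()
        if label in VALID_LABELS:
            return label
    return "truthful"  # fall back to a valid label rather than None

# The evaluated call pattern, reconstructed from the criterion names
# (config_path, method, num_hypotheses, output_path, hypothesis_bank,
# test_data) -- an assumption, not the library's documented signature:
#   generate_hypotheses(config_path=..., extract_label=extract_label,
#                       method=..., num_hypotheses=..., output_path=...)
```

Always returning a member of the valid label set keeps downstream accuracy computations well-defined even on malformed model output.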
HypoRefine literature processing workflow

  Literature directory structure         100%    100%
  GROBID setup command                   100%    100%
  GROBID run command                     100%    100%
  PDF preprocessing command              100%    100%
  Redis port number                      100%    100%
  Number of papers                       100%    100%
  HypoRefine method name                 100%    100%
  Three hypothesis bank outputs           20%    100%
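Two of the workflow criteria above, `Literature directory structure` and `Number of papers`, amount to a preflight check before GROBID preprocessing. A minimal sketch: the `raw/` subdirectory layout is an assumption for illustration, not taken from the source, and the function name is hypothetical:

```python
from pathlib import Path

def check_literature_dir(root: str, expected_papers: int) -> list:
    """Verify the literature directory holds the expected number of
    input PDFs before PDF preprocessing. Assumed layout: <root>/raw/
    contains one PDF per paper."""
    pdfs = sorted(Path(root, "raw").glob("*.pdf"))
    if len(pdfs) != expected_papers:
        raise ValueError(
            f"expected {expected_papers} PDFs, found {len(pdfs)}"
        )
    return [p.name for p in pdfs]
```

Failing fast here is cheaper than discovering a missing paper after the GROBID run and Redis-backed preprocessing have already completed.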