
propose-hypotheses

Execute complete FPF cycle from hypothesis generation to decision

40

Quality

27%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/fpf/skills/propose-hypotheses/SKILL.md

Quality

Discovery

0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is critically underspecified. It relies on the undefined acronym 'FPF' without explanation, provides no concrete actions, and lacks any 'Use when...' trigger guidance. A user or Claude selecting from multiple skills would have no way to understand what this skill does or when to apply it.

Suggestions

Define what 'FPF' stands for and list the specific concrete steps or actions involved in the cycle (e.g., 'Generates hypotheses, designs experiments, collects data, analyzes results, and recommends decisions').

Add an explicit 'Use when...' clause with natural trigger terms that users would actually say when they need this skill (e.g., 'Use when the user asks for hypothesis testing, experimental design, or data-driven decision making').

Replace jargon with plain language or at minimum expand the acronym so the description is self-contained and distinguishable from other analytical or decision-support skills.
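Applying all three suggestions, the frontmatter might read something like the sketch below. This is illustrative only: the expansion of 'FPF' is not given anywhere in the report, so the placeholder must be filled in by the skill's maintainer, and the cycle steps listed are taken from the reviewer's example rather than from the skill itself.

```markdown
---
name: propose-hypotheses
description: >
  Executes a complete FPF (<expand the acronym here>) cycle: generates
  hypotheses, verifies their logic, analyzes results, and recommends a
  decision. Use when the user asks for hypothesis generation, structured
  hypothesis testing, or data-driven decision making.
---
```

A description in this shape is self-contained, names concrete actions, and carries natural trigger terms, which addresses all four scored dimensions below.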

Dimension | Reasoning | Score

Specificity

The description mentions 'FPF cycle' and 'hypothesis generation to decision' but does not explain what FPF stands for or list concrete actions. 'Execute complete FPF cycle' is abstract jargon without clarification of specific capabilities.

1 / 3

Completeness

The 'what' is vague (execute an undefined FPF cycle) and there is no 'when' clause or explicit trigger guidance at all. Both components are very weak.

1 / 3

Trigger Term Quality

'FPF cycle' is unexplained technical jargon that users are unlikely to naturally say. 'Hypothesis generation' and 'decision' are generic terms that don't serve as distinctive trigger terms for this specific skill.

1 / 3

Distinctiveness Conflict Risk

Without defining what FPF means, the description is indistinguishable from any analytical or decision-making skill. Terms like 'hypothesis generation' and 'decision' are extremely broad and could conflict with many other skills.

1 / 3

Total: 4 / 12

Passed

Implementation

55%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a well-structured multi-step workflow with clear sequencing, validation checkpoints, and a completion checklist, which is its strongest aspect. However, it critically depends on numerous external task files that are not provided in the bundle, making the actual execution logic incomplete and unverifiable. The content is moderately concise but could reduce repetition in the agent launch patterns.

Suggestions

Provide the referenced task files (tasks/init-context.md, tasks/generate-hypotheses.md, tasks/verify-logic.md, etc.) in the bundle, or inline the essential logic from each so the skill is self-contained.

Reduce repetition by defining the common agent launch pattern once (model, agent type, read-task-file convention) and referencing it in each step rather than repeating the full template.

Add a brief 'References' section at the end listing all external files with one-line descriptions of what each contains, so the dependency graph is clear at a glance.
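As a sketch of the second and third suggestions, the skill body could define the launch pattern once and close with a references section. The task file names below come from the review itself; the section wording and the placeholder descriptions are illustrative, since the task files are not in the bundle.

```markdown
## Agent launch pattern

Unless a step overrides it, every sub-agent is launched the same way:
read the step's task file, follow its instructions, and write results to
that step's output directory. Each step below references this pattern
instead of repeating the full template.

## References

- tasks/init-context.md — <one line on what it contains>
- tasks/generate-hypotheses.md — <one line on what it contains>
- tasks/verify-logic.md — <one line on what it contains>
```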

Dimension | Reasoning | Score

Conciseness

The skill is reasonably structured but includes some repetitive patterns (every step repeats the agent launch format verbatim) and could be tightened. The prompt templates are somewhat verbose with repeated structural elements, though most content is necessary for the multi-step workflow.

2 / 3

Actionability

The workflow provides concrete directory creation commands and clear prompt templates, but relies heavily on external task files (e.g., tasks/init-context.md, tasks/generate-hypotheses.md) that are not provided in the bundle. Without those files, the actual execution logic is opaque. The skill describes what to delegate but not what the delegated tasks actually do.

2 / 3

Workflow Clarity

The multi-step process is clearly sequenced with numbered steps, explicit postconditions, wait-for-all synchronization points, file movement as validation checkpoints (L0→L1→L2 or invalid), conditional loops (Step 4), and a completion checklist. Each step specifies what to verify before proceeding.

3 / 3

Progressive Disclosure

The skill references multiple external task files (tasks/init-context.md, tasks/generate-hypotheses.md, tasks/verify-logic.md, etc.) but none are provided in the bundle. There are no bundle files at all, making the references unresolvable. The skill is also a monolithic document with no signposted navigation to supporting materials that actually exist.

1 / 3

Total: 8 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria | Description | Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 9 / 11

Passed
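The two warnings above are typically resolved in the frontmatter itself. The report does not name the offending keys or tool names, so the specifics below are hypothetical; the pattern is to keep only recognized tool names in `allowed-tools` and move any custom keys under `metadata`.

```markdown
---
name: propose-hypotheses
description: Execute complete FPF cycle from hypothesis generation to decision
allowed-tools: Read, Write, Bash    # keep to recognized tool names
metadata:
  custom-key: custom-value          # hypothetical: unknown top-level keys moved here
---
```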

Repository: NeoLabHQ/context-engineering-kit (Reviewed)
