
talk-stage1-extract

Extracts and structures source material (articles, transcripts, notes) into a talk summary with narrative arc, themes, metrics, and gaps. Auto-detects REX vs Concept type. Use when starting a new talk from any source material or auditing existing material before committing to a talk.

Score: 80

Quality: 77% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./examples/skills/talk-pipeline/stage-1-extract/SKILL.md

Quality

Discovery

85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly communicates what the skill does and when to use it. It lists specific concrete actions and outputs, includes an explicit 'Use when' clause with clear triggers, and occupies a distinct niche. The main weakness is that trigger terms could be broader to capture more natural user language variations (e.g., 'presentation', 'speech', 'conference').

Suggestions

Add common synonyms for 'talk' such as 'presentation', 'speech', or 'conference talk' to improve trigger term coverage for users who may use different terminology.
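As a sketch of what that broadened description could look like, here is a hypothetical revision of the frontmatter (the surrounding keys and exact wording are illustrative, not the skill's actual file):

```yaml
---
name: talk-stage1-extract
description: >-
  Extracts and structures source material (articles, transcripts, notes)
  into a talk, presentation, speech, or conference-talk summary with
  narrative arc, themes, metrics, and gaps. Auto-detects REX vs Concept
  type. Use when starting a new talk or presentation from any source
  material, or auditing existing material before committing to a talk.
---
```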

Dimension scores:

Specificity (3 / 3): Lists multiple specific concrete actions: 'extracts and structures source material', 'narrative arc, themes, metrics, and gaps', 'auto-detects REX vs Concept type', and 'auditing existing material'. These are concrete, well-defined capabilities.

Completeness (3 / 3): Clearly answers both 'what' (extracts and structures source material into a talk summary with narrative arc, themes, metrics, gaps; auto-detects type) and 'when' (explicitly states 'Use when starting a new talk from any source material or auditing existing material before committing to a talk').

Trigger Term Quality (2 / 3): Includes some natural terms like 'articles', 'transcripts', 'notes', 'talk summary', and 'source material', but the domain is fairly niche. Terms like 'REX vs Concept type' are domain-specific jargon that users familiar with the system would use, while common variations like 'presentation', 'speech', and 'conference talk' are missing.

Distinctiveness / Conflict Risk (3 / 3): Highly distinctive with a clear niche: talk preparation from source material with specific outputs (narrative arc, themes, metrics, gaps) and a unique type detection feature (REX vs Concept). Unlikely to conflict with other skills.

Total: 11 / 12 (Passed)

Implementation

70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured instructional skill that clearly defines a workflow for extracting talk summaries from source material. Its strengths are the detailed output template, validation checklist, and clear source type detection table. Its main weaknesses are moderate verbosity (some redundancy between sections) and a lack of truly executable/concrete implementation steps — it describes what to do rather than showing exactly how to do it with tool calls or code.

Suggestions

Remove the 'What This Skill Does' numbered list since it duplicates the detailed sections below, or collapse it into a single-sentence purpose statement.

Add concrete examples of tool usage — e.g., show an actual file read command, demonstrate the AskUserQuestion call format, or provide a brief example of processing a sample input snippet into the output format.
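To illustrate what such a worked example inside the SKILL.md might look like, here is a purely hypothetical sketch (the input snippet and extracted values are invented for illustration; the output field names echo the skill's own template of metrics, themes, and gaps):

```markdown
### Worked example

Input snippet:
> "We migrated 40 services to the new platform in six months, cutting
> deploy time from 45 to 8 minutes."

Extracted:
- Metrics: 40 services migrated; 6-month timeline; deploy time 45 min -> 8 min
- Theme: large-scale platform migration
- Gap: no cost or team-size figures for context
```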

Dimension scores:

Conciseness (2 / 3): The skill is reasonably well-structured but includes some unnecessary verbosity. The 'What This Skill Does' section largely duplicates information found in the detailed sections below. The 'Tips' section restates things already implied. The anti-patterns section, while useful, could be more concise.

Actionability (2 / 3): The skill provides a detailed output template and clear rules for metric extraction, which is good. However, it lacks executable code or concrete commands: there are no actual tool invocations, no script examples, and the process relies on Claude inferring how to 'read the source' and 'detect source type' without concrete implementation steps. The guidance is structured but more descriptive than executable.

Workflow Clarity (3 / 3): The workflow is clearly sequenced (collect metadata → read source → detect type → extract arc → extract metrics → identify themes → flag gaps → write file). The validation checklist at the end serves as an explicit checkpoint before completion. The 'AskUserQuestion' step for missing metadata is a good feedback loop. The anti-patterns section helps prevent common errors.

Progressive Disclosure (3 / 3): The skill is well-organized with clear sections that progress logically from overview to details. Related stages are linked with one-level-deep references to other SKILL.md files. The content is appropriately self-contained for a single skill file without being monolithic, and the output format template is appropriately inline since it is the core deliverable.

Total: 10 / 12 (Passed)

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

Criteria results:

allowed_tools_field: Warning. 'allowed-tools' contains unusual tool name(s).

frontmatter_unknown_keys: Warning. Unknown frontmatter key(s) found; consider removing or moving to metadata.
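A minimal sketch of the suggested fix for the second warning, assuming the unknown keys are custom fields (the key names below are hypothetical): move them under a `metadata` block so the top-level frontmatter carries only spec-defined keys.

```yaml
---
name: talk-stage1-extract
description: Extracts and structures source material into a talk summary.
# Hypothetical custom keys, moved under `metadata` instead of top level:
metadata:
  stage: 1
  pipeline: talk-pipeline
---
```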

Total: 9 / 11 (Passed)

Repository: FlorianBruniaux/claude-code-ultimate-guide (Reviewed)
