
q-and-a-prep-partner

Predict challenging questions for presentations and prepare structured responses.


Quality: 33% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security (by Snyk): Passed (No known issues)

Optimize this skill with Tessl (the path contains a space, so it must be quoted):

npx tessl skill review --optimize "./scientific-skills/Academic Writing/q-and-a-prep-partner/SKILL.md"

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description communicates a clear purpose—preparing for tough presentation questions—but it is too terse and lacks explicit trigger guidance. It would benefit from a 'Use when...' clause and more natural trigger terms to help Claude distinguish it from general presentation or writing skills.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks for help preparing for Q&A sessions, anticipating audience objections, or rehearsing presentation defenses.'

Include more natural trigger terms users might say, such as 'Q&A prep', 'tough questions', 'audience objections', 'presentation rehearsal', or 'devil's advocate'.

Expand the 'what' slightly to clarify deliverables, e.g., 'Predicts challenging audience questions for presentations, categorizes them by difficulty, and prepares structured talking-point responses.'
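Taken together, these suggestions point to frontmatter along the following lines (a sketch only; the skill name is taken from the listing, and the description wording is illustrative, not the maintainer's):

```yaml
---
name: q-and-a-prep-partner
description: >
  Predicts challenging audience questions for presentations, categorizes
  them by difficulty, and prepares structured talking-point responses.
  Use when the user asks for help preparing for Q&A sessions, anticipating
  audience objections, rehearsing a presentation defense, or wants a
  devil's-advocate review of their talk.
---
```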

Dimension scores:

Specificity (2 / 3): Names the domain (presentations) and two actions (predict challenging questions, prepare structured responses), but lacks detail on what 'structured responses' entails or additional concrete capabilities.

Completeness (1 / 3): Describes what the skill does but lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2; since the 'when' is entirely absent, this scores a 1.

Trigger Term Quality (2 / 3): Includes relevant terms like 'presentations', 'questions', and 'responses', but misses common natural variations users might say, such as 'Q&A prep', 'audience questions', 'presentation defense', 'tough questions', or 'rehearsal'.

Distinctiveness / Conflict Risk (2 / 3): The combination of 'challenging questions' and 'presentations' is somewhat distinctive, but 'presentations' is broad enough to overlap with presentation creation/editing skills, and 'prepare responses' could overlap with general writing or coaching skills.

Total: 7 / 12 (Passed)

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is heavily padded with generic boilerplate (risk assessment, security checklist, lifecycle status, evaluation criteria, response template) that provides no task-specific value and consumes significant token budget. The core skill—predicting challenging Q&A questions for presentations—is inadequately demonstrated: there are no examples of generated questions or response frameworks. The CLI interface documentation is the strongest element, but the surrounding process documentation is generic and repetitive.

Suggestions

Remove or drastically reduce boilerplate sections (Risk Assessment, Security Checklist, Lifecycle Status, Evaluation Criteria, Response Template, Output Requirements) that are generic and not specific to Q&A preparation.

Add a concrete example showing sample input (e.g., an abstract) and expected output (e.g., 2-3 predicted questions with response frameworks) so Claude knows exactly what to produce.

Eliminate circular cross-references ('See ## Usage above') and consolidate redundant sections (e.g., merge 'Example Usage', 'Usage', and 'Workflow' into a single clear workflow).

Add task-specific validation guidance, such as how to assess whether generated questions are sufficiently challenging and whether response frameworks adequately address the questions.
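The concrete-example suggestion could be satisfied with a short section like the following sketch. The abstract topic, questions, and framework steps are invented for illustration; they are not taken from the skill itself:

```markdown
## Example

Input: an abstract describing a small single-arm oncology trial.

Predicted questions:
1. "How do you justify generalizability given the small sample size?"
   (methodological, hard)
2. "Why was a single-arm design chosen over a randomized comparison?"
   (study design, medium)

Response framework for question 1:
- Acknowledge: the small-sample limitation is real.
- Evidence: report effect sizes with confidence intervals and any pilot data.
- Bridge: note how a planned follow-up study addresses generalizability.
```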

Dimension scores:

Conciseness (1 / 3): Extremely verbose and repetitive. Multiple sections reference each other circularly ('See ## Prerequisites above', 'See ## Usage above', 'See ## Workflow above'). The risk assessment table, security checklist, lifecycle status, evaluation criteria, and response template are boilerplate that add no task-specific value. The core skill (predicting Q&A questions) is buried under layers of generic process documentation.

Actionability (2 / 3): The CLI parameters and usage examples are concrete and specific (e.g., `python scripts/main.py --abstract abstract.txt --field oncology`), and the question types list provides useful categorization. However, there is no example of actual input/output (what a predicted question looks like, what a response framework contains), and the workflow steps are generic process descriptions rather than task-specific instructions.

Workflow Clarity (2 / 3): The workflow section provides a numbered sequence, but it is entirely generic ('confirm objective', 'validate request', 'use packaged script') with no task-specific validation checkpoints. The 'Example run plan' is slightly more concrete but still lacks verification of output quality. There is no feedback loop for reviewing whether generated questions are actually challenging or relevant.

Progressive Disclosure (2 / 3): There is a reference to `references/audit-reference.md` and the `scripts/main.py` entry point, which is appropriate. However, the main file itself is a monolithic wall of text with many sections that could be consolidated or removed. The circular cross-references ('See ## Prerequisites above') are confusing rather than helpful for navigation.

Total: 7 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria results:

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing them or moving them to metadata.

Total: 10 / 11 (Passed)

Repository: aipoch/medical-research-skills (Reviewed)
