
coding-tutor

Personalized coding tutorials that build on your existing knowledge and use your actual codebase for examples. Creates a persistent learning trail that compounds over time using the power of AI, spaced repetition and quizes.


Quality: 47% (Does it follow best practices?)
Impact: 94% (1.91x, average score across 3 eval scenarios)
Security by Snyk: Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/coding-tutor/skills/coding-tutor/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description communicates a general concept—personalized coding tutorials with spaced repetition—but lacks the precision and explicit trigger guidance needed for reliable skill selection. It uses second person voice ('your existing knowledge', 'your actual codebase') which violates the third-person convention, and contains a typo ('quizes'). The absence of a 'Use when...' clause is a significant gap.

Suggestions

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user wants to learn a coding concept, requests a tutorial, asks for practice exercises, or wants to review previous lessons.'

Replace second person voice ('your existing knowledge', 'your actual codebase') with third person ('the user's existing knowledge', 'the user's actual codebase') and fix the typo 'quizes' → 'quizzes'.

List more specific concrete actions, e.g., 'Generates personalized coding lessons, creates quiz questions from the user's codebase, schedules spaced repetition reviews, and tracks learning progress over time.'
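Taken together, these suggestions might yield a description along the following lines. This is illustrative wording only, assuming the standard `name`/`description` SKILL.md frontmatter fields:

```yaml
---
name: coding-tutor
description: >-
  Personalized coding tutorials that build on the user's existing
  knowledge and draw examples from the user's actual codebase.
  Generates lessons, creates quizzes from the user's code, schedules
  spaced repetition reviews, and tracks learning progress over time.
  Use when the user wants to learn a coding concept, requests a
  tutorial, asks for practice exercises, or wants to review previous
  lessons.
---
```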

Dimension | Reasoning | Score

Specificity

Names the domain (coding tutorials) and some actions (builds on existing knowledge, uses actual codebase, creates learning trail), but the actions are more conceptual than concrete. It doesn't list specific discrete operations like 'generates quiz questions, tracks progress, schedules review sessions.'

2 / 3

Completeness

Describes what it does (personalized coding tutorials with spaced repetition) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also somewhat vague, bringing this to 1.

1 / 3

Trigger Term Quality

Includes some relevant terms like 'coding tutorials', 'spaced repetition', 'quizes' [sic], and 'learning trail', but misses many natural user phrases like 'teach me', 'learn', 'explain concept', 'practice exercises', 'study', 'flashcards'. Coverage of natural trigger terms is incomplete.

2 / 3

Distinctiveness / Conflict Risk

The combination of personalized tutorials + spaced repetition + codebase examples is somewhat distinctive, but 'coding tutorials' is broad enough to overlap with general coding assistance or documentation skills. The lack of explicit trigger conditions increases conflict risk.

2 / 3

Total: 7 / 12

Passed

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with clear workflows, concrete scripts, and well-defined templates, making it practically useful. However, it is significantly over-verbose, spending many tokens on teaching philosophy, writing style guidance, and pedagogical principles that Claude already understands. The content would benefit from aggressive trimming of philosophical sections and splitting detailed subsections into referenced files.

Suggestions

Cut the 'Teaching Philosophy', 'What Makes Great Teaching', and 'Tutorial Writing Style' sections down to 3-5 bullet points total; Claude already knows how to teach well, so focus only on project-specific constraints.

Move the detailed Quiz Mode section (scoring rubric, spaced repetition explanation, question types) into a separate QUIZ_MODE.md file and reference it from the main skill.

Remove explanatory prose like 'Learning from abstract examples is forgettable; learning from YOUR code is sticky' - these are motivational statements that consume tokens without adding actionable guidance.
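The review mentions a quiz_priority.py script, but its contents are not shown here. As a rough sketch of the spaced-repetition prioritization the Quiz Mode workflow describes (the function names, interval formula, and data shape below are all assumptions, not the plugin's actual code):

```python
from datetime import date

def review_interval(score, prior_interval):
    """Next review gap in days: a low understanding score resets the
    topic to tomorrow, higher scores stretch the interval (a simplified
    SM-2-style schedule, assumed here for illustration)."""
    if score < 3:  # weak recall: re-teach soon
        return 1
    return max(1, round(prior_interval * (1.5 + 0.25 * score)))

def quiz_priority(topics, today=None):
    """Rank topic names by how overdue they are for review.
    `topics` maps name -> (last_reviewed: date, interval_days: int)."""
    today = today or date.today()

    def overdue_days(item):
        last_reviewed, interval = item[1]
        return (today - last_reviewed).days - interval

    ranked = sorted(topics.items(), key=overdue_days, reverse=True)
    return [name for name, _ in ranked]
```

A real implementation would also persist quiz history between sessions; the point is only that prioritization reduces to sorting topics by overdueness.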

Dimension | Reasoning | Score

Conciseness

The skill is extremely verbose at over 200 lines. It explains teaching philosophy, what makes great teaching, and general pedagogical principles that Claude already understands. Sections like 'What Makes Great Teaching' with DO/DON'T/CALIBRATE are things Claude inherently knows. The Julia Evans/Dan Abramov references and extensive style guidance add tokens without adding actionable specificity.

1 / 3

Actionability

The skill provides concrete, executable commands (setup scripts, create_tutorial.py, index_tutorials.py, quiz_priority.py), specific file paths, exact template structures with YAML frontmatter, and clear formats for learner profiles, quiz history, and tutorial creation. The guidance is copy-paste ready throughout.

3 / 3

Workflow Clarity

The multi-step workflows are clearly sequenced: setup → check learner profile → onboarding interview → plan curriculum → get approval → create tutorial. Quiz mode has clear triggers, prioritization via spaced repetition script, scoring rubric, and recording steps. Validation checkpoints exist (user approval of curriculum plan, understanding scores, re-teaching triggers at low scores).

3 / 3

Progressive Disclosure

The skill references external scripts appropriately (setup_tutorials.py, index_tutorials.py, create_tutorial.py, quiz_priority.py) but the main SKILL.md itself is a monolithic wall of text that could benefit from splitting teaching philosophy, quiz mode, and tutorial creation into separate referenced files. All content is inline rather than appropriately distributed.

2 / 3

Total: 9 / 12

Passed
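One way to act on the progressive-disclosure feedback above is to keep SKILL.md as a thin index and push detail into referenced files. A hypothetical outline (the section layout, `scripts/` paths, and file names beyond QUIZ_MODE.md are invented for illustration):

```markdown
# coding-tutor

- Project-specific teaching constraints (3-5 bullets, no general pedagogy)

## Tutorial creation
Run `scripts/create_tutorial.py`; see TUTORIAL_FORMAT.md for the template.

## Quiz mode
Run `scripts/quiz_priority.py` to pick topics.
See QUIZ_MODE.md for the scoring rubric and spaced repetition details.
```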

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: EveryInc/compound-engineering-plugin (Reviewed)


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.