
coding-tutor

Personalized coding tutorials that build on your existing knowledge and use your actual codebase for examples. Creates a persistent learning trail that compounds over time using the power of AI, spaced repetition and quizes.


Quality: 33% (Does it follow best practices?)

Impact: 1.91x (94% average score across 3 eval scenarios)

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/coding-tutor/skills/coding-tutor/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description communicates a general concept of personalized coding education with spaced repetition but lacks concrete action verbs and explicit trigger guidance. It uses second-person voice ('your existing knowledge', 'your actual codebase'), which violates the third-person requirement. The absence of a 'Use when...' clause significantly weakens its utility for skill selection among many options.

Suggestions

Add an explicit 'Use when...' clause with trigger terms like 'teach me', 'learn programming', 'coding practice', 'quiz me', 'spaced repetition', 'coding exercises'.

Rewrite in third person voice (e.g., 'Creates personalized coding tutorials that build on the user's existing knowledge') and list specific concrete actions like 'generates quizzes, tracks learning progress, creates flashcards from codebase patterns'.

Fix the typo 'quizes' to 'quizzes' and add more natural user-facing keywords like 'study', 'practice', 'review', 'drill' to improve trigger term coverage.
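
Applied together, these suggestions might yield frontmatter along these lines (a sketch only, not the maintainer's actual metadata; field names follow the usual SKILL.md convention):

```yaml
---
name: coding-tutor
description: >
  Creates personalized coding tutorials that build on the user's existing
  knowledge, draws examples from the user's actual codebase, and tracks
  progress with spaced repetition and quizzes. Use when the user says
  "teach me", "quiz me", "practice coding", or asks to learn, study, or
  review a programming concept.
---
```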

Dimension scores

Specificity: 2 / 3
Names the domain (coding tutorials) and some actions (builds on existing knowledge, uses actual codebase, creates learning trail), but these are more conceptual than concrete specific actions. 'Spaced repetition and quizes' adds some specificity but remains somewhat vague about what the skill actually does step-by-step.

Completeness: 1 / 3
Describes what it does (personalized coding tutorials with spaced repetition) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also somewhat weak, so this scores a 1.

Trigger Term Quality: 2 / 3
Includes some relevant terms like 'coding tutorials', 'learning', 'spaced repetition', and 'quizes' that users might say. However, it misses common variations like 'teach me', 'learn programming', 'practice coding', 'exercises', 'flashcards', or specific programming concepts.

Distinctiveness / Conflict Risk: 2 / 3
The combination of personalized tutorials, codebase-based examples, and spaced repetition creates a somewhat distinct niche, but 'coding tutorials' is broad enough to potentially overlap with general coding assistance or education-focused skills.

Total: 7 / 12

Passed

Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill has a well-thought-out system design with clear scripts, file structures, and a spaced repetition quiz system. However, it suffers significantly from verbosity: extensive teaching philosophy, style guidance, and motivational framing that Claude doesn't need. The actionable workflow is buried under layers of pedagogical advice that could be dramatically condensed.
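
The quiz_priority.py script itself is not shown in the review, but the core of such a spaced repetition scheduler is small. A minimal sketch, assuming a 0-5 understanding scale and a doubling review interval (all names and the scale are assumptions, not the skill's actual implementation):

```python
from datetime import datetime, timedelta

def quiz_priority(understanding_score: int, last_quizzed: datetime,
                  now: datetime) -> float:
    """Return how overdue an item is; higher means quiz it sooner.

    The review interval doubles with each point of understanding,
    the classic spaced-repetition schedule.
    """
    interval = timedelta(days=2 ** understanding_score)
    # A ratio above 1.0 means the item is past its scheduled review date.
    return (now - last_quizzed) / interval
```

Ranking tutorials by this value surfaces both poorly-understood items and well-understood items that have gone stale, which is the behavior a quiz queue needs.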

Suggestions

Cut the 'Teaching Philosophy', 'Tutorial Writing Style', and 'What Makes Great Teaching' sections down to a concise bullet list of 5-6 key constraints (e.g., 'Use learner's actual code, not abstract examples' and 'Max 3 concepts per tutorial'). Remove generic advice like 'be encouraging but honest' that Claude already knows.

Add a single numbered workflow summary at the top (1. Setup → 2. Read profile → 3. Index tutorials → 4. Plan curriculum → 5. Get approval → 6. Create tutorial → 7. Write content) so the overall process is immediately clear before diving into section details.

Add validation steps: after creating a tutorial file, verify it exists and has correct frontmatter; after updating understanding_score, verify the file was saved correctly.

Move the detailed tutorial template YAML, quiz scoring rubric, and onboarding interview questions into separate reference files to reduce the main skill's token footprint.
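
The validation step suggested above need not be elaborate. A sketch of a post-creation check (the helper name and the frontmatter layout are assumptions about the skill's tutorial files, not confirmed details):

```python
from pathlib import Path

def verify_tutorial(path: str) -> bool:
    """Confirm a generated tutorial file exists and opens with YAML frontmatter."""
    p = Path(path)
    if not p.is_file():
        return False
    text = p.read_text(encoding="utf-8")
    # Frontmatter must be delimited by an opening and a closing '---' line.
    return text.startswith("---\n") and "\n---\n" in text[4:]
```

Running a check like this after create_tutorial.py, and an analogous one after updating understanding_score, gives the workflow the verification checkpoints the review finds missing.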

Dimension scores

Conciseness: 1 / 3
The skill is extremely verbose at 250+ lines. It explains teaching philosophy, what makes great teaching, and general pedagogical advice that Claude already understands. Sections like 'What Makes Great Teaching' with DO/DON'T/CALIBRATE are generic coaching advice, not actionable instructions. The tutorial writing style section tells Claude to write like Julia Evans or Dan Abramov; Claude knows how to write well. Much of this could be cut by 50%+ without losing actionable content.

Actionability: 2 / 3
There are concrete scripts to run (setup_tutorials.py, index_tutorials.py, create_tutorial.py, quiz_priority.py) and specific file paths, which is good. However, the tutorial creation process relies heavily on vague guidance ('build mental models', 'tell stories', 'predict confusion') rather than executable steps. The onboarding interview and quiz sections have concrete structure, but the core teaching workflow is more philosophical than procedural.

Workflow Clarity: 2 / 3
The overall flow is discernible: setup → onboarding → plan → create tutorial → quiz. However, the steps are spread across many sections without a clear numbered sequence. There are no validation checkpoints (e.g., verify the tutorial file was created correctly, verify the learner profile was saved). The curriculum approval step is a good checkpoint, but the workflow lacks explicit error handling or verification steps.

Progressive Disclosure: 2 / 3
The skill references external scripts (setup_tutorials.py, index_tutorials.py, create_tutorial.py, quiz_priority.py), which is good progressive disclosure in principle. However, no bundle files were provided to verify these exist. The SKILL.md itself is monolithic: the teaching philosophy, tutorial writing style, and quiz mode sections could be split into separate reference files. The content that should be inline (workflow steps) is mixed with content that should be separate (philosophy, style guide).

Total: 7 / 12

Passed

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: EveryInc/compound-engineering-plugin (reviewed)

