
building-recommendation-systems

Execute this skill empowers AI assistant to construct recommendation systems using collaborative filtering, content-based filtering, or hybrid approaches. it analyzes user preferences, item features, and interaction data to generate personalized recommendations... Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.

25

Quality

8%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/ai-ml/recommendation-engine/skills/building-recommendation-systems/SKILL.md

Quality

Discovery

17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description suffers from placeholder trigger language that provides no actual guidance for skill selection, severely undermining its utility. While it names the recommendation systems domain and some technical approaches, the boilerplate 'Use when appropriate context detected' clause is effectively meaningless. Additionally, the description uses improper voice ('it analyzes') and begins with the awkward phrase 'Execute this skill empowers AI assistant', suggesting it was auto-generated without review.

Suggestions

Replace the placeholder trigger clause with specific, natural user phrases such as 'Use when the user asks for recommendations, suggests building a recommender, mentions collaborative filtering, or wants to personalize content for users.'

Add concrete actions like 'builds user-item matrices, computes similarity scores, evaluates recommendation accuracy with metrics like RMSE or precision@k' to increase specificity.
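As an illustration of the precision@k metric mentioned above, a minimal sketch (the item IDs and relevance set are hypothetical, invented purely for this example):

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that are in the relevant set."""
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(top_k)

# Hypothetical example: 3 of the top 5 recommendations are relevant.
recs = ["A", "B", "C", "D", "E", "F"]
relevant = {"A", "C", "E", "G"}
print(precision_at_k(recs, relevant, k=5))  # → 0.6
```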

Remove the awkward preamble 'Execute this skill empowers AI assistant' and rewrite in proper third-person active voice (e.g., 'Constructs recommendation systems using collaborative filtering...').
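Putting these three suggestions together, a revised frontmatter description might read as follows (a sketch only; the exact frontmatter keys depend on the skill spec, not on this review):

```markdown
---
name: building-recommendation-systems
description: >
  Constructs recommendation systems using collaborative filtering,
  content-based filtering, or hybrid approaches. Builds user-item
  matrices, computes similarity scores, and evaluates accuracy with
  metrics like RMSE or precision@k. Use when the user asks for
  recommendations, wants to build a recommender, mentions collaborative
  filtering, or wants to personalize content for users.
---
```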

Dimension | Reasoning | Score

Specificity

Names the domain (recommendation systems) and some approaches (collaborative filtering, content-based filtering, hybrid), and mentions analyzing user preferences, item features, and interaction data. However, the concrete actions are somewhat generic and padded with buzzwords rather than listing truly specific operations.

2 / 3

Completeness

While the 'what' is partially addressed, the 'when' clause is completely absent: the placeholder phrases 'Use when appropriate context detected' and 'Trigger with relevant phrases based on skill purpose' provide zero actionable trigger guidance, which is worse than having no clause at all.

1 / 3

Trigger Term Quality

The description contains no natural user-facing trigger terms. Phrases like 'collaborative filtering' and 'content-based filtering' are technical jargon, and the trigger guidance is entirely placeholder text ('Use when appropriate context detected. Trigger with relevant phrases based on skill purpose') with zero actual keywords a user would say.

1 / 3

Distinctiveness / Conflict Risk

The domain of recommendation systems is somewhat specific and distinguishable from most other skills, but the vague trigger language and lack of concrete file types or user phrases means it could still overlap with general data analysis or machine learning skills.

2 / 3

Total: 6 / 12

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is almost entirely descriptive meta-content that explains what a recommendation engine skill would do, rather than providing actionable instructions for building one. It contains no executable code, no specific library configurations, no concrete data schemas, and no validation steps. The majority of the content is generic boilerplate (Error Handling, Output, Instructions sections) that could apply to any skill and wastes token budget.

Suggestions

Replace the abstract 'Examples' section with complete, executable Python code snippets showing collaborative filtering (e.g., using surprise or scipy for matrix factorization) and content-based filtering implementations.
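A sketch of one such snippet, using SciPy's truncated SVD for matrix factorization on a toy rating matrix (the data and the choice of two latent factors are invented for illustration; a real skill would load actual interaction data):

```python
import numpy as np
from scipy.sparse.linalg import svds

# Toy user-item rating matrix (rows: users, cols: items; 0 = unrated).
R = np.array([
    [5.0, 3.0, 0.0, 1.0],
    [4.0, 0.0, 0.0, 1.0],
    [1.0, 1.0, 0.0, 5.0],
    [1.0, 0.0, 0.0, 4.0],
    [0.0, 1.0, 5.0, 4.0],
])

# Centre ratings per user so the factorization models deviations from each
# user's mean rather than raw scores.
user_means = np.true_divide(R.sum(axis=1), (R != 0).sum(axis=1))
R_centered = np.where(R != 0, R - user_means[:, None], 0.0)

# Low-rank factorization: R ≈ U @ diag(s) @ Vt with k latent factors.
U, s, Vt = svds(R_centered, k=2)
pred = U @ np.diag(s) @ Vt + user_means[:, None]

# Recommend the highest-predicted unrated item for user 0.
unrated = np.where(R[0] == 0)[0]
best = unrated[np.argmax(pred[0, unrated])]
print(f"Recommend item {best} to user 0")  # → Recommend item 2 to user 0
```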

Remove generic boilerplate sections (Instructions, Output, Error Handling, Resources, Integration) that provide no recommendation-system-specific guidance and waste tokens.

Add a concrete workflow with validation checkpoints, e.g.: 1. Load data → 2. Validate schema → 3. Train model → 4. Evaluate with NDCG/precision metrics → 5. If metrics below threshold, tune hyperparameters → 6. Export model.
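That checkpointed loop can be sketched end to end with toy stand-ins (every function body here is a placeholder invented for illustration; a real skill would load an interactions file, train an actual model, and export it):

```python
import numpy as np

def load_data():
    # Toy user-item matrix standing in for a real interactions file.
    return np.array([[5.0, 3.0, 0.0], [4.0, 0.0, 2.0], [1.0, 1.0, 5.0]])

def validate_schema(R):
    assert R.ndim == 2 and (R >= 0).all(), "expected non-negative user-item matrix"

def train(R, rank):
    # Rank-`rank` reconstruction via SVD (stand-in for a real trainer).
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]

def evaluate(R, pred):
    # RMSE over the observed (non-zero) entries; lower is better.
    mask = R != 0
    return float(np.sqrt(np.mean((R[mask] - pred[mask]) ** 2)))

def build_recommender(threshold=1.0, max_rank=3):
    R = load_data()                      # 1. load
    validate_schema(R)                   # 2. validate schema
    for rank in range(1, max_rank + 1):
        pred = train(R, rank)            # 3. train
        rmse = evaluate(R, pred)         # 4. evaluate
        if rmse <= threshold:            # 5. checkpoint against threshold
            return pred                  # 6. "export" the model
    raise RuntimeError("RMSE never reached threshold; tune further or inspect data")

model = build_recommender()
```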

Include specific data format expectations (e.g., expected CSV schema for user-item interactions) and concrete library installation commands (e.g., `pip install scikit-surprise`).
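For example, an interactions file with a `user_id,item_id,rating` schema could be checked up front with pandas (the column names and the inline CSV are illustrative assumptions, not mandated by the skill):

```python
import io
import pandas as pd

REQUIRED_COLUMNS = {"user_id", "item_id", "rating"}

def load_interactions(csv_source):
    """Load a user-item interactions CSV and fail fast on schema problems."""
    df = pd.read_csv(csv_source)
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    if df["rating"].isna().any():
        raise ValueError("ratings must not be null")
    return df

# Illustrative inline CSV standing in for a real file on disk.
sample = io.StringIO("user_id,item_id,rating\n1,42,5\n1,7,3\n2,42,4\n")
df = load_interactions(sample)
print(len(df))  # → 3
```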

Dimension | Reasoning | Score

Conciseness

Extremely verbose with extensive explanations of concepts Claude already knows. Sections like 'How It Works', 'When to Use This Skill', 'Integration', 'Instructions', 'Output', 'Error Handling', and 'Resources' are all filler that provide no actionable value. The content explains what recommendation systems are and how Claude would approach them, which is redundant.

1 / 3

Actionability

No executable code, no concrete commands, no specific library usage examples, no copy-paste ready snippets. The 'Examples' section describes what the skill 'will do' rather than providing actual implementation code. The 'Instructions' section is entirely generic ('Invoke this skill when trigger conditions are met').

1 / 3

Workflow Clarity

No clear multi-step workflow with validation checkpoints. The steps listed are abstract descriptions ('Analyzing Requirements', 'Generating Code') rather than actionable sequences. There are no validation steps, no feedback loops, and no concrete commands to execute at each stage.

1 / 3

Progressive Disclosure

Monolithic wall of text with no references to external files. All content is inline but paradoxically lacks depth—it's both too long and too shallow. No links to detailed guides, API references, or example files. The 'Resources' section lists vague placeholders ('Project documentation', 'Related skills and commands').

1 / 3

Total: 4 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria | Description | Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 9 / 11

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
