
building-recommendation-systems

Execute this skill empowers AI assistant to construct recommendation systems using collaborative filtering, content-based filtering, or hybrid approaches. it analyzes user preferences, item features, and interaction data to generate personalized recommendations... Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.

Quality

8%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/ai-ml/recommendation-engine/skills/building-recommendation-systems/SKILL.md

Quality

Discovery

17%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description suffers from placeholder trigger guidance that provides no actual selection criteria, undermining its usefulness in a multi-skill environment. While it names the recommendation system domain and some technical approaches, the boilerplate 'Use when appropriate context detected' clause is entirely non-functional. Additionally, the description begins with 'Execute this skill empowers AI assistant,' which is awkward phrasing that mixes imperative and third-person voice.

Suggestions

Replace the placeholder trigger clause with specific, actionable guidance, e.g., 'Use when the user asks about recommendation engines, personalized suggestions, collaborative filtering, content-based recommendations, or building a recommender system.'

Add natural user-facing trigger terms such as 'recommend products', 'suggest items', 'personalization', 'user preferences', 'similar items', 'recommender', and file/data types commonly associated with recommendation tasks.

Rewrite the opening to use proper third-person voice and remove filler, e.g., 'Constructs recommendation systems using collaborative filtering, content-based filtering, or hybrid approaches. Analyzes user-item interaction matrices, computes similarity scores, and generates ranked recommendation lists.'
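Taken together, these suggestions might yield frontmatter like the following sketch (the field names follow the common SKILL.md convention; the exact schema is an assumption):

```yaml
---
name: building-recommendation-systems
description: >
  Constructs recommendation systems using collaborative filtering,
  content-based filtering, or hybrid approaches. Analyzes user-item
  interaction matrices, computes similarity scores, and generates
  ranked recommendation lists. Use when the user asks to recommend
  products, suggest similar items, build a recommender system, or add
  personalization based on user preferences.
---
```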

Dimension / Reasoning / Score

Specificity

Names the domain (recommendation systems) and some approaches (collaborative filtering, content-based filtering, hybrid), and mentions analyzing user preferences, item features, and interaction data. However, the concrete actions are somewhat generic and padded with buzzwords rather than listing truly specific operations.

2 / 3

Completeness

While the 'what' is partially addressed, the 'when' clause is completely absent — the placeholder texts 'Use when appropriate context detected' and 'Trigger with relevant phrases based on skill purpose' provide zero actionable trigger guidance, which per the rubric should cap completeness and here warrants the lowest score.

1 / 3

Trigger Term Quality

The description contains no natural user-facing trigger terms. Phrases like 'collaborative filtering' and 'content-based filtering' are technical jargon, and the trigger guidance is entirely placeholder text ('Use when appropriate context detected. Trigger with relevant phrases based on skill purpose') with zero actual keywords a user would say.

1 / 3

Distinctiveness / Conflict Risk

The mention of recommendation systems, collaborative filtering, and hybrid approaches provides some domain specificity, but the lack of concrete trigger terms and the vague placeholder language means it could easily overlap with general data analysis or machine learning skills.

2 / 3

Total: 6 / 12

Passed

Implementation

0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is almost entirely generic boilerplate with no actionable content. It describes what a recommendation system is and what Claude would theoretically do, rather than providing concrete code, specific algorithms, or executable workflows. The majority of sections (Instructions, Output, Error Handling, Resources) contain placeholder text that provides zero value.

Suggestions

Replace the abstract 'Examples' section with actual executable Python code showing collaborative filtering (e.g., using surprise or scipy for matrix factorization) and content-based filtering implementations.
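As a concrete illustration of this suggestion, an 'Examples' section could include a minimal item-based collaborative-filtering sketch like the one below (plain NumPy is used here for brevity; the surprise or scipy approaches named in the suggestion would be equally valid, and the toy rating matrix is invented for illustration):

```python
import numpy as np

# Toy user-item rating matrix (rows = users, cols = items);
# a real system would load this from logs or a ratings file.
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
], dtype=float)

def cosine_sim(matrix):
    """Pairwise cosine similarity between the rows of `matrix`."""
    norms = np.linalg.norm(matrix, axis=1, keepdims=True)
    norms[norms == 0] = 1.0  # avoid division by zero for empty rows
    unit = matrix / norms
    return unit @ unit.T

def recommend(user_idx, ratings, top_n=2):
    """Item-based CF: score unrated items by similarity-weighted ratings."""
    sim = cosine_sim(ratings.T)              # item-item similarity
    scores = ratings[user_idx] @ sim         # weighted score for every item
    scores[ratings[user_idx] > 0] = -np.inf  # mask already-rated items
    ranked = np.argsort(scores)[::-1]
    return [i for i in ranked if np.isfinite(scores[i])][:top_n]

print(recommend(0, ratings))
```

A content-based variant would follow the same shape, but compute similarity over item feature vectors instead of the rating columns.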

Remove all generic boilerplate sections (Instructions, Output, Error Handling, Resources) that contain no skill-specific information and waste tokens.

Add a concrete workflow with validation steps: data loading → exploratory checks → model training → evaluation metrics → iteration, with specific code at each step.
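That workflow could be sketched end-to-end as follows (a popularity baseline stands in for a real model, and the synthetic data and hit-rate metric are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1. Data loading (here: a synthetic implicit-feedback matrix)
interactions = (rng.random((20, 15)) > 0.7).astype(float)

# 2. Exploratory checks: guard against an empty dataset before training
assert interactions.sum() > 0, "no interactions at all"
print("density:", interactions.mean())

# 3. Train/test split: hold out one interaction per user where possible
train = interactions.copy()
test = np.zeros_like(interactions)
for u in range(train.shape[0]):
    items = np.flatnonzero(train[u])
    if len(items) > 1:
        held_out = rng.choice(items)
        train[u, held_out] = 0.0
        test[u, held_out] = 1.0

# 4. "Model" training: a popularity baseline to compare real models against
popularity = train.sum(axis=0)

# 5. Evaluation: hit rate @ k on the held-out interactions
k = 5
top_k = np.argsort(popularity)[::-1][:k]
evaluable = sum(1 for u in range(test.shape[0]) if test[u].any())
hits = sum(test[u, top_k].any() for u in range(test.shape[0]) if test[u].any())
hit_rate = hits / evaluable if evaluable else 0.0
print(f"hit rate@{k}: {hit_rate:.2f}")

# 6. Iteration: swap in a real model and re-run steps 4-5 until the
#    metric stops improving.
```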

Remove the 'How It Works' and 'When to Use This Skill' sections which explain Claude's own behavior back to it, and replace with a concise quick-start code block.

Dimension / Reasoning / Score

Conciseness

Extremely verbose with extensive filler content. Explains concepts Claude already knows (what collaborative filtering is, what data preprocessing means), includes generic boilerplate sections (Error Handling, Resources, Instructions) that add no value, and the 'How It Works' section describes Claude's own behavior back to it.

1 / 3

Actionability

No executable code, no concrete commands, no specific library usage examples. The 'Examples' section describes what the skill 'will do' in abstract terms rather than providing actual implementation code. The Instructions section is entirely generic ('Invoke this skill when trigger conditions are met').

1 / 3

Workflow Clarity

No clear multi-step workflow with validation checkpoints. The steps listed are abstract descriptions ('Generate code to load and preprocess data') rather than actionable sequences. No validation steps, no feedback loops, no concrete process to follow.

1 / 3

Progressive Disclosure

Monolithic wall of text with no references to external files. Multiple sections contain generic filler (Resources just says 'Project documentation', Output says 'structured output relevant to the task'). Content is poorly organized with redundant sections that could be consolidated or removed entirely.

1 / 3

Total: 4 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 9 / 11

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
