Skill description under review:

> Execute this skill enables AI assistant to analyze the sentiment of text data. it identifies the emotional tone expressed in text, classifying it as positive, negative, or neutral. use this skill when a user requests sentiment analysis, opinion mining, or emoti... Use when analyzing code or data. Trigger with phrases like 'analyze', 'review', or 'examine'.
Summary

- Overall score: 21%
- Evals: Pending (no eval scenarios have been run)
- Validation: Passed; no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/ai-ml/sentiment-analysis-tool/skills/analyzing-text-sentiment/SKILL.md`

Quality
Discovery: 42%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description attempts to define a sentiment analysis skill but is undermined by contradictory scope (sentiment analysis vs. generic code/data analysis), truncated text, and overly broad trigger terms. The first-person framing ('enables AI assistant') and the imperative 'Execute this skill' add unnecessary noise. The generic triggers would cause frequent false matches with other analytical skills.
Suggestions

- Remove the contradictory 'Use when analyzing code or data' clause and replace it with sentiment-specific triggers like 'Use when the user asks about sentiment, emotional tone, opinion polarity, or wants to classify text as positive/negative/neutral'.
- Fix the truncated text ('emoti...') and remove the imperative 'Execute this skill' preamble; use third-person voice like 'Analyzes the sentiment of text data...'.
- Replace the overly generic trigger phrases ('analyze', 'review', 'examine') with domain-specific terms like 'sentiment', 'opinion mining', 'emotional tone', 'positive or negative', 'text classification'.
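The first two suggestions can be combined into a sketch of a revised frontmatter description (field names and wording here are illustrative, assuming a standard SKILL.md frontmatter layout):

```yaml
---
name: analyzing-text-sentiment
description: >
  Analyzes the sentiment of text data, classifying it as positive, negative,
  or neutral. Use when the user asks about sentiment, emotional tone, opinion
  polarity, or opinion mining, or wants to classify text as
  positive/negative/neutral.
---
```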
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (sentiment analysis) and some actions (analyze sentiment, classify as positive/negative/neutral), but the description is muddled by contradictory claims about analyzing 'code or data' and truncated text ('emoti...'). | 2 / 3 |
| Completeness | It addresses 'what' (sentiment analysis, classifying emotional tone) and has a 'Use when' clause, but the 'when' is contradictory: 'Use when analyzing code or data' conflicts with the sentiment-specific purpose, undermining the explicit trigger guidance. | 2 / 3 |
| Trigger Term Quality | Includes some relevant terms like 'sentiment analysis', 'opinion mining', 'analyze', 'review', and 'examine', but the generic triggers ('analyze', 'review', 'examine') would match far too many unrelated requests, and the truncated 'emoti...' loses a potentially useful keyword. | 2 / 3 |
| Distinctiveness / Conflict Risk | The generic triggers 'analyze', 'review', 'examine' and the broad 'Use when analyzing code or data' would conflict with virtually any analysis-related skill, severely undermining distinctiveness despite the sentiment-specific content. | 1 / 3 |
| Total | | 7 / 12 Passed |
Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is a template-like placeholder with no actionable content. It explains what sentiment analysis is at a high level without providing any concrete implementation—no code, no specific libraries, no executable examples, and no real workflow. Nearly every section contains generic filler text that wastes tokens without adding value.
Suggestions

- Replace the abstract descriptions with concrete, executable code showing how to actually perform sentiment analysis (e.g., using a specific Python library like TextBlob or VADER, or a prompt-based approach with a specific output format).
- Remove or drastically condense the 'Overview', 'How It Works', 'When to Use', 'Best Practices', 'Integration', 'Prerequisites', and 'Resources' sections; Claude already understands these concepts and they add no actionable information.
- Provide a specific output schema (e.g., JSON with sentiment label and confidence score) and show a complete worked example with actual input text and expected output.
- Add concrete error handling guidance instead of generic placeholders (e.g., how to handle mixed-language text, very short inputs, or ambiguous sarcasm).
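As a concrete illustration of the first and third suggestions, here is a minimal sketch using a toy lexicon in place of a real library such as VADER or TextBlob (the word lists and confidence formula are illustrative assumptions, not any library's actual behavior):

```python
import json

# Toy word lists standing in for a real sentiment lexicon (illustrative only).
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def analyze_sentiment(text: str) -> dict:
    """Classify text as positive/negative/neutral with a naive confidence score."""
    if not text.strip():
        # Edge case: empty or whitespace-only input gets an explicit neutral result.
        return {"label": "neutral", "confidence": 0.0}
    words = [w.strip(".,!?").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos == neg:
        label = "neutral"
    elif pos > neg:
        label = "positive"
    else:
        label = "negative"
    # Naive confidence: net lexicon hits as a fraction of token count.
    confidence = abs(pos - neg) / max(len(words), 1)
    return {"label": label, "confidence": round(confidence, 2)}

print(json.dumps(analyze_sentiment("I love this product, it is great")))
```

A real implementation would swap the toy lexicon for `vaderSentiment`'s `SentimentIntensityAnalyzer` or TextBlob's `sentiment` property while keeping the same JSON output contract, which is the part the skill should pin down.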
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive explanation of concepts Claude already knows. The 'Overview', 'How It Works', 'When to Use', 'Best Practices', 'Integration', 'Prerequisites', 'Instructions', 'Error Handling', and 'Resources' sections are almost entirely filler that explains obvious things or provides no actionable detail. Nearly every section could be removed or condensed to 1-2 lines. | 1 / 3 |
| Actionability | No executable code, no concrete commands, no specific tools or libraries referenced, no actual sentiment analysis implementation. The examples describe what 'the skill will' do in vague terms without showing how. The 'Instructions' section is entirely generic ('Invoke this skill when trigger conditions are met') with zero specificity. | 1 / 3 |
| Workflow Clarity | The workflow steps are vague placeholders ('Process the text', 'Classify each review') with no concrete implementation details, no validation checkpoints, and no error recovery. The 'Instructions' section is a generic 4-step placeholder that could apply to literally any skill. | 1 / 3 |
| Progressive Disclosure | A monolithic wall of text with no references to external files and no bundle files to support it. The 'Resources' section lists 'Project documentation' and 'Related skills and commands' without any actual links or file paths. Content is poorly organized, with many redundant sections that could be consolidated. | 1 / 3 |
| Total | | 4 / 12 Passed |
Validation: 81% (9 / 11 checks passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
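Both warnings concern frontmatter hygiene. Since the report does not list the offending tool names or keys, this is a generic sketch of the fix: restrict `allowed-tools` to standard tool names and move custom keys under `metadata` (the specific values below are hypothetical):

```yaml
---
name: analyzing-text-sentiment
description: Analyzes the sentiment of text data...
allowed-tools:
  - Read   # standard tool names the validator recognizes
  - Bash
metadata:
  owner: ai-ml-team   # hypothetical custom key moved out of the top level
---
```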
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.