Skill description under review (quoted verbatim):

> Execute this skill enables AI assistant to perform natural language processing and text analysis using the nlp-text-analyzer plugin. it should be used when the user requests analysis of text, including sentiment analysis, keyword extraction, topic modeling, or ... Use when analyzing code or data. Trigger with phrases like 'analyze', 'review', or 'examine'.
Summary

- Score: 35
- Quality: 21% (Does it follow best practices?)
- Impact: Pending (no eval scenarios have been run)
- Passed: no known issues
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./plugins/ai-ml/nlp-text-analyzer/skills/analyzing-text-with-nlp/SKILL.md
```

Quality
Discovery: 42%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description suffers from internal contradictions (text analysis vs. code/data analysis), vague and overly broad trigger terms, and incomplete capability listing (trailing ellipsis). It also violates voice guidelines by using 'enables AI assistant' framing rather than third-person active voice. While it names some specific NLP tasks, the overall quality is undermined by scope confusion and generic triggers.
Suggestions

- Resolve the contradiction between NLP text analysis and 'analyzing code or data': pick the correct scope and describe it consistently.
- Replace generic triggers ('analyze', 'review', 'examine') with domain-specific terms like 'sentiment analysis', 'extract keywords', 'topic modeling', 'text mining', 'NLP'.
- Rewrite in third-person active voice (e.g., 'Performs sentiment analysis, extracts keywords, and identifies topics from text. Use when the user asks for text analysis, sentiment scoring, or keyword extraction.') and remove the ellipsis by completing the capability list.
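Applied together, these suggestions might yield a frontmatter description along the lines of the sketch below. The wording, trigger terms, and the `name` value are illustrative only, not the required text:

```yaml
# Illustrative SKILL.md frontmatter; description wording is a suggestion,
# not prescribed by the review.
name: analyzing-text-with-nlp
description: >-
  Performs sentiment analysis, extracts keywords, and identifies topics
  from natural-language text using the nlp-text-analyzer plugin. Use when
  the user asks for sentiment scoring, keyword extraction, topic modeling,
  or other NLP text mining, not for reviewing source code or tabular data.
```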
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names some specific actions like 'sentiment analysis, keyword extraction, topic modeling' but the description is padded with vague language ('natural language processing and text analysis'), uses an ellipsis suggesting incompleteness, and contradicts itself by mentioning both text analysis and code/data analysis. | 2 / 3 |
| Completeness | Has a 'what' (NLP text analysis tasks) and a 'when' clause ('Use when analyzing code or data'), but the 'when' is contradictory and vague: it says 'code or data' despite the skill being about text analysis. The ellipsis also signals incompleteness in the 'what'. | 2 / 3 |
| Trigger Term Quality | Includes some trigger terms like 'analyze', 'review', 'examine', 'sentiment analysis', 'keyword extraction', but these are overly generic and would match many unrelated skills. The terms 'analyze code or data' conflict with the NLP text focus. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly generic trigger terms like 'analyze', 'review', 'examine' would conflict with virtually any analysis-related skill. The scope confusion between text analysis and code/data analysis further increases conflict risk. | 1 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 0%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is almost entirely boilerplate with no actionable, concrete guidance. It never shows how to actually invoke the nlp-text-analyzer plugin, provides no code examples, no API syntax, and no real workflow steps. Every section reads as a generic template placeholder rather than a useful instruction set for Claude.
Suggestions

- Replace the abstract 'How It Works' and 'Instructions' sections with concrete, executable code showing how to invoke the nlp-text-analyzer plugin (e.g., actual function calls, CLI commands, or tool-use syntax with parameters).
- Add real input/output examples with actual data: show the exact plugin invocation and the structured JSON or text output Claude should expect for sentiment analysis, keyword extraction, etc.
- Remove sections that explain obvious concepts Claude already knows (Overview, When to Use, Best Practices about 'be specific', Integration generalities) to dramatically reduce token usage.
- Add explicit validation/error-handling steps with concrete error messages and recovery actions rather than generic placeholders like 'Prompts for correction'.
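To make the actionability suggestions concrete, a rewritten Instructions section could look something like the sketch below. The `nlp-text-analyzer` CLI subcommands, flags, and JSON fields are hypothetical placeholders, since the review does not show the plugin's real interface:

```markdown
## Instructions

1. Run the analyzer on the user's text (hypothetical CLI syntax):

       nlp-text-analyzer sentiment --input text.txt --format json

2. Expect JSON shaped like `{"sentiment": "positive", "score": 0.87}`.
   If the command exits non-zero, report the stderr message and ask the
   user whether to retry (e.g., with a different `--encoding`).

3. For keywords, run `nlp-text-analyzer keywords --input text.txt --top 10`
   and return the ranked list verbatim rather than paraphrasing it.
```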
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with extensive explanation of concepts Claude already knows. Sections like 'How It Works', 'When to Use This Skill', 'Best Practices', 'Integration', and 'Overview' are padded with obvious information that wastes tokens. The entire file could be reduced to a fraction of its size. | 1 / 3 |
| Actionability | No executable code, no concrete commands, no API calls, no actual plugin invocation syntax. The 'Instructions' section is entirely generic ('Invoke this skill when the trigger conditions are met'). Examples describe what the skill 'will do' rather than showing how to actually do it. There is zero copy-paste-ready guidance. | 1 / 3 |
| Workflow Clarity | The workflow steps are vague placeholders ('Invoke this skill when the trigger conditions are met', 'Provide necessary context and parameters', 'Review the generated output'). No concrete sequence of operations, no validation checkpoints, no error recovery loops. The 'How It Works' section is equally abstract. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files, no bundle structure, and no meaningful organization. Sections like 'Resources' reference 'Project documentation' and 'Related skills and commands' without any actual links or paths. Content that could be split out is all inline but also all vacuous. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
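The two warnings above are typically resolved in the frontmatter itself. A minimal sketch, assuming the flagged keys are custom fields that belong under `metadata` (the key names and tool names shown are hypothetical examples, not Tessl's canonical schema):

```yaml
# Before: custom top-level keys trigger frontmatter_unknown_keys.
# After: custom fields live under `metadata`, and allowed-tools lists
# only standard tool names (example values).
allowed-tools: Read, Bash
metadata:
  category: ai-ml            # formerly an unknown top-level key
  plugin: nlp-text-analyzer  # formerly an unknown top-level key
```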