This skill enables the AI assistant to perform natural language processing and text analysis using the nlp-text-analyzer plugin. It should be used when the user requests analysis of text, including sentiment analysis, keyword extraction, topic modeling, or ... Use when analyzing code or data. Trigger with phrases like 'analyze', 'review', or 'examine'.
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill analyzing-text-with-nlp85
Quality: 24% (does it follow best practices?)

Impact: 91%, 1.13x the average score across 12 eval scenarios
Optimize this skill with Tessl
npx tessl skill review --optimize ./plugins/ai-ml/nlp-text-analyzer/skills/analyzing-text-with-nlp/SKILL.md

Discovery
42%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description attempts to cover NLP capabilities but suffers from internal contradictions (text analysis vs code/data), incomplete information (trailing '...'), and overly generic trigger terms. The mismatch between stated capabilities and usage guidance would cause confusion in skill selection.
Suggestions
- Remove the contradiction by aligning the 'Use when' clause with the actual capabilities: focus either on text/NLP analysis or on code/data analysis, not both.
- Replace generic triggers ('analyze', 'review', 'examine') with NLP-specific terms such as 'sentiment', 'extract keywords', 'topic modeling', 'text analysis', or 'NLP'.
- Complete the capability list instead of using '...': explicitly state all supported analysis types to improve specificity and distinctiveness.
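Applying these suggestions, the frontmatter description might be revised along these lines (a hypothetical sketch: the field names follow the common SKILL.md frontmatter convention, and the exact wording is illustrative, not the skill's actual content):

```yaml
---
name: analyzing-text-with-nlp
description: >
  Performs NLP text analysis with the nlp-text-analyzer plugin: sentiment
  analysis, keyword extraction, and topic modeling on natural-language text.
  Use when the user asks for 'sentiment', 'extract keywords', 'topic modeling',
  'text analysis', or 'NLP'. Not for analyzing source code or tabular data.
---
```

Note how the revised version drops the generic triggers, resolves the text-vs-code contradiction, and enumerates the supported analysis types instead of trailing off.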
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (NLP/text analysis) and lists some actions (sentiment analysis, keyword extraction, topic modeling), but the trailing '...' suggests incompleteness, and the description confusingly mixes text analysis with code/data analysis. | 2 / 3 |
| Completeness | Has a 'Use when' clause and trigger phrases, but the guidance is contradictory: it describes text analysis capabilities, then says 'Use when analyzing code or data', which doesn't match. The incomplete '...' also weakens the 'what' portion. | 2 / 3 |
| Trigger Term Quality | Includes some natural trigger terms ('analyze', 'review', 'examine', 'sentiment analysis', 'keyword extraction'), but these are generic and could apply to many skills. Missing specific variations like 'NLP', 'text processing', 'extract keywords'. | 2 / 3 |
| Distinctiveness / Conflict Risk | High conflict risk due to generic triggers ('analyze', 'review', 'examine') that would match many skills. The confusion between text analysis and code/data analysis makes it unclear when this skill should be selected over other analysis tools. | 1 / 3 |
| Total | | 7 / 12 Passed |
Implementation
7%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill content is largely boilerplate with no actionable guidance. It explains concepts Claude already understands, provides no executable code or plugin invocation syntax, and uses generic placeholder sections. The examples describe outcomes without showing how to achieve them.
Suggestions
- Replace abstract descriptions with concrete plugin invocation syntax showing exact commands or API calls (e.g., `nlp-text-analyzer analyze --type sentiment "text here"`).
- Remove sections explaining what NLP, sentiment analysis, and keyword extraction are; Claude already knows these concepts.
- Provide actual output format examples (a JSON schema or sample responses) so Claude knows what to expect and how to parse results.
- Delete generic placeholder sections (Prerequisites, Instructions, Error Handling, Resources) that contain no skill-specific information.
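Concretely, the skill body could replace its abstract examples with an invocation-and-output pair like the following SKILL.md excerpt (hypothetical: the `nlp-text-analyzer` command syntax and the JSON fields shown are illustrative, not the plugin's documented interface):

```markdown
## Usage

Run sentiment analysis directly:

    nlp-text-analyzer analyze --type sentiment "The release was a huge success"

Expected output shape:

    {
      "type": "sentiment",
      "label": "positive",
      "score": 0.94
    }
```

Pairing each command with its expected output shape gives the agent both the exact syntax to emit and a contract for parsing the result, which addresses the Actionability and Workflow Clarity gaps scored below.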
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with unnecessary explanations of concepts Claude already knows (what NLP is, what sentiment analysis does). Multiple sections repeat the same information, and generic boilerplate sections like 'Prerequisites', 'Instructions', and 'Error Handling' add no value. | 1 / 3 |
| Actionability | No concrete code, commands, or executable guidance provided. Examples describe what 'the skill will do' abstractly rather than showing actual plugin invocation syntax, API calls, or expected output formats. | 1 / 3 |
| Workflow Clarity | The 'How It Works' section is vague and describes conceptual steps rather than actionable ones. No actual commands, validation steps, or error-recovery procedures are provided. The 'Instructions' section is entirely generic placeholder text. | 1 / 3 |
| Progressive Disclosure | Content is organized into sections with headers, but it is a monolithic document with no references to external files. The structure exists but contains too much inline content that could be condensed or split appropriately. | 2 / 3 |
| Total | | 5 / 12 Passed |
Validation
81%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
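The two warnings above can typically be cleared by restricting `allowed-tools` to standard tool names and moving unrecognized top-level keys under a `metadata` map (a sketch; the keys shown are hypothetical examples, not taken from this skill's actual frontmatter):

```yaml
---
name: analyzing-text-with-nlp
allowed-tools: Read, Grep, Bash   # standard tool names only
metadata:
  author: jeremylongshore         # formerly an unknown top-level key
  category: ai-ml
---
```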
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.