tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill analyzing-text-with-nlp

This skill enables AI assistant to perform natural language processing and text analysis using the nlp-text-analyzer plugin. it should be used when the user requests analysis of text, including sentiment analysis, keyword extraction, topic modeling, or ... Use when analyzing code or data. Trigger with phrases like 'analyze', 'review', or 'examine'.
Validation
Score: 81%

| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 13 / 16 Passed | |
Implementation
Score: 7%

This skill content is a template filled with generic placeholder text rather than actionable guidance. It lacks any concrete information about how to actually use the nlp-text-analyzer plugin: no API syntax, no code examples, no actual commands. The content explains concepts Claude already understands while failing to provide the specific technical details needed to execute the skill.
Suggestions
- Replace abstract descriptions with actual plugin invocation syntax and code examples showing how to call the nlp-text-analyzer plugin
- Remove generic sections like 'Best Practices', 'Prerequisites', and 'Error Handling' that contain only placeholder content
- Add concrete input/output examples showing actual API calls and response formats (e.g., JSON schema for sentiment analysis results); a hypothetical sketch follows this list
- Consolidate the content to under 30 lines, focusing only on: plugin syntax, supported analysis types, and one concrete example per analysis type
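To make the input/output suggestion concrete, here is a minimal sketch of the kind of worked example the skill file could include. The function name `analyze_sentiment` and the result fields (`label`, `score`, `keywords`) are assumptions for illustration only; the real nlp-text-analyzer plugin's invocation syntax and response schema are not documented in this review and would have to come from the plugin itself.

```python
# Hypothetical sketch only: the nlp-text-analyzer plugin's real API is not
# documented here, so the function name and result fields are placeholders.

def analyze_sentiment(text: str) -> dict:
    """Stand-in for a plugin call; returns a result in the shape the
    skill documentation could promise to users."""
    # A real skill would invoke the plugin here instead of returning a stub.
    return {
        "label": "positive",               # one of: positive / negative / neutral
        "score": 0.92,                     # confidence in the label
        "keywords": ["fast", "reliable"],  # optional supporting terms
    }

if __name__ == "__main__":
    result = analyze_sentiment("The new release is fast and reliable.")
    print(result["label"], result["score"])
```

A skill rewritten along these lines would pair one such example with each supported analysis type, keeping the whole file well under the suggested 30-line budget.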
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with unnecessary explanations of concepts Claude already knows (what NLP is, what sentiment analysis does). Filled with padding like 'This skill empowers Claude' and generic sections that add no value. | 1 / 3 |
| Actionability | No concrete code, commands, or executable guidance. Examples describe what 'the skill will' do abstractly rather than showing actual plugin invocation syntax, API calls, or expected output formats. | 1 / 3 |
| Workflow Clarity | The 'How It Works' and 'Instructions' sections are vague placeholders with no specific steps. No validation checkpoints, no actual plugin commands, and generic instructions like 'Invoke this skill when trigger conditions are met' provide no actionable workflow. | 1 / 3 |
| Progressive Disclosure | Content is organized into sections with headers, but it's a monolithic file with no references to external documentation. The structure exists but contains mostly filler content that could be drastically reduced. | 2 / 3 |
| Total | | 5 / 12 |
Activation
Score: 42%

This description attempts to cover NLP capabilities but suffers from internal contradictions (text analysis vs code/data analysis), incomplete specification (trailing '...'), and overly generic trigger terms. The first-person framing ('enables AI assistant') and inconsistent scope would make it difficult for Claude to reliably select this skill over others.
Suggestions
- Resolve the contradiction between 'text analysis/NLP' and 'analyzing code or data': pick one clear domain and stick to it
- Replace generic triggers ('analyze', 'review', 'examine') with specific NLP-related phrases like 'sentiment analysis', 'extract keywords', 'find topics in text', 'analyze tone' (see the sketch after this list)
- Complete the capability list instead of using '...' and ensure the 'Use when' clause matches the actual capabilities described
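Purely to make the suggested vocabulary concrete, the sketch below enumerates example trigger phrases and the analysis types they could map to. The phrases and type names are illustrative assumptions, not taken from the plugin, and Claude's actual skill selection is semantic rather than literal keyword matching.

```python
# Illustrative only: example trigger phrases paired with the analysis types a
# rewritten description could commit to. Neither the phrases nor the type
# names come from the nlp-text-analyzer plugin itself.
TRIGGER_PHRASES = {
    "sentiment analysis": "sentiment",
    "analyze tone": "sentiment",
    "extract keywords": "keywords",
    "find topics in text": "topics",
}

def match_analysis_type(user_request: str):
    """Return the analysis type whose trigger phrase appears in the request."""
    lowered = user_request.lower()
    for phrase, analysis_type in TRIGGER_PHRASES.items():
        if phrase in lowered:
            return analysis_type
    return None

print(match_analysis_type("Please extract keywords from this article"))  # keywords
```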
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (NLP/text analysis) and lists some actions (sentiment analysis, keyword extraction, topic modeling), but the trailing '...' suggests incompleteness and the description conflates text analysis with code/data analysis. | 2 / 3 |
| Completeness | Has a 'Use when' clause mentioning 'analyzing code or data' and trigger phrases, but this contradicts the stated purpose of text/NLP analysis. The 'what' and 'when' are present but inconsistent, and the '...' indicates incomplete specification. | 2 / 3 |
| Trigger Term Quality | Includes some natural terms like 'analyze', 'review', 'examine', and 'sentiment analysis', but these are generic and could apply to many skills. Missing specific variations users might say, like 'extract keywords', 'find topics', or 'text sentiment'. | 2 / 3 |
| Distinctiveness / Conflict Risk | High conflict risk due to generic triggers ('analyze', 'review', 'examine') that would match many skills. The confusion between 'text analysis' and 'code or data' analysis makes it unclear when this skill should be selected over others. | 1 / 3 |
| Total | | 7 / 12 |