Validate messaging consistency across website, GitHub repos, and local documentation, generating read-only discrepancy reports. Use when checking content alignment or finding mixed messaging. Trigger with phrases like "check consistency", "validate documentation", or "audit messaging".
Install with the Tessl CLI:
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill 000-jeremy-content-consistency-validator64
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that excels in completeness and trigger term quality with explicit 'Use when' and 'Trigger with phrases' clauses. The description clearly carves out a distinct niche for cross-platform messaging validation. Minor improvement could come from listing more specific actions beyond validation and report generation.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (messaging consistency) and mentions specific sources (website, GitHub repos, local documentation) and output type (read-only discrepancy reports), but doesn't list multiple concrete actions beyond 'validate' and 'generate reports'. | 2 / 3 |
| Completeness | Clearly answers both what (validate messaging consistency across sources, generate discrepancy reports) and when (explicit 'Use when' clause with triggers and 'Trigger with phrases' providing specific activation terms). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms: 'check consistency', 'validate documentation', 'audit messaging', plus domain terms like 'content alignment' and 'mixed messaging' that users would naturally say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive niche combining messaging consistency, cross-platform validation (website/GitHub/local docs), and read-only reporting. The specific trigger phrases and domain focus make conflicts with other skills unlikely. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation — 35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a high-level outline but lacks the concrete, actionable guidance needed for Claude to execute the task. The instructions describe WHAT to do but never HOW: there are no specific commands for fetching content, no code for comparison logic, and no concrete examples of discrepancy detection. The structure is reasonable, but the content is too abstract to be useful.
Suggestions
- Add concrete code examples for content extraction, e.g. specific WebFetch calls for website content, git commands for repo access, and file-reading patterns for local docs.
- Provide a specific example of what a discrepancy looks like and how to detect it programmatically (e.g. regex patterns for version numbers, string comparison for feature lists).
- Include a sample output report structure showing exactly what the Markdown report should contain, with placeholder values.
- Replace the vague Resources section with actual file references, or remove it entirely if no real resources exist.
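The suggestions above can be sketched together in a minimal, self-contained example. All source names, contents, and the version-regex heuristic are illustrative assumptions, not part of the skill itself; in the real workflow the strings would come from WebFetch results, git checkouts, and local file reads:

```python
import re

# Hypothetical source contents; the real skill would populate these from
# WebFetch (website), git clones (repos), and local documentation files.
sources = {
    "website": "MyTool v2.1 supports plugins and themes.",
    "readme": "MyTool v2.0 supports plugins.",
    "local_docs": "MyTool v2.1 supports plugins and themes.",
}

VERSION_RE = re.compile(r"v(\d+\.\d+)")  # illustrative discrepancy signal

def extract_version(text):
    """Pull the first version string out of a content blob, if any."""
    m = VERSION_RE.search(text)
    return m.group(1) if m else None

def find_discrepancies(sources):
    """Compare extracted versions across sources; report any mismatch."""
    versions = {name: extract_version(text) for name, text in sources.items()}
    if len(set(versions.values())) > 1:
        return [f"Version mismatch across sources: {versions}"]
    return []

def render_report(discrepancies):
    """Render a read-only Markdown discrepancy report."""
    lines = ["# Consistency Report", ""]
    if not discrepancies:
        lines.append("No discrepancies found.")
    else:
        lines.extend(f"- {d}" for d in discrepancies)
    return "\n".join(lines)

report = render_report(find_discrepancies(sources))
print(report)
```

A real implementation would layer more extractors (feature lists, product names) over the same compare-and-report loop, but even this sketch makes each step executable rather than abstract.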
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is relatively brief but includes unnecessary filler like 'This skill provides automated assistance for the described functionality', which adds no value. The Resources section lists vague concepts rather than actionable references. | 2 / 3 |
| Actionability | The instructions are entirely abstract: 'Identify and discover all content sources' and 'Compare content systematically' provide no concrete commands, code, or specific methods. There is no executable guidance on HOW to perform any of these steps. | 1 / 3 |
| Workflow Clarity | Steps are listed in sequence but lack validation checkpoints or feedback loops. For a task involving content comparison across multiple sources, there is no guidance on handling partial failures, verifying extracted content, or recovering from errors. | 2 / 3 |
| Progressive Disclosure | References to external files (errors.md, examples.md) are present and one level deep, but the main content is thin and the references feel like placeholders rather than well-organized supplementary material. The Resources section lists concepts without actual links. | 2 / 3 |
| Total | | 7 / 12 Passed |
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |
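Both warnings point at the SKILL.md frontmatter. A minimal sketch of what a cleaned-up frontmatter could look like, assuming the standard `name` / `description` / `allowed-tools` keys; the specific tool names and metadata fields shown are illustrative, not taken from this skill:

```yaml
---
name: content-consistency-validator
description: Validate messaging consistency across website, GitHub repos, and local documentation, generating read-only discrepancy reports.
allowed-tools: Read, Grep, Glob, WebFetch
metadata:
  # Unknown top-level keys (e.g. author, version) can be moved here
  # instead of sitting directly in the frontmatter.
  author: jeremylongshore
---
```

Restricting `allowed-tools` to well-known read-only tools also reinforces the skill's stated read-only contract.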
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.