Debug AI suggestion quality, context issues, and code generation problems in Cursor. Triggers on "debug cursor ai", "cursor suggestions wrong", "bad cursor completion", "cursor ai debug", "cursor hallucination".
Overall score: 85 (83%)
Evals: Pending — no eval scenarios have been run.
Issues: Passed — no known issues.
Quality
Discovery — 89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description with excellent trigger terms and clear distinctiveness for its niche of debugging Cursor AI issues. The 'when' guidance is explicit with natural trigger phrases. The main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., checking .cursorrules, analyzing context settings, reviewing indexing).
Suggestions
Expand the capability list with more concrete actions, e.g., 'Analyzes .cursorrules configuration, checks context window settings, diagnoses indexing issues, and reviews prompt patterns in Cursor.'
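The suggested expansion could look like the frontmatter sketch below. This is an illustrative assumption only — the field names follow common agent-skill conventions, and the description text combines the skill's published description with the suggested concrete actions; the skill's actual metadata may differ.

```yaml
# Hypothetical SKILL.md frontmatter illustrating the expanded description.
# The exact schema (name, description) is assumed, not taken from the skill.
name: cursor-ai-debug
description: >
  Debug AI suggestion quality, context issues, and code generation problems
  in Cursor. Analyzes .cursorrules configuration, checks context window
  settings, diagnoses indexing issues, and reviews prompt patterns.
  Triggers on "debug cursor ai", "cursor suggestions wrong",
  "bad cursor completion", "cursor ai debug", "cursor hallucination".
```

Keeping the trigger phrases verbatim while adding the concrete actions preserves the existing discovery strength and addresses the specificity gap.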
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Cursor AI) and some actions ('debug AI suggestion quality, context issues, and code generation problems'), but these are general categories rather than multiple concrete actions such as 'analyze .cursorrules files, check context window settings, validate prompt templates'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (debug AI suggestion quality, context issues, and code generation problems in Cursor) and 'when' (explicit triggers listed with a 'Triggers on' clause providing specific phrases). The trigger guidance is explicit and actionable. | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would actually say: 'debug cursor ai', 'cursor suggestions wrong', 'bad cursor completion', 'cursor hallucination'. These are realistic phrases a frustrated user would type when experiencing Cursor AI issues. | 3 / 3 |
| Distinctiveness / Conflict Risk | A very specific niche targeting Cursor IDE AI debugging. The trigger terms are highly distinctive ('cursor suggestions wrong', 'cursor hallucination') and unlikely to conflict with general coding, debugging, or other AI tool skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation — 77%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, highly actionable debugging guide with clear symptom-to-fix mappings and a systematic workflow with validation checkpoints. Its main weaknesses are moderate verbosity (some sections like Enterprise Considerations and the ASCII diagram could be trimmed) and the monolithic structure that could benefit from splitting into referenced sub-files. Overall it's a strong skill that would serve Claude well in diagnosing Cursor AI issues.
Suggestions
Trim or remove the 'Enterprise Considerations' section — it's generic advice that doesn't add actionable debugging value for Claude.
Consider splitting 'Logs and Diagnostics' and 'Reproducing Issues for Bug Reports' into a separate DIAGNOSTICS.md file, referenced from the main skill, to reduce the monolithic feel.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Mostly efficient, with good use of structured diagrams and concrete examples, but includes some unnecessary sections such as 'Enterprise Considerations' and explanatory framing that Claude doesn't need. The diagnostic framework percentages and the ASCII box diagram add visual weight without proportional value. | 2 / 3 |
| Actionability | Highly actionable throughout — provides specific file paths, exact commands (Cmd+Shift+P > 'Cursor: Resync Index'), concrete YAML rule examples, exact prompt patterns with @-references, and specific keyboard shortcuts. Each symptom section has a clear, executable fix. | 3 / 3 |
| Workflow Clarity | The 'Systematic Debug Workflow' is clearly sequenced in 5 explicit steps, each with validation logic (e.g., 'If this works, the issue was context pollution'). The symptom-based sections also follow clear diagnostic patterns, and the bug report reproduction steps include a verification checklist. | 3 / 3 |
| Progressive Disclosure | Well organized with clear headers and symptom-based sections, but it is one long monolithic file with no bundle files to offload detail into. The Enterprise Considerations, Logs and Diagnostics, and detailed symptom sections could be split into separate referenced files for better progressive disclosure. | 2 / 3 |
| Total | | 10 / 12 Passed |
Validation — 81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Skill structure validation — 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |