Skill description under review:

> Master Cursor AI Chat with @-mentions, inline edit, and conversation patterns. Triggers on "cursor chat", "cursor ai chat", "ask cursor", "cursor conversation", "chat with cursor", "Cmd+L", "inline edit".
Eval status: Pending — no eval scenarios have been run. Advisory: suggest reviewing before use.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./plugins/saas-packs/cursor-pack/skills/cursor-ai-chat/SKILL.md`

Quality
Discovery — 72%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has strong trigger term coverage and is clearly distinctive to Cursor AI chat functionality. However, it lacks specificity in what concrete actions or knowledge the skill provides, and the 'what it does' portion is vague ('Master Cursor AI Chat'). The completeness suffers from not having an explicit 'Use when...' clause describing user scenarios.
Suggestions
Replace 'Master Cursor AI Chat' with specific concrete actions, e.g., 'Guides composing effective prompts in Cursor AI Chat, using @-mentions to reference files/symbols, applying inline edits, and managing conversation context.'
Add an explicit 'Use when...' clause describing user scenarios, e.g., 'Use when the user asks about Cursor's chat interface, how to reference code in Cursor prompts, or how to use inline editing features.'
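Applying both suggestions, the revised description might look like the sketch below. This is illustrative only: the `name` value and the exact frontmatter layout are assumptions, not taken from the skill itself.

```yaml
# Hypothetical revised SKILL.md frontmatter (field values are illustrative).
name: cursor-ai-chat
description: >
  Guides composing effective prompts in Cursor AI Chat, using @-mentions to
  reference files and symbols, applying inline edits (Cmd+K), and managing
  conversation context. Use when the user asks about Cursor's chat interface
  (Cmd+L), how to reference code in Cursor prompts, or how to use inline
  editing features. Triggers on "cursor chat", "cursor ai chat", "ask cursor",
  "chat with cursor", "Cmd+L", "inline edit".
```

Note how the rewrite keeps the strong trigger terms while replacing the vague "Master" framing with concrete actions and an explicit "Use when..." clause.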
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Cursor AI Chat) and mentions some features like '@-mentions, inline edit, and conversation patterns', but 'Master' is vague and the actions aren't concrete enough—it doesn't specify what the skill actually does with these features (e.g., 'compose prompts', 'navigate chat history', 'apply inline edits'). | 2 / 3 |
| Completeness | The 'when' is partially addressed through the 'Triggers on' clause listing keywords, but there's no explicit 'Use when...' guidance explaining the scenarios or user needs. The 'what' is also weak—'Master Cursor AI Chat' doesn't clearly explain what the skill teaches or does. | 2 / 3 |
| Trigger Term Quality | Includes a strong set of natural trigger terms that users would actually say: 'cursor chat', 'cursor ai chat', 'ask cursor', 'cursor conversation', 'chat with cursor', 'Cmd+L', and 'inline edit'. These cover multiple natural phrasings and include the keyboard shortcut. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description is clearly scoped to Cursor AI's chat functionality specifically, with distinct trigger terms like 'Cmd+L', 'cursor chat', and 'inline edit' that are unlikely to conflict with other skills. | 3 / 3 |
| **Total** | | **10 / 12 (Passed)** |
Implementation — 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, well-structured skill that provides actionable guidance on Cursor AI Chat features with concrete examples and useful prompting patterns. Its main weaknesses are moderate verbosity (the ASCII diagram, enterprise section, and model recommendations add bulk without proportional value) and the lack of error recovery guidance. The @-symbol reference table and prompting patterns are particularly strong sections.
Suggestions
Move the full @-symbol reference table, model selection guide, and enterprise considerations into separate reference files, keeping only the most essential symbols and a link in the main skill.
Remove the ASCII chat panel diagram — Claude doesn't need a visual representation of a UI panel to understand how to instruct users about it.
Add brief guidance on what to do when chat responses degrade or inline edits are incorrect (e.g., 'If inline edit is wrong, Esc to reject, refine your prompt with more specific constraints, or switch to Chat for exploration first').
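The third suggestion could translate into a short recovery section appended to the skill. The wording and heading below are a sketch, expanded from the example given above; they are not taken from the skill itself.

```markdown
## When results go wrong

- Inline edit is incorrect: press Esc to reject, then refine the prompt with
  more specific constraints, or switch to Chat (Cmd+L) to explore the problem
  before editing.
- Chat responses degrade over a long thread: start a new conversation and
  re-attach only the relevant @-mentions instead of carrying the full history.
- A multi-file change went wrong: reject the pending diff rather than
  accepting and reverting, then re-scope the prompt to one file at a time.
```

Keeping this section brief preserves the conciseness score while closing the error-recovery gap flagged under Workflow Clarity.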
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient but includes some unnecessary content like the ASCII diagram of the chat panel, the enterprise considerations section, and the model recommendation table with specific model names that may become outdated. Some sections like 'Multi-Turn Conversation Management' explain things Claude would naturally understand. | 2 / 3 |
| Actionability | The skill provides concrete keyboard shortcuts, specific @-symbol references with examples, actionable prompting patterns with realistic code examples, and clear step-by-step instructions for adding custom docs. The guidance is specific and immediately usable. | 3 / 3 |
| Workflow Clarity | The prompting patterns and feature comparison table provide good guidance, and the inline edit workflow (select → Cmd+K → type → accept/reject) is clear. However, there are no validation checkpoints or feedback loops for when things go wrong (e.g., what to do if inline edit produces incorrect code, how to recover from bad multi-file changes). | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and tables, and includes external resource links at the bottom. However, the skill is quite long and some sections (like the full @-symbol reference table, enterprise considerations, and model selection guide) could be split into separate reference files to keep the main skill leaner. | 2 / 3 |
| **Total** | | **9 / 12 (Passed)** |
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | | **9 / 11 (Passed)** |