```shell
tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill cursor-ai-chat
```

Manage master Cursor AI chat interface for code assistance. Triggers on "cursor chat", "cursor ai chat", "ask cursor", "cursor conversation", "chat with cursor". Use when working with cursor ai chat functionality. Trigger with phrases like "cursor ai chat", "cursor chat", "cursor".
Validation

81%

| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 13 / 16 Passed | |
Implementation

35%

This skill provides a high-level overview of Cursor AI chat but lacks the concrete, actionable guidance needed for effective use. The instructions are too abstract: they describe what to do conceptually but don't show how to do it with specific examples, actual @-mention syntax, or real prompting patterns. The content would benefit significantly from concrete examples and executable guidance.
Suggestions
- Add concrete examples of effective prompts with actual @-mention syntax (e.g., '@filename.py explain this function' or '@docs/*.md summarize the API')
- Include specific examples of good vs bad questions to illustrate 'Ask specific, clear questions'
- Show actual code snippets demonstrating the workflow - what selecting code looks like, how to reference it in chat
- Replace placeholder paths ({baseDir}) with actual relative paths to referenced files
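As a sketch of what the first suggestion could look like inside the skill's documentation (the file names and prompts below are hypothetical illustrations, not taken from the skill itself):

```text
# Good: scoped to a file, asks one specific question
@auth.py Explain what the retry loop in login() does and when it gives up.

# Good: scoped to docs, asks for a bounded summary
@docs/*.md Summarize the public API endpoints these files describe.

# Bad: no context attached, question too broad
Why is my code broken?
```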
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is relatively brief but includes some unnecessary padding, like the Overview section explaining what the skill does, and the Prerequisites section lists things Claude would already understand (like 'Basic familiarity with AI prompting'). | 2 / 3 |
| Actionability | Instructions are vague and abstract ('Ask specific, clear questions', 'Use @-mentions to add file context') without concrete examples of actual prompts, @-mention syntax, or specific commands. No executable guidance is provided. | 1 / 3 |
| Workflow Clarity | Steps are listed in sequence but lack specificity and validation checkpoints. The workflow is superficial: it doesn't explain what makes a question 'specific' or how to effectively 'review and apply' suggestions. | 2 / 3 |
| Progressive Disclosure | References to external files (errors.md, examples.md) are present and one level deep, but the main content is thin and the references use placeholder syntax ({baseDir}) rather than actual paths. The Output section lists categories without explaining when each applies. | 2 / 3 |
| Total | 7 / 12 Passed | |
Activation

55%

This description has strong trigger term coverage for Cursor-related queries but fails to explain what the skill actually does. The phrase 'Manage master Cursor AI chat interface for code assistance' is vague and doesn't describe concrete actions or capabilities, making it difficult for Claude to know what this skill can accomplish versus other coding or Cursor skills.
Suggestions
- Replace 'Manage master Cursor AI chat interface for code assistance' with specific actions like 'Configure chat settings, manage conversation history, customize AI responses, set up context rules'
- Clarify the 'Use when' clause to specify scenarios beyond just 'cursor ai chat functionality' - e.g., 'Use when configuring Cursor chat behavior, managing chat history, or customizing how Cursor AI responds to queries'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'Manage master Cursor AI chat interface for code assistance' without listing any concrete actions. It doesn't specify what managing entails or what code assistance capabilities are provided. | 1 / 3 |
| Completeness | Has a 'Use when' clause mentioning 'cursor ai chat functionality', but the 'what' portion is extremely weak: 'Manage master Cursor AI chat interface' doesn't explain what the skill actually does or what capabilities it provides. | 2 / 3 |
| Trigger Term Quality | Good coverage of natural trigger terms including 'cursor chat', 'cursor ai chat', 'ask cursor', 'cursor conversation', 'chat with cursor', and the shorter 'cursor'. These are terms users would naturally say. | 3 / 3 |
| Distinctiveness Conflict Risk | The Cursor-specific triggers help distinguish it, but 'code assistance' is generic and could overlap with other coding skills. The vague capability description makes it unclear when this should be chosen over other Cursor-related or code assistance skills. | 2 / 3 |
| Total | 8 / 12 Passed | |
Reviewed
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.