Create effective custom prompts for Cursor AI. Triggers on "cursor prompts", "prompt engineering cursor", "better cursor prompts", "cursor instructions". Use when working with cursor custom prompts functionality. Trigger with phrases like "cursor custom prompts", "cursor prompts", "cursor".
Install with Tessl CLI
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill cursor-custom-prompts
Overall score
61%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
82%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has strong trigger term coverage and completeness with explicit 'Use when' guidance. However, it lacks specificity in describing concrete actions beyond 'create effective custom prompts' and has moderate conflict risk due to the overly broad 'cursor' trigger term.
Suggestions
Add more specific concrete actions like 'write system instructions, optimize existing prompts, structure rules files, configure .cursorrules'
Consider removing or qualifying the bare 'cursor' trigger to reduce false positive matches with unrelated cursor/pointer tasks
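The two suggestions above could be combined into a tightened description. A hypothetical sketch of revised frontmatter (field names and wording are illustrative, not the skill's actual file):

```yaml
---
name: cursor-custom-prompts
description: >
  Write, optimize, and structure custom prompts for Cursor AI: draft system
  instructions, configure .cursorrules files, and refine existing prompts.
  Use when doing prompt engineering for Cursor. Triggers: "cursor prompts",
  "cursor custom prompts", "prompt engineering cursor", "cursor instructions",
  ".cursorrules".
---
```

Note that the bare 'cursor' trigger is dropped here, trading a little recall for far fewer false positives on unrelated cursor/pointer tasks.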
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Cursor AI prompts) and one action ('Create effective custom prompts'), but lacks comprehensive concrete actions like 'write system instructions', 'optimize existing prompts', or 'structure prompt templates'. | 2 / 3 |
| Completeness | Explicitly answers both what ('Create effective custom prompts for Cursor AI') and when ('Use when working with cursor custom prompts functionality. Trigger with phrases like...'). Has a clear 'Use when' clause with explicit triggers. | 3 / 3 |
| Trigger Term Quality | Good coverage of natural terms users would say: 'cursor prompts', 'prompt engineering cursor', 'better cursor prompts', 'cursor instructions', 'cursor custom prompts', and the simple 'cursor'. These are realistic phrases users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | While 'Cursor AI' is specific, the broad trigger 'cursor' alone could conflict with other skills. The term 'prompt engineering' might also overlap with general prompt-writing skills. The niche is reasonably clear but has some overlap risk. | 2 / 3 |
| Total | 10 / 12 | Passed |
Implementation
35%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is too abstract and lacks the concrete, actionable guidance needed to be useful. It describes what good prompts should have without showing any actual prompt examples or templates. The heavy reliance on external files for 'detailed examples' leaves the main skill body as essentially a table of contents with vague instructions.
Suggestions
Add 2-3 concrete prompt templates showing the 'context, task, constraints' structure with actual Cursor-specific examples
Include a before/after example showing a vague prompt transformed into an effective one
Add specific syntax for @-mentions and .cursorrules format rather than just mentioning them
Define what 'effective' means with measurable criteria for the 'Refine based on output quality' step
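As an illustration of the first two suggestions, here is a hypothetical before/after pair using the 'context, task, constraints' structure. The file paths, component names, and libraries are invented for the example, not taken from the skill under review:

```markdown
<!-- Before: vague prompt -->
Make the login page better.

<!-- After: context, task, constraints -->
Context: @app/login/page.tsx uses React Hook Form with our shared Zod schemas.
Task: Add inline validation errors for the email and password fields.
Constraints: Keep the existing form layout, reuse the ErrorText component,
and do not add new dependencies.
```

Embedding two or three templates like this in the skill body would address the Actionability score directly, since the agent then has concrete prompts to imitate rather than abstract guidance.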
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The overview section explains what the skill does, which Claude can infer from context. The instructions are reasonably brief but could be tighter; phrases like 'prompt engineering fundamentals' and 'best practices for consistent, high-quality AI responses' add little value. | 2 / 3 |
| Actionability | Instructions are vague and abstract ('Structure prompt with context, task, constraints', 'Include specific requirements'). No concrete examples of actual prompts, no executable templates, no specific syntax or formats shown. Describes rather than instructs. | 1 / 3 |
| Workflow Clarity | Steps are listed in a logical sequence (structure -> include -> reference -> iterate -> store -> refine) but lack validation checkpoints. No guidance on how to evaluate whether a prompt is 'effective' or when iteration is complete. | 2 / 3 |
| Progressive Disclosure | References external files for errors and examples (good), but the main content is too thin: it offloads all concrete guidance to external files while keeping only abstract instructions inline. The skill body itself provides almost no actionable content. | 2 / 3 |
| Total | 7 / 12 | Passed |
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 13 / 16 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.