Create effective custom prompts for Cursor AI using project rules, prompt engineering patterns, and reusable templates. Triggers on "cursor prompts", "prompt engineering cursor", "better cursor prompts", "cursor instructions", "cursor prompt templates".
Score: 80
Quality: 77% (Does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Status: Passed (no known issues)
Optimize this skill with Tessl:

```sh
npx tessl skill review --optimize ./plugins/saas-packs/cursor-pack/skills/cursor-custom-prompts/SKILL.md
```

Quality
Discovery
89%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with explicit trigger terms and a clear 'when' clause, making it strong on completeness and distinctiveness. The main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., generating .cursorrules files, structuring system prompts, creating role-based templates). The trigger terms are well-chosen and cover natural user language variations.
Suggestions
Add more specific concrete actions to the 'what' portion, e.g., 'Generates .cursorrules files, structures system prompts, creates role-based instruction templates for Cursor AI.'
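To ground that suggestion, here is a minimal sketch of the kind of project rules file such a skill might generate. Cursor reads project rules from `.cursor/rules/*.mdc` files with YAML frontmatter; the rule name, globs, and conventions below are invented for illustration, not taken from the skill itself.

```markdown
---
description: TypeScript style conventions for this repo (hypothetical example)
globs: ["src/**/*.ts"]
alwaysApply: false
---

- Prefer named exports over default exports.
- Use explicit return types on all exported functions.
- Avoid `any`; use `unknown` plus type narrowing instead.
```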
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Cursor AI prompts), mentions actions like 'create effective custom prompts,' and references 'project rules, prompt engineering patterns, and reusable templates,' but doesn't list multiple concrete, distinct actions (e.g., what kinds of templates, or how rules are structured). | 2 / 3 |
| Completeness | Clearly answers both 'what' (create effective custom prompts for Cursor AI using project rules, prompt engineering patterns, and reusable templates) and 'when' (an explicit 'Triggers on' clause with specific trigger phrases). | 3 / 3 |
| Trigger Term Quality | Includes a good set of natural trigger terms users would actually say: 'cursor prompts', 'prompt engineering cursor', 'better cursor prompts', 'cursor instructions', 'cursor prompt templates'. These cover common variations of how users phrase requests. | 3 / 3 |
| Distinctiveness / Conflict Risk | Targets a very specific niche, Cursor AI prompt creation, with distinct trigger terms that are unlikely to conflict with general prompt engineering or other IDE-related skills. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
64%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides highly actionable, concrete prompt templates and Cursor-specific configuration examples that are immediately usable. Its main weaknesses are moderate verbosity (some sections explain things Claude already knows, like basic prompt engineering concepts), lack of validation/feedback loops for iterating on prompt quality, and a monolithic structure that could benefit from splitting templates into a separate reference file.
Suggestions
Remove or significantly trim the 'Prompt Anatomy' section and 'Enterprise Considerations'—Claude already understands prompt structure and these generic tips add little value.
Add a brief validation workflow: how to evaluate prompt output quality, when to iterate, and what signals indicate a prompt needs refinement before storing it as a project rule (one possible shape is sketched below).
Move the template library into a separate TEMPLATES.md file and reference it from the main skill to improve progressive disclosure and reduce the main file's length (see the reference pattern below).
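For the validation-workflow suggestion, one possible shape is a short checklist appended to SKILL.md. The step wording, thresholds, and signals are assumptions, not content from the skill under review.

```markdown
## Validating a prompt before saving it as a rule

1. Run the prompt on one representative task and compare the output
   against your acceptance criteria.
2. If a constraint is violated, tighten only the section that states
   that constraint, then re-run.
3. Signals that a prompt needs refinement: ignored instructions,
   invented APIs, or drift in the output format.
4. Promote the prompt to `.cursor/rules/` only after two consecutive
   clean runs.
```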
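The progressive-disclosure split might then look like the following pointer in SKILL.md; the file name TEMPLATES.md comes from the suggestion above, while the section name and wording are hypothetical.

```markdown
## Prompt templates

The full template library lives in [TEMPLATES.md](./TEMPLATES.md).
Load it only when a ready-made template is requested; the patterns
in this file cover the common cases inline.
```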
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably efficient but includes sections that are unnecessary or overly verbose for Claude: the 'Enterprise Considerations' section is generic advice, the anti-patterns table restates common sense, and the 'Prompt Anatomy' section explains concepts Claude already understands well. The templates themselves are useful but collectively take up significant space. | 2 / 3 |
| Actionability | The skill provides fully concrete, copy-paste-ready prompt templates with specific examples, complete .cursor/rules YAML files, and detailed patterns like chain-of-thought and few-shot examples (a minimal few-shot sketch follows this table). Every section gives specific, usable content rather than abstract descriptions. | 3 / 3 |
| Workflow Clarity | The iterative refinement section shows a clear multi-step sequence, and the prompt anatomy provides a clear structure. However, there's no validation or feedback loop: no guidance on how to evaluate whether a prompt worked, how to iterate when output is poor, or checkpoints for verifying prompt effectiveness before storing prompts as project rules. | 2 / 3 |
| Progressive Disclosure | The content is well organized with clear headers and sections, but it's a long monolithic document (~180 lines of substantive content) whose templates could be split into a separate reference file. The external resource links at the end are helpful, but the inline content would benefit from a clearer separation between quick-start and reference material. | 2 / 3 |
| Total | | 9 / 12 Passed |
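As a reference for the few-shot pattern credited in the Actionability row, here is a minimal sketch. The task (commit messages) and both examples are invented for illustration and do not come from the skill's template library.

```markdown
When writing commit messages, follow these examples:

Input: fixed the login bug
Output: fix(auth): prevent session token from expiring during login

Input: added dark mode
Output: feat(ui): add dark mode toggle to settings panel

Apply the same conventional-commit format to the user's change description.
```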
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |