
cursor-tab-completion

Master Cursor Tab autocomplete, ghost text, and AI code suggestions. Triggers on "cursor completion", "cursor tab", "cursor suggestions", "cursor autocomplete", "cursor ghost text", "cursor copilot".

Score: 63

Quality: 56%
Does it follow best practices?

Impact: Pending
No eval scenarios have been run

Security (by Snyk): Passed
No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/saas-packs/cursor-pack/skills/cursor-tab-completion/SKILL.md

Quality

Discovery: 62%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has strong trigger terms and clear distinctiveness due to its focus on Cursor editor's specific features. However, it lacks specificity about what concrete actions the skill enables — 'Master' is vague and doesn't tell Claude what it should actually do with this skill (teach, configure, troubleshoot, etc.). The 'what' component needs significant improvement.

Suggestions

Replace 'Master' with specific concrete actions, e.g., 'Explains how to configure and optimize Cursor Tab autocomplete, troubleshoot ghost text issues, and customize AI code suggestion settings.'

Expand the 'Triggers on' clause into a proper 'Use when...' sentence that describes scenarios, e.g., 'Use when the user asks about configuring, troubleshooting, or understanding Cursor's autocomplete, tab completion, or ghost text features.'
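Combining both suggestions, the revised frontmatter might look like the sketch below. This is illustrative only: the `name` key and YAML layout are assumptions about the skill file, not taken from the repository.

```yaml
---
name: cursor-tab-completion
description: >
  Explains how to configure and optimize Cursor Tab autocomplete,
  troubleshoot ghost text issues, and customize AI code suggestion
  settings. Use when the user asks about configuring, troubleshooting,
  or understanding Cursor's autocomplete, tab completion, or ghost text
  features ("cursor completion", "cursor tab", "cursor suggestions",
  "cursor autocomplete", "cursor ghost text", "cursor copilot").
---
```

Note how the description leads with concrete actions (explain, configure, troubleshoot, customize) and folds the trigger terms into a "Use when..." sentence rather than a bare keyword list.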

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description says 'Master Cursor Tab autocomplete, ghost text, and AI code suggestions', which names features but doesn't describe concrete actions Claude would perform. 'Master' is vague — it doesn't say what Claude actually does (e.g., configure, troubleshoot, explain, optimize). | 1 / 3 |
| Completeness | The 'when' is partially addressed via the 'Triggers on' clause listing trigger terms, but the 'what' is weak — it doesn't clearly explain what the skill does beyond vaguely telling Claude to 'Master' these features. The trigger clause partially compensates, but the what-component is insufficient for a score of 3. | 2 / 3 |
| Trigger Term Quality | Includes a good set of natural trigger terms: 'cursor completion', 'cursor tab', 'cursor suggestions', 'cursor autocomplete', 'cursor ghost text', 'cursor copilot'. These are terms users would naturally use when asking about this topic. | 3 / 3 |
| Distinctiveness Conflict Risk | The skill is clearly scoped to Cursor editor's specific autocomplete/tab/ghost text features, a distinct niche unlikely to conflict with other skills. The trigger terms are all prefixed with 'cursor', making them highly specific. | 3 / 3 |
| Total | | 9 / 12 |

Passed

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is a well-organized reference guide for Cursor Tab completion with good use of tables and code examples. Its main weaknesses are verbosity (explaining concepts Claude likely already knows, like what ghost text is), a somewhat descriptive rather than instructional tone, and inclusion of sections like Enterprise Considerations that add bulk without clear actionability. It reads more like documentation than a skill that teaches Claude how to perform a specific task.

Suggestions

Trim sections that explain concepts Claude already knows (e.g., what ghost text is, how Tab predictions work at a high level) and focus on actionable guidance like specific settings, conflict resolution steps, and context optimization techniques.

Remove or significantly condense the Enterprise Considerations and Measuring Tab Effectiveness sections — these are informational rather than actionable and consume tokens without teaching Claude how to do something.

Consider splitting detailed reference content (settings table, comparison table, conflict resolution) into a separate reference file to keep the main SKILL.md as a concise quick-start guide.

Reframe the content from 'here is how Tab works' to 'here is what to do when a user asks about Tab' — making it more instructional and less encyclopedic.
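The split suggested above might look like the layout below. The file and directory names are hypothetical, chosen only to show the shape of a progressively disclosed skill, not taken from the repository.

```
cursor-tab-completion/
├── SKILL.md              # concise, instructional quick-start:
│                         # what to do when a user asks about Tab
└── references/
    └── tab-reference.md  # settings table, Tab vs. Copilot comparison,
                          # conflict-resolution steps, external links
```

SKILL.md would then link to `references/tab-reference.md` for the detailed material, so the main file stays short while the reference content remains available on demand.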

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill includes some unnecessary explanations (e.g., 'Tab gets better with usage', enterprise considerations, explaining what ghost text looks like) and could be tightened. However, it's not egregiously verbose — most sections contain useful information, just with some padding that Claude wouldn't need. | 2 / 3 |
| Actionability | The skill provides concrete key bindings, settings paths, and code examples showing good vs. bad patterns, but it's primarily informational/descriptive rather than instructional. There are no executable commands or scripts — it describes a UI feature rather than providing copy-paste-ready workflows Claude would execute. | 2 / 3 |
| Workflow Clarity | The basic flow (type → ghost text appears → accept/dismiss) is clear, and conflict resolution has numbered steps. However, there are no validation checkpoints or feedback loops. The 'Tips for Better Suggestions' section is more advisory than procedural. For a feature-explanation skill this is adequate but not exemplary. | 2 / 3 |
| Progressive Disclosure | The content is well-structured with clear headers and tables, but it's a monolithic file with no bundle files to reference. Several sections (Enterprise Considerations, Measuring Tab Effectiveness) could be split out. The external resource links at the end are helpful, but the inline content is longer than it needs to be for a single SKILL.md. | 2 / 3 |
| Total | | 8 / 12 |

Passed

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing them or moving them to metadata | Warning |
| Total | | 9 / 11 |

Passed
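Per the frontmatter_unknown_keys warning, unrecognized top-level keys can typically be nested under a metadata block. A minimal sketch, assuming the skill had a top-level key the validator does not recognize (the key name `category` here is hypothetical):

```yaml
---
name: cursor-tab-completion
description: ...
# before: an unrecognized top-level key
# category: editor-features
# after: nested under metadata, where extra keys are allowed
metadata:
  category: editor-features
---
```

The allowed_tools_field warning is resolved separately, by checking each entry in 'allowed-tools' against the tool names the spec recognizes.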

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

