Suggest relevant GitHub Copilot prompt files from the awesome-copilot repository based on current repository context and chat history, avoiding duplicates with existing prompts in this repository, and identifying outdated prompts that need updates.
Install with Tessl CLI
```shell
npx tessl i github:github/awesome-copilot --skill suggest-awesome-github-copilot-prompts
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

```shell
npx tessl skill review --optimize ./path/to/skill
```
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description effectively communicates specific capabilities around GitHub Copilot prompt management and is highly distinctive. However, it lacks explicit trigger guidance ('Use when...') that would help Claude know when to select this skill, and it could benefit from more natural user-facing keywords.
Suggestions

- Add a 'Use when...' clause with trigger scenarios, e.g. 'Use when the user asks for Copilot prompt recommendations, wants to find new prompts, or needs to check if existing prompts are outdated.'
- Include natural trigger terms users might say: 'copilot prompts', 'find prompts', 'prompt suggestions', 'update prompts', '.prompt.md'.
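As an illustration of both suggestions combined, a revised description might read as follows. This is a sketch only; the front-matter field name and layout are assumed from common skill-file conventions, not taken from this skill:

```yaml
---
description: >-
  Suggest relevant GitHub Copilot prompt files from the awesome-copilot
  repository, avoiding duplicates with existing prompts and identifying
  outdated prompts that need updates. Use when the user asks for Copilot
  prompt recommendations, wants to find prompts or prompt suggestions,
  or needs to check whether existing .prompt.md files are outdated.
---
```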
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Suggest relevant GitHub Copilot prompt files', 'avoiding duplicates with existing prompts', and 'identifying outdated prompts that need updates'. These are clear, actionable capabilities. | 3 / 3 |
| Completeness | Clearly describes WHAT the skill does (suggest prompts, avoid duplicates, identify outdated ones), but lacks an explicit 'Use when...' clause or trigger guidance for WHEN Claude should select this skill. | 2 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'GitHub Copilot', 'prompt files', and 'awesome-copilot repository', but is missing common user variations like 'copilot prompts', 'find prompts', 'prompt suggestions', or '.prompt.md files'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with specific references to the 'awesome-copilot repository', 'GitHub Copilot prompt files', and the unique combination of suggesting, deduplicating, and updating prompts. Unlikely to conflict with other skills. | 3 / 3 |
| Total | | 10 / 12 — Passed |
Implementation — 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill provides a well-structured workflow for suggesting GitHub Copilot prompts, with strong sequencing and appropriate safety gates before modifications. However, it explains the version-comparison process redundantly and lacks concrete, executable examples for the tool invocations it references. Actionability could be improved with specific code snippets or command examples.
Suggestions

- Add concrete examples of #fetch tool invocations with actual URLs and expected response handling.
- Consolidate the duplicate version-comparison explanations (steps 4-5 and the dedicated 'Version Comparison Process' section) into a single, authoritative section.
- Include an example of the YAML front matter structure expected in local prompt files to make the 'Extract Descriptions' step more actionable.
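For the last suggestion, the 'Extract Descriptions' step could show the front-matter shape it expects. The fragment below is a sketch assuming the common `.prompt.md` convention (a `description` field plus an optional `mode`); the skill itself does not confirm these exact field names:

```yaml
---
mode: agent
description: Review the current diff for bugs and style issues.
---
```

With a sample like this in the skill body, an agent knows to read the `description` value rather than the prompt body when building its comparison table.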
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Reasonably efficient, but includes some redundancy (e.g., the version-comparison process is explained twice: once in the main process and again in a dedicated section). Some sections, like 'Context Analysis Criteria', add moderate value but could be tighter. | 2 / 3 |
| Actionability | Provides a clear step-by-step process and specific tool references (#fetch, #todos, githubRepo), but lacks executable code examples. The URL patterns and table format are concrete, but the actual implementation details are left abstract. | 2 / 3 |
| Workflow Clarity | Excellent multi-step workflow with clear sequencing (12 numbered steps), an explicit validation checkpoint (step 10), and a critical 'AWAIT' gate before destructive operations (step 11). The process includes feedback loops for version comparison and clear decision points. | 3 / 3 |
| Progressive Disclosure | Content is well organized with clear sections and headers, but everything is in a single file with no references to external documentation. The skill is moderately long (~100 lines), and some sections (Local Prompts Discovery, Version Comparison) could potentially be separate reference files. | 2 / 3 |
| Total | | 9 / 12 — Passed |
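The dedup and version-comparison behavior the review describes can be sketched in a few lines. This is a hypothetical illustration, not code from the skill: `classify_prompt` is an invented helper, and it assumes prompts are matched by filename and compared by body text:

```python
def classify_prompt(remote_name: str, remote_body: str,
                    local_prompts: dict[str, str]) -> str:
    """Classify a remote awesome-copilot prompt against the local collection.

    local_prompts maps prompt filename -> file contents.
    Returns 'new', 'up-to-date', or 'outdated'.
    """
    local_body = local_prompts.get(remote_name)
    if local_body is None:
        return "new"            # no local counterpart: candidate to suggest
    if local_body.strip() == remote_body.strip():
        return "up-to-date"     # identical content: skip as a duplicate
    return "outdated"           # local copy diverges: flag for update


local = {"code-review.prompt.md": "Review the diff for bugs."}
print(classify_prompt("code-review.prompt.md", "Review the diff for bugs.", local))
# -> up-to-date
print(classify_prompt("write-tests.prompt.md", "Generate unit tests.", local))
# -> new
```

A real implementation would fetch `remote_body` with the #fetch tool and might compare descriptions or timestamps instead of whole bodies, but the three-way classification is the core of the workflow.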
Validation — 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.