Technology-agnostic blueprint generator for creating comprehensive copilot-instructions.md files. It guides GitHub Copilot to produce code consistent with project standards, architecture patterns, and exact technology versions by analyzing existing codebase patterns rather than making assumptions.
- Quality: 33% (Does it follow best practices?)
- Impact: 91% (1.40x average score across 3 eval scenarios)
- Status: Passed, no known issues
Optimize this skill with Tessl:
npx tessl skill review --optimize ./skills/copilot-instructions-blueprint-generator/SKILL.md

Quality
Discovery
40% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear and distinctive niche (GitHub Copilot instructions generation) but suffers from a lack of explicit trigger guidance ('Use when...') and relies on abstract qualifiers rather than concrete action lists. It reads more like a marketing tagline than a functional skill selector, which would make it harder for Claude to reliably choose this skill from a large pool.
Suggestions
Add an explicit 'Use when...' clause with trigger scenarios, e.g., 'Use when the user asks to create or update a copilot-instructions.md file, configure GitHub Copilot for a project, or set up AI coding assistant guidelines.'
Replace abstract qualifiers ('comprehensive', 'technology-agnostic') with concrete actions, e.g., 'Analyzes existing codebase to detect frameworks, versions, and patterns; generates copilot-instructions.md with coding conventions, architecture rules, and dependency constraints.'
Include common user-facing trigger terms and file/path references like '.github/copilot-instructions.md', 'Copilot config', 'Copilot setup', or 'AI assistant project rules'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (copilot-instructions.md generation) and some actions (analyzing existing codebase patterns, guiding GitHub Copilot), but the description is heavy on abstract qualifiers ('comprehensive', 'technology-agnostic', 'consistent with project standards') rather than listing multiple concrete discrete actions. | 2 / 3 |
| Completeness | Describes what it does (generates copilot-instructions.md files) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and the 'when' is not even implied clearly, warranting a 1. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'copilot-instructions.md', 'GitHub Copilot', 'codebase patterns', and 'architecture patterns', but misses common user phrasings like 'Copilot setup', 'Copilot config', 'code assistant instructions', or '.github' directory references. | 2 / 3 |
| Distinctiveness / Conflict Risk | The skill targets a very specific niche (generating copilot-instructions.md files for GitHub Copilot), which is unlikely to conflict with other skills. The mention of 'copilot-instructions.md' and 'GitHub Copilot' creates a clear, distinct trigger domain. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation
27% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is excessively verbose and repetitive, with most technology-specific sections being near-identical variations of 'detect version and follow existing patterns.' The template approach with configuration variables is a reasonable idea, but the execution results in a bloated document that doesn't provide concrete, actionable guidance: it mostly tells Claude to do things Claude would naturally do (analyze code, follow patterns). The lack of real examples, validation steps, and progressive disclosure significantly weakens its utility.
Suggestions
Reduce the content by 60-70% by consolidating the repetitive technology-specific sections into a single generic pattern with a brief table of technology-specific file locations to check for versions.
Add concrete examples: show a sample input (e.g., a snippet of a real package.json or .csproj) and the corresponding expected output section in copilot-instructions.md.
Split technology-specific guidelines into separate referenced files (e.g., dotnet-guidelines.md, react-guidelines.md) and keep SKILL.md as a concise overview with navigation links.
Add a validation step: after generating the copilot-instructions.md, verify that all referenced versions match actual project files and that no patterns are prescribed that don't exist in the codebase.
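The suggested validation step can be sketched in a few lines. The snippet below is a hypothetical illustration, not part of the skill: the `check_versions` helper, its sample `package.json` content, and the sample instructions text are all assumed for demonstration. It flags any dependency whose pinned version never appears in the generated copilot-instructions.md.

```python
import json

def check_versions(package_json_text: str, instructions_text: str) -> list[str]:
    """Return dependencies whose package.json version is never
    mentioned in the generated copilot-instructions.md."""
    pkg = json.loads(package_json_text)
    deps = {}
    for section in ("dependencies", "devDependencies"):
        deps.update(pkg.get(section, {}))
    mismatches = []
    for name, version in deps.items():
        # Strip semver range prefixes like ^ or ~ before searching.
        bare = version.lstrip("^~")
        if bare not in instructions_text:
            mismatches.append(f"{name}: {version} not referenced")
    return mismatches

# Hypothetical inputs for illustration only.
pkg = '{"dependencies": {"react": "^18.2.0", "zod": "^3.22.0"}}'
doc = "This project uses React 18.2.0 with functional components."
print(check_versions(pkg, doc))  # ['zod: ^3.22.0 not referenced']
```

A real implementation would also need per-ecosystem parsers (.csproj, pyproject.toml, etc.), but the same compare-and-flag loop applies.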
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~250+ lines. The content is heavily padded with repetitive phrases like 'Follow existing patterns for...' and 'Match the same approach used in the codebase' repeated dozens of times. Most of the technology-specific sections say essentially the same thing ('detect version, follow existing patterns') with minor word variations. Claude already knows how to analyze codebases and generate markdown files. | 1 / 3 |
| Actionability | The skill provides a structured template with conditional sections and configuration variables, which gives some concrete guidance. However, it contains no executable code, no real examples from actual codebases, and the instructions are largely abstract directives like 'scan the codebase' and 'follow existing patterns' rather than specific commands or concrete steps Claude should take. | 2 / 3 |
| Workflow Clarity | There is a rough sequence (analyze codebase → generate instructions → place file), and the numbered sections provide some structure. However, there are no validation checkpoints: no step to verify the generated file is correct, no feedback loop for checking that detected versions are accurate, and no explicit verification that the output matches actual codebase patterns. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. The entire template is inlined, including all technology-specific sections, all quality focus areas, all testing approaches, and all documentation levels. Content that could be split into separate reference files (e.g., technology-specific guidelines, testing patterns) is all crammed into one massive document. | 1 / 3 |
| Total | | 6 / 12 (Passed) |
Validation
100% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed.
Validation for skill structure: no warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.