Technical leadership guidance for engineering teams, architecture decisions, and technology strategy. Includes tech debt analyzer, team scaling calculator, engineering metrics frameworks, technology evaluation tools, and ADR templates. Use when assessing technical debt, scaling engineering teams, evaluating technologies, making architecture decisions, establishing engineering metrics, or when user mentions CTO, tech debt, technical debt, team scaling, architecture decisions, technology evaluation, engineering metrics, DORA metrics, or technology strategy.
Overall score: 83

- Quality: 75% (does it follow best practices?)
- Impact: 99% (1.73× average score across 3 eval scenarios)
- Passed: no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/c-level/cto-advisor/SKILL.md`

Quality
Discovery
100%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that follows best practices. It provides specific tools and capabilities, includes comprehensive trigger terms covering both formal terminology and common variations, explicitly states when to use the skill, and carves out a distinct niche in technical leadership that won't conflict with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions and tools ('tech debt analyzer, team scaling calculator, engineering metrics frameworks, technology evaluation tools, and ADR templates') along with specific use cases such as 'assessing technical debt, scaling engineering teams, evaluating technologies, making architecture decisions.' | 3 / 3 |
| Completeness | Clearly answers both what (technical leadership guidance, specific tools like the tech debt analyzer and team scaling calculator) AND when, with an explicit 'Use when...' clause listing specific scenarios and trigger terms. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'CTO, tech debt, technical debt, team scaling, architecture decisions, technology evaluation, engineering metrics, DORA metrics, technology strategy.' Includes both formal terms and common abbreviations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche focused on CTO-level technical leadership, with distinct triggers like 'DORA metrics', 'ADR templates', and 'team scaling calculator' that are unlikely to conflict with general coding or documentation skills. | 3 / 3 |
| Total | 12 / 12 (Passed) | |
Implementation
50%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a broad overview of CTO responsibilities but suffers from being too comprehensive and generic. While it correctly references external tools and frameworks, much of the content describes what a CTO does rather than providing Claude with actionable, specific guidance. The skill would benefit from significant trimming of generic leadership advice and more focus on the concrete tools (scripts, templates) that differentiate it.
Suggestions
- Remove generic CTO knowledge Claude already has (weekly cadence, quarterly planning, stakeholder-management basics) and focus on the unique tools and frameworks this skill provides.
- Show example outputs from the Python scripts (tech_debt_analyzer.py, team_scaling_calculator.py) so Claude knows what to expect and how to interpret results.
- Move reference content (books, communities, tools lists) to a separate RESOURCES.md file to reduce main-file length.
- Add validation steps after running the analysis scripts: what should Claude verify before presenting results to users?
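The last suggestion can be sketched as a minimal post-run check. Note the schema here (a top-level `findings` list with `file` and `severity` fields) is purely an assumption for illustration; the actual output format of tech_debt_analyzer.py is not shown in the skill.

```python
import json

def validate_analysis_output(raw: str) -> dict:
    """Validate analyzer output before presenting it to the user.

    Assumes the script emits JSON with a top-level 'findings' list;
    the real tech_debt_analyzer.py schema may differ.
    """
    try:
        report = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"analyzer did not emit valid JSON: {exc}") from exc
    if not isinstance(report.get("findings"), list):
        raise ValueError("expected a 'findings' list in analyzer output")
    return report

# Hypothetical sample output used only to exercise the check.
sample = '{"findings": [{"file": "app.py", "severity": "high"}]}'
report = validate_analysis_output(sample)
```

A check like this gives the agent a concrete pass/fail gate between "script ran" and "results are safe to summarize."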
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is comprehensive but includes significant content Claude already knows (basic CTO responsibilities, generic leadership advice, book recommendations). Sections like 'Weekly Cadence' and 'Quarterly Planning' are generic management advice rather than actionable technical guidance. | 2 / 3 |
| Actionability | References to Python scripts provide concrete entry points, but the scripts themselves aren't shown. Most content is descriptive frameworks and checklists rather than executable guidance. The communication templates and ratios are useful, but many sections remain abstract (e.g., 'Define 3-5 year technology vision'). | 2 / 3 |
| Workflow Clarity | Some workflows are well sequenced (Incident Response, Vendor Evaluation with week-by-week timelines), but most lack validation checkpoints. The tech debt and team scaling sections point to scripts without showing verification steps or error handling for their outputs. | 2 / 3 |
| Progressive Disclosure | References to external files (architecture_decision_records.md, technology_evaluation_framework.md, engineering_metrics.md) are appropriate, but the main file is a monolithic wall of text (~350 lines). Content like 'Tools & Resources', 'Books', and 'Communities' could live in separate reference files. | 2 / 3 |
| Total | 8 / 12 (Passed) | |
Validation
90%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | 10 / 11 (Passed) | |
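The single warning concerns the skill's `metadata` frontmatter field. A minimal sketch of frontmatter that should satisfy the check, assuming the spec wants a flat string-to-string map (the key names below are illustrative, not taken from the skill):

```yaml
# Hypothetical SKILL.md frontmatter excerpt.
metadata:
  author: "acme-eng"      # OK: string key, string value
  version: "1.2.0"        # quote version numbers so they stay strings
  # tags: [cto, strategy] # a list value here would trigger the warning
```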