cto-advisor

Technical leadership guidance for engineering teams, architecture decisions, and technology strategy. Includes tech debt analyzer, team scaling calculator, engineering metrics frameworks, technology evaluation tools, and ADR templates. Use when assessing technical debt, scaling engineering teams, evaluating technologies, making architecture decisions, establishing engineering metrics, or when user mentions CTO, tech debt, technical debt, team scaling, architecture decisions, technology evaluation, engineering metrics, DORA metrics, or technology strategy.

Overall score: 78 (1.73x)

Quality: 67% (Does it follow best practices?)

Impact: 99%, 1.73x (average score across 3 eval scenarios)

Security by Snyk: Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/c-level/cto-advisor/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, well-crafted skill description that clearly communicates its purpose, lists specific tools and capabilities, and provides comprehensive trigger terms. It follows the recommended pattern with an explicit 'Use when...' clause containing both scenario descriptions and keyword triggers. The description is distinctive enough to avoid conflicts with adjacent skills like general project management or software development.

Specificity (3 / 3): Lists multiple specific, concrete actions and tools: tech debt analyzer, team scaling calculator, engineering metrics frameworks, technology evaluation tools, and ADR templates. Also specifies concrete use cases like assessing technical debt, scaling engineering teams, and making architecture decisions.

Completeness (3 / 3): Clearly answers both 'what' (technical leadership guidance, specific tools like tech debt analyzer, team scaling calculator, ADR templates) and 'when' (explicit 'Use when...' clause with detailed trigger scenarios and keyword mentions).

Trigger Term Quality (3 / 3): Excellent coverage of natural trigger terms users would say: 'CTO', 'tech debt', 'technical debt', 'team scaling', 'architecture decisions', 'technology evaluation', 'engineering metrics', 'DORA metrics', 'technology strategy'. These are terms a user would naturally use when seeking this kind of guidance.

Distinctiveness / Conflict Risk (3 / 3): Occupies a clear niche around CTO-level technical leadership and engineering management. The specific triggers like 'DORA metrics', 'ADR templates', 'tech debt analyzer', and 'team scaling calculator' are highly distinctive and unlikely to conflict with general coding or project management skills.

Total: 12 / 12

Passed

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill reads more like a generic CTO handbook than an actionable skill for Claude. The vast majority of content (weekly cadences, quarterly planning themes, book recommendations, communication templates, stakeholder management advice) is general knowledge that Claude already possesses and wastes significant context window. The actionable elements—script references and framework file pointers—are buried in hundreds of lines of management platitudes.

Suggestions:

1. Reduce content by 70-80%: remove sections Claude already knows (crisis management basics, stakeholder management, weekly cadences, book lists, tool lists) and focus only on project-specific scripts, frameworks, and reference file navigation.

2. Add concrete input/output examples for the Python scripts (tech_debt_analyzer.py, team_scaling_calculator.py) showing what arguments they accept and what output to expect, so Claude can actually use them.

3. Add validation steps to workflows: e.g., after running tech_debt_analyzer.py, specify how to verify the output is reasonable and what to do if the script fails or produces unexpected results.

4. Move the detailed content (DORA metric targets, team ratios, ADR process) into the referenced files and keep SKILL.md as a concise routing document that tells Claude when and how to use each tool/reference.
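The review asks for concrete input/output examples for the scripts. A minimal sketch of what such a documented interface might look like follows; the argument names, defaults, and output fields are hypothetical illustrations, not taken from the actual tech_debt_analyzer.py:

```python
# Hypothetical sketch of a documented tech_debt_analyzer.py interface.
# None of these arguments or output fields are confirmed by the skill itself;
# they illustrate the level of detail the SKILL.md should provide.
import argparse
import json


def analyze(path: str, threshold: float) -> dict:
    """Placeholder analysis: returns the kind of summary a real analyzer might emit."""
    return {
        "path": path,            # repository root that was scanned
        "threshold": threshold,  # complexity cutoff used for flagging
        "hotspots": [],          # files exceeding the threshold
        "debt_score": 0.0,       # aggregate debt score for the repo
    }


def main(argv=None):
    parser = argparse.ArgumentParser(
        description="Analyze tech debt in a repository (illustrative sketch)."
    )
    parser.add_argument("path", help="Repository root to scan")
    parser.add_argument(
        "--threshold",
        type=float,
        default=0.5,
        help="Complexity threshold above which a file is flagged",
    )
    args = parser.parse_args(argv)
    print(json.dumps(analyze(args.path, args.threshold), indent=2))


if __name__ == "__main__":
    main()
```

Documenting even this much (positional path, one named flag, JSON output with named fields) would let an agent invoke the script and validate its output without reading the source first.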

Conciseness (1 / 3): Extremely verbose at ~350+ lines, mostly containing general management advice Claude already knows (weekly cadences, quarterly planning, stakeholder management, book recommendations, crisis management basics). Very little is project-specific or adds novel knowledge beyond what any LLM would already understand about CTO responsibilities.

Actionability (2 / 3): References concrete scripts (tech_debt_analyzer.py, team_scaling_calculator.py) and specific reference files, which is good. However, most content is high-level guidance (e.g., 'Define 3-5 year technology vision', 'Foster collaboration') rather than executable instructions. The scripts are mentioned but with no details on inputs/outputs or how they work.

Workflow Clarity (2 / 3): Some workflows are sequenced (incident response, vendor evaluation with week timelines, ADR process), but most lack validation checkpoints or feedback loops. The tech debt and team scaling workflows are just 'run script' with no guidance on interpreting results, handling errors, or iterating. Strategic initiatives are listed as generic numbered steps without verification points.

Progressive Disclosure (2 / 3): References to external files (references/architecture_decision_records.md, references/engineering_metrics.md, references/technology_evaluation_framework.md) are present and one level deep, which is good. However, the SKILL.md itself is a monolithic wall of text that inlines enormous amounts of general CTO advice that should either be in reference files or omitted entirely. The ratio of overview to inline content is poor.

Total: 7 / 12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

metadata_field (Warning): 'metadata' should map string keys to string values

Total: 10 / 11

Passed
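The single warning above concerns the frontmatter metadata mapping. A compliant block might look like the following sketch; the field names and values are hypothetical, since the skill's actual frontmatter is not shown in this review:

```yaml
---
name: cto-advisor
description: Technical leadership guidance for engineering teams...
metadata:
  category: "leadership"   # string key mapped to a string value
  maturity: "beta"         # avoid nested maps, lists, or numbers here
---
```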

Repository: majiayu000/claude-skill-registry (Reviewed)
