
tdg-personal/ai-first-engineering

Engineering operating model for teams where AI agents generate a large share of implementation output.

Quality: 34% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)


Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description reads as a high-level concept statement rather than a functional skill description. It lacks concrete actions, natural trigger terms, and any explicit guidance on when Claude should select this skill. It would be nearly impossible for Claude to reliably choose this skill from a list of alternatives.

Suggestions:

- Add specific concrete actions the skill performs, e.g., 'Defines code review workflows, sets up AI agent task delegation policies, and establishes quality gates for AI-generated code.'
- Include an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about managing AI coding agents, setting up AI-assisted development workflows, or organizing teams that rely on AI-generated code.'
- Replace abstract jargon like 'engineering operating model' and 'implementation output' with terms users would naturally use, such as 'AI coding workflow', 'agent-assisted development', or 'AI pair programming team setup'.
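Taken together, these suggestions could yield a SKILL.md description along the following lines. This is a sketch only: the frontmatter field names are illustrative, and the wording is assembled from the review's own examples rather than taken from the skill itself.

```yaml
# Hypothetical revised SKILL.md frontmatter (field names illustrative)
name: ai-first-engineering
description: >
  Defines code review workflows, AI agent task delegation policies, and
  quality gates for AI-generated code. Use when the user asks about
  managing AI coding agents, setting up AI-assisted development
  workflows, or organizing teams that rely on AI-generated code.
```

A description in this shape states concrete actions up front and reserves the 'Use when...' clause for natural trigger phrases, which is what the discovery dimensions below score.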

Dimension scores:

- Specificity (1 / 3): The description uses abstract, high-level language ('engineering operating model', 'implementation output') without listing any concrete actions. It does not specify what the skill actually does: no verbs like 'creates', 'generates', 'defines', or 'configures' are present.
- Completeness (1 / 3): The description vaguely hints at a domain ('what') but provides no explicit 'when' clause or trigger guidance. Both the 'what does this do' and 'when should Claude use it' are weak or missing.
- Trigger Term Quality (1 / 3): The terms used ('engineering operating model', 'implementation output') are organizational jargon, not natural keywords a user would type. Users are unlikely to say 'operating model' or 'large share of implementation output' when seeking help.
- Distinctiveness / Conflict Risk (1 / 3): The description is so vague and broad ('engineering operating model') that it could overlap with any skill related to engineering processes, team management, AI workflows, or development practices. There are no distinct triggers to differentiate it.

Total: 4 / 12

Passed

Implementation: 37%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is admirably concise and well-structured, but it reads more like a manifesto or set of principles than an actionable skill. It lacks concrete artifacts—checklists, templates, example prompts, sample review comments, or step-by-step workflows—that would let Claude actually operationalize the guidance. As written, it tells Claude what to value but not what to do.

Suggestions:

- Add a concrete code review checklist template that Claude can apply when reviewing AI-generated PRs (e.g., a markdown checklist with specific questions for security, data integrity, failure handling).
- Include a step-by-step workflow for at least one key process, such as 'How to plan and review an AI-generated feature,' with explicit validation checkpoints.
- Provide example acceptance criteria or example prompts/evals so the 'Hiring and Evaluation Signals' and 'Testing Standard' sections become actionable rather than descriptive.
- Add references to supplementary files (e.g., REVIEW_CHECKLIST.md, TESTING_TEMPLATE.md) for deeper guidance on each section.
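As an illustration of the first suggestion, a review checklist of the kind described might look like this. The specific questions are assumptions drawn from the themes the review mentions (security, data integrity, failure handling, stable contracts), not content from the skill itself.

```markdown
## AI-Generated PR Review Checklist (hypothetical example)
- [ ] Security: are inputs validated, and are secrets and credentials kept out of the diff?
- [ ] Data integrity: are writes idempotent or transactional where the change touches persistent state?
- [ ] Failure handling: are error paths exercised by tests, not just the happy path?
- [ ] Contracts: does the change stay within explicit interface boundaries?
- [ ] Validation checkpoint: has a human reviewed the plan before the output is merged?
```

Shipping this as a REVIEW_CHECKLIST.md referenced from the skill would also address the progressive-disclosure gap noted below.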

Dimension scores:

- Conciseness (3 / 3): The content is lean and efficient. Every bullet point adds a distinct, non-obvious insight. There is no padding or explanation of concepts Claude already knows. Each section is tightly scoped.
- Actionability (1 / 3): The content is entirely abstract guidance with no concrete examples, commands, templates, or executable artifacts. Phrases like 'explicit boundaries' and 'stable contracts' describe rather than instruct: there are no checklists, review templates, prompt examples, or sample acceptance criteria that Claude could directly apply.
- Workflow Clarity (1 / 3): There is no sequenced workflow or process. The skill lists principles and attributes but never describes a step-by-step process for any activity (e.g., how to conduct a code review, how to set up testing, how to plan work). For a skill about an 'operating model', the absence of any workflow is a significant gap.
- Progressive Disclosure (2 / 3): The content is well-organized into clear sections with descriptive headings, making it easy to scan. However, it is a relatively short, flat document with no references to deeper materials (e.g., a review checklist, a testing template, or architecture examples) that would help Claude act on the guidance.

Total: 7 / 12

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Checks:

- frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing them or moving them to metadata.
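A fix for this warning typically moves the offending key under a nested metadata block. The sketch below illustrates the pattern; the key name `owner` is hypothetical, since the report does not say which key triggered the warning.

```yaml
# Before: an unrecognized top-level key triggers frontmatter_unknown_keys
name: ai-first-engineering
owner: tdg-personal   # hypothetical unknown key
---
# After: the unknown key moved under metadata
name: ai-first-engineering
metadata:
  owner: tdg-personal
```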

Total: 10 / 11

Passed

Reviewed
