
tdg-personal/automation-audit-ops

Evidence-first automation inventory and overlap audit workflow for ECC. Use when the user wants to know which jobs, hooks, connectors, MCP servers, or wrappers are live, broken, redundant, or missing before fixing anything.

Quality: 80% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Passed (No known issues)


Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a solid description with an explicit 'Use when' clause, good trigger term coverage, and a clearly distinctive niche. Its main weakness is that the specific actions performed (beyond 'inventory' and 'audit') could be more concretely enumerated—e.g., scanning configurations, generating reports, flagging duplicates.

Suggestions

Add 2-3 more concrete action verbs describing what the skill does, e.g., 'Scans running automations, flags duplicate or conflicting configurations, and generates an inventory report.'

Dimension / Reasoning / Score

Specificity

Names the domain (automation inventory and overlap audit for ECC) and mentions specific entity types (jobs, hooks, connectors, MCP servers, wrappers), but the concrete actions are somewhat vague—'evidence-first workflow' and 'audit' don't enumerate specific steps like 'scan running processes', 'compare configurations', or 'generate overlap report'.

2 / 3

Completeness

Clearly answers both 'what' (evidence-first automation inventory and overlap audit workflow for ECC) and 'when' (explicit 'Use when' clause specifying the user wants to know which jobs/hooks/connectors/MCP servers/wrappers are live, broken, redundant, or missing before fixing anything).

3 / 3

Trigger Term Quality

Includes strong natural trigger terms users would say: 'jobs', 'hooks', 'connectors', 'MCP servers', 'wrappers', 'live', 'broken', 'redundant', 'missing', 'inventory', 'overlap', 'audit'. These cover a good range of how a user might phrase their request.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive—targets a very specific niche (ECC automation inventory/overlap audit) with specific entity types and a clear pre-fix diagnostic purpose. Unlikely to conflict with other skills.

3 / 3

Total: 11 / 12 (Passed)

Implementation: 62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured audit workflow skill with a clear four-step process and strong emphasis on evidence-backed claims. Its main weakness is the lack of concrete, executable commands or tool invocations—it describes what to inventory and classify but doesn't show exactly how to do it. There is also some redundancy between the Guardrails and Workflow sections that could be tightened.

Suggestions

Add concrete example commands or tool calls for each inventory step (e.g., how to list MCP servers, how to check GitHub Actions status, how to inspect hook configurations)
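To make this suggestion concrete, a minimal sketch of what an inventory step could look like, assuming hook and MCP server configuration lives in a single JSON settings file with `hooks` and `mcpServers` keys (the schema, key names, and script paths here are invented for illustration, not a documented format):

```python
import json

# Hypothetical settings file content. The "hooks" and "mcpServers" keys
# and their shapes are illustrative assumptions, not a real schema.
settings = json.loads("""
{
  "hooks": {
    "PreToolUse": [{"command": "./scripts/lint-check.sh"}],
    "PostToolUse": [{"command": "./scripts/audit-log.sh"}]
  },
  "mcpServers": {
    "github": {"command": "npx", "args": ["-y", "github-mcp"]}
  }
}
""")

# Read-only inventory pass: list each automation surface and its entries,
# without modifying anything (matching the skill's read-only guardrail).
inventory = {
    "hooks": sorted(settings.get("hooks", {})),
    "mcp_servers": sorted(settings.get("mcpServers", {})),
}
for surface, names in inventory.items():
    print(f"{surface}: {', '.join(names) or '(none)'}")
```

Even a short sketch like this in the skill body would show the agent how to enumerate surfaces rather than leaving the inventory step abstract.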

Remove the duplicate listing of classification categories between Guardrails and Workflow Step 2—define them once and reference them

Include a brief worked example showing what a completed output table looks like with realistic data to make the output format more actionable
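As an illustration of that suggestion, a completed output table might look like this (the automation names, statuses, and evidence are invented sample data, not output from the skill):

```
| Automation     | Type       | Status    | Evidence                     | Recommendation |
|----------------|------------|-----------|------------------------------|----------------|
| nightly-backup | cron job   | live      | ran today, exit code 0       | keep           |
| lint-hook      | pre-commit | broken    | fails: missing binary        | fix or remove  |
| github-mcp     | MCP server | redundant | duplicates gh CLI wrapper    | consolidate    |
```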

Dimension / Reasoning / Score

Conciseness

The skill is reasonably efficient but has some redundancy—the classification categories (configured, authenticated, etc.) are listed twice in nearly identical form across Guardrails and Workflow Step 2. The 'Skill Stack' section is useful but somewhat verbose with explanatory clauses that could be tightened.

2 / 3

Actionability

The skill provides structured guidance with clear categories and classification schemes, but lacks concrete executable commands, specific file paths to check, or example tool invocations. It tells Claude what to look for but not exactly how to look (e.g., no specific commands to list hooks, check MCP configs, or query GitHub Actions).

2 / 3

Workflow Clarity

The four-step workflow is clearly sequenced (inventory → classify → prove → recommend) with explicit validation requirements in step 3 (trace proof paths) and clear guidance on what to do when evidence is ambiguous. The guardrail to start read-only and not fix until evidence exists serves as a validation checkpoint.

3 / 3

Progressive Disclosure

The skill references other skills (workspace-surface-audit, knowledge-ops, etc.) for deeper functionality, which is good progressive disclosure. However, the main content itself is somewhat long and could benefit from splitting the detailed classification taxonomy or output format into a referenced file. The references are well-signaled but the inline content is borderline monolithic.

2 / 3

Total: 9 / 12 (Passed)

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11 (Passed)

Reviewed
