Tessl

tdg-personal/workspace-surface-audit

Audit the active repo, MCP servers, plugins, connectors, env surfaces, and harness setup, then recommend the highest-value ECC-native skills, hooks, agents, and operator workflows. Use when the user wants help setting up Claude Code or understanding what capabilities are actually available in their environment.

Quality

72%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Quality

Discovery

75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has good structural completeness, with an explicit 'Use when' clause and a distinctive niche. However, it relies heavily on internal jargon ('ECC-native', 'harness setup', 'env surfaces') that users are unlikely to use naturally, and the core actions (audit, recommend) could be described more concretely in terms of what the skill actually produces.

Suggestions

Replace jargon like 'ECC-native', 'env surfaces', and 'harness setup' with natural user terms such as 'Claude Code environment', 'available tools', 'configuration'.

Add more natural trigger terms to the 'Use when' clause, such as 'what tools are available', 'configure my setup', 'what MCP servers are connected', or 'what can Claude Code do'.
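Applying both suggestions, a revised frontmatter description might look like the sketch below. The wording is illustrative only, not taken from the skill itself:

```yaml
# Hypothetical rewrite: jargon swapped for natural user terms,
# with extra trigger phrases in the 'Use when' clause.
name: workspace-surface-audit
description: >-
  Audit the active repo, MCP servers, plugins, connectors, environment
  variables, and Claude Code configuration, then recommend the
  highest-value skills, hooks, agents, and workflows to add. Use when
  the user asks what tools are available, what MCP servers are
  connected, how to configure their setup, or what Claude Code can do.
```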

Dimension | Reasoning | Score

Specificity

The description names several specific domains to audit (repo, MCP servers, plugins, connectors, env surfaces, harness setup) and mentions outputs (skills, hooks, agents, operator workflows), but the actions themselves are somewhat vague—'audit' and 'recommend' are high-level verbs without detailing concrete steps or deliverables.

2 / 3

Completeness

The description clearly answers both 'what' (audit repo, MCP servers, plugins, connectors, env surfaces, harness setup and recommend skills/hooks/agents/workflows) and 'when' (explicit 'Use when' clause specifying setup help or understanding available capabilities).

3 / 3

Trigger Term Quality

Includes some natural terms like 'setting up Claude Code', 'capabilities', and 'environment', which users might say. However, terms like 'ECC-native', 'harness setup', 'env surfaces', and 'operator workflows' are jargon-heavy and unlikely to match natural user language. Missing common variations like 'configure', 'what can you do', 'available tools', or 'MCP setup'.

2 / 3

Distinctiveness Conflict Risk

This skill occupies a clear niche—environment auditing and capability discovery for Claude Code specifically. The combination of auditing infrastructure and recommending Claude Code-native capabilities is distinctive and unlikely to conflict with other skills.

3 / 3

Total: 10 / 12

Passed

Implementation

57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured audit skill that clearly defines scope, process phases, and output format. Its main weaknesses are the lack of concrete executable commands for the inspection steps and the absence of validation checkpoints between phases. The content could be tightened by removing some redundant trigger phrases and consolidating the 'Good Outcomes' section into the output format requirements.

Suggestions

Add concrete commands for each inspection step in Phase 1 (e.g., `cat .mcp.json`, `ls .claude/`, `grep -r 'API_KEY\|AUTH_TOKEN\|SECRET' .env* --include='*.env*' | cut -d= -f1`) to make the audit immediately executable.

Add a validation checkpoint between Phase 1 and Phase 2, such as 'Confirm with the user that the inventory is complete and no additional workspaces should be scanned before proceeding to benchmarking.'

Trim the 'When to Use' section to 2-3 core triggers and consolidate 'Good Outcomes' into the output format section to reduce token usage.
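Taken together, the first two suggestions could be met with a short inspection script. The paths below (`.mcp.json`, `.claude/`, `.env*`) follow standard Claude Code workspace conventions; the script is a sketch of what Phase 1 might run, not part of the skill itself:

```shell
# Sketch of a Phase 1 inventory sweep (adjust paths for your repo)
for f in .mcp.json .claude/settings.json; do
  if [ -f "$f" ]; then
    echo "== $f =="
    cat "$f"
  else
    echo "== $f == not present"
  fi
done

# Project-level skills, agents, and hooks live under .claude/
ls .claude/ 2>/dev/null || echo "no .claude/ directory"

# Surface env var NAMES only -- never print values
grep -rh '^[A-Za-z_][A-Za-z0-9_]*=' .env* 2>/dev/null | cut -d= -f1 | sort -u
```

Printing variable names while discarding everything after the first `=` keeps secrets out of the transcript, which matters when the audit output is pasted into a conversation.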

Dimension | Reasoning | Score

Conciseness

The skill is reasonably well-structured but includes some unnecessary elaboration. Sections like 'Non-Negotiable Rules' and 'Good Outcomes' contain guidance that could be tightened. The 'When to Use' section lists many trigger phrases that are somewhat redundant. However, it avoids explaining basic concepts Claude already knows.

2 / 3

Actionability

The skill provides a clear process framework and structured output format, but lacks concrete executable commands or code examples. It tells Claude what to inspect (e.g., `.mcp.json`, `.env*` files) but doesn't provide specific commands like `cat .mcp.json | jq .` or `ls .claude/`. The gap-to-shape mapping table is useful but the overall guidance remains at a procedural rather than executable level.

2 / 3

Workflow Clarity

The three-phase audit process (Inventory → Benchmark → Decisions) provides a clear sequence, and the output format with five ordered sections is well-defined. However, there are no validation checkpoints or feedback loops — no step says 'verify the inventory is complete before proceeding to benchmarking' or handles cases where files are missing or inaccessible.

2 / 3

Progressive Disclosure

For a skill of this nature (an audit/assessment workflow), the content is well-organized into logical sections with clear headers. It doesn't need external file references since it's a self-contained procedural skill. The structure flows naturally from context → rules → process → output format → recommendations.

3 / 3

Total

9

/

12

Passed

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria | Description | Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed
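The warning's own remediation hint ("removing or moving to metadata") can be shown as a minimal before/after sketch, assuming a hypothetical unknown top-level key named `owner`:

```yaml
# Before (hypothetical): `owner` is an unknown top-level frontmatter key.
#   owner: tdg-personal
# After: unrecognized keys are nested under `metadata` instead.
name: workspace-surface-audit
description: Audit the active repo, MCP servers, plugins, connectors, …
metadata:
  owner: tdg-personal
```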
