
dx-data-navigator

Query Developer Experience (DX) data via the DX Data MCP server PostgreSQL database. Use this skill when analyzing developer productivity metrics, team performance, PR/code review metrics, deployment frequency, incident data, AI tool adoption, survey responses, DORA metrics, or any engineering analytics. Triggers on questions about DX scores, team comparisons, cycle times, code quality, developer sentiment, AI coding assistant adoption, sprint velocity, or engineering KPIs.

Overall score: 76

Quality: 70% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)


Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description with excellent trigger term coverage and completeness, clearly specifying both what the skill does and when to use it. Its main weakness is that the core action is somewhat generic ('Query') rather than listing multiple specific concrete actions like generating reports, creating visualizations, or comparing metrics across teams. The rich set of domain-specific trigger terms and explicit 'Use when' and 'Triggers on' clauses make it highly effective for skill selection.

Suggestions

- Replace the generic 'Query' action with multiple specific actions, such as 'Query, analyze, and compare developer productivity metrics', 'Generate team performance reports', or 'Calculate DORA metrics and cycle times', to improve specificity.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description names the domain (DX data, PostgreSQL database) and mentions several data areas (PR metrics, deployment frequency, incident data, survey responses), but it describes what data can be queried rather than listing concrete actions like 'generate reports', 'compare teams', or 'calculate DORA metrics'. The primary action is just 'Query'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (query DX data via MCP server PostgreSQL database for developer productivity and engineering analytics) and 'when' (explicit 'Use this skill when...' and 'Triggers on...' clauses with detailed trigger scenarios). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'developer productivity metrics', 'PR/code review metrics', 'deployment frequency', 'DORA metrics', 'DX scores', 'cycle times', 'code quality', 'developer sentiment', 'AI coding assistant adoption', 'sprint velocity', 'engineering KPIs', 'team comparisons'. These are terms engineers and managers would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: DX Data MCP server, developer experience metrics, DORA metrics, AI tool adoption. The specificity of the data source (DX Data MCP server PostgreSQL database) and the domain-specific terminology make it very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill excels at actionability with comprehensive, executable SQL examples covering every data domain, and the team table disambiguation is genuinely valuable. However, it severely suffers from verbosity — the inline content largely duplicates what the reference files should contain, making it a monolithic document that wastes token budget. The skill would be dramatically improved by keeping only the critical patterns (tool usage, team table selection, data quality notes, DORA metrics) in the main file and pushing domain-specific queries to the reference files.

Suggestions

- Move domain-specific SQL examples and column listings to their respective reference files, keeping only 1-2 critical examples inline (e.g., the team table disambiguation queries and DORA metrics).
- Add a brief workflow section with validation steps, e.g., 'If a query returns 0 rows, check team names via dx_teams; if duplicates appear, ensure GROUP BY with MAX() is used'.
- Consolidate the 'Data Domains' section into a compact table mapping domains to key tables and reference files, rather than repeating schema details already in the reference files.
- Remove redundant query examples — e.g., the team scores query appears nearly identically in both the 'Critical: Team Tables' and 'Core DX Metrics' sections.
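The validation workflow suggested above can be sketched as a pair of checks. This is a sketch only: dx_teams is named in the review, but its columns and the dx_team_scores table used here are assumptions, not the skill's actual schema.

```sql
-- If a metrics query returns 0 rows, first confirm the team name exists
-- (dx_teams is named in the review; the "name" column is an assumption).
SELECT name
FROM dx_teams
WHERE name ILIKE '%platform%';

-- If duplicate rows appear, collapse to one row per team and metric
-- using the suggested GROUP BY with MAX() pattern
-- (dx_team_scores and its columns are hypothetical).
SELECT team_id, metric, MAX(value) AS value
FROM dx_team_scores
GROUP BY team_id, metric;
```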

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is extremely verbose at ~350+ lines, with extensive inline SQL examples that could be in reference files. Much of the content duplicates what the reference files already contain, and many query patterns are variations of each other. The data domain sections essentially replicate schema documentation inline. | 1 / 3 |
| Actionability | Every section provides fully executable SQL queries with correct joins, aggregations, and filtering. The tool invocation syntax is explicit, and queries are copy-paste ready with clear column references and expected output semantics (e.g., 'divide by 3600 for hours'). | 3 / 3 |
| Workflow Clarity | The skill provides a clear first step (query information_schema when uncertain) and distinguishes between team table types well. However, there are no explicit validation checkpoints or error recovery steps for when queries return unexpected results, and the workflow for complex multi-join queries lacks verification guidance. | 2 / 3 |
| Progressive Disclosure | Reference files are listed in a clear table at the bottom, which is good. However, the skill massively duplicates content that should live in those reference files — nearly every data domain section includes detailed column listings and multiple query examples that belong in the referenced files, defeating the purpose of progressive disclosure. | 2 / 3 |
| Total | | 8 / 12 |

Passed
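The recommended first step in the workflow above (querying information_schema when the schema is uncertain) could look like the following; the 'public' schema and the 'dx_' table prefix are assumptions for illustration:

```sql
-- List tables and columns before writing joins; adjust the schema and
-- prefix filter to match the actual database ('dx_%' is an assumption).
SELECT table_name, column_name, data_type
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name LIKE 'dx\_%' ESCAPE '\'
ORDER BY table_name, ordinal_position;
```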

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

11 / 11 validation checks passed

Validation for skill structure

No warnings or errors.

Repository: pskoett/pskoett-ai-skills (reviewed)
