Databricks documentation reference via llms.txt index. Use when other skills do not cover a topic, looking up unfamiliar Databricks features, or needing authoritative docs on APIs, configurations, or platform capabilities.
- Overall score: 66%
- Evals: Pending (no eval scenarios have been run)
- Validation: Passed (no known issues)
Optimize this skill with Tessl:

```
npx tessl skill review --optimize ./databricks-skills/databricks-docs/SKILL.md
```

Quality
Discovery
75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is well-structured with a clear 'Use when' clause that explicitly defines its role as a Databricks documentation fallback. Its main weakness is moderate specificity—it describes the general capability but doesn't enumerate concrete actions. Trigger terms could be expanded to include more specific Databricks concepts users might reference.
Suggestions
- Add specific concrete actions like 'look up API references, retrieve configuration details, find syntax examples, check platform feature documentation'.
- Expand trigger terms to include common Databricks-specific concepts users might mention, such as 'Unity Catalog', 'Delta Lake', 'Spark SQL', 'DBFS', 'MLflow', or 'workspace settings'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Databricks documentation) and the general action (documentation reference via llms.txt index), but doesn't list multiple specific concrete actions like 'look up API endpoints, retrieve configuration parameters, find platform feature docs'. | 2 / 3 |
| Completeness | Clearly answers both what ('Databricks documentation reference via llms.txt index') and when ('Use when other skills do not cover a topic, looking up unfamiliar Databricks features, or needing authoritative docs on APIs, configurations, or platform capabilities'), with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'Databricks', 'APIs', 'configurations', 'platform capabilities', and 'docs', but misses common user variations like 'Databricks help', 'Databricks SDK', 'Unity Catalog', 'Spark', 'Delta Lake', or specific service names users might mention. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description is clearly scoped to Databricks documentation lookup and explicitly positions itself as a fallback ('when other skills do not cover a topic'), which creates a distinct niche and reduces conflict risk with other skills. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation
57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a reasonably well-structured reference skill that clearly communicates its purpose and points to related resources effectively. However, it lacks concrete actionability—there are no executable examples of fetching or parsing the llms.txt index—and includes some redundant explanation about its role that Claude could infer. The workflow examples are too high-level to provide strong guidance.
Suggestions
- Add a concrete, executable example of using WebFetch to retrieve llms.txt and searching for a specific topic (e.g., show the actual tool call syntax and how to parse results).
- Trim the 'Role of This Skill' section—the bullet points largely repeat what the description already conveys, and Claude can infer the distinction between reference and action skills.
- Add a brief note on what to do if the llms.txt fetch fails or returns unexpected content (e.g., retry, fall back to known documentation URLs).
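A minimal sketch of the fetch-with-retry and topic-search workflow suggested above. The index URL, the markdown-link line format of llms.txt, and the helper names (`fetch_llms_txt`, `search_index`) are all assumptions for illustration, not the skill's actual implementation:

```python
import re
import time
import urllib.request

# Assumed index location -- verify against the skill's configuration.
LLMS_TXT_URL = "https://docs.databricks.com/llms.txt"

def fetch_llms_txt(url: str = LLMS_TXT_URL, retries: int = 3, backoff: float = 2.0) -> str:
    """Fetch the llms.txt index, retrying transient network failures."""
    last_err = None
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.read().decode("utf-8")
        except OSError as err:  # covers URLError, timeouts, connection resets
            last_err = err
            time.sleep(backoff * (attempt + 1))
    raise RuntimeError(f"llms.txt fetch failed after {retries} attempts") from last_err

def search_index(index_text: str, topic: str) -> list[tuple[str, str]]:
    """Return (title, url) pairs from markdown-link lines mentioning the topic."""
    link = re.compile(r"\[([^\]]+)\]\((\S+?)\)")
    topic = topic.lower()
    return [
        match
        for line in index_text.splitlines()
        if topic in line.lower()
        for match in link.findall(line)
    ]

# Offline demonstration with a fabricated two-line index:
sample = (
    "- [Unity Catalog overview](https://docs.databricks.com/uc.html)\n"
    "- [Delta Lake guide](https://docs.databricks.com/delta.html)\n"
)
print(search_index(sample, "delta lake"))
```

If all retries fail, the caller can fall back to known stable documentation URLs rather than returning nothing, which addresses the error-handling gap noted above.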
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill has some unnecessary explanation (e.g., 'This is a reference skill, not an action skill' and the repeated emphasis on preferring MCP tools). The 'Role of This Skill' section could be trimmed since Claude can infer the distinction between reference and action skills. However, it's not egregiously verbose. | 2 / 3 |
| Actionability | The skill provides a concrete URL and a general process (fetch llms.txt, search, fetch specific pages), but lacks executable code or specific commands. The 'How to Use' section says 'Use WebFetch to retrieve this index' without showing the actual tool invocation or a concrete example of parsing/searching the index. | 2 / 3 |
| Workflow Clarity | The examples show multi-step sequences but they are high-level and lack validation checkpoints. The numbered steps are more conceptual guidance than precise workflows—there's no verification that the fetched content is relevant or complete, and no error handling for failed fetches or missing documentation. | 2 / 3 |
| Progressive Disclosure | The skill is well-organized with clear sections, provides a concise overview, and has well-signaled one-level-deep references to related skills. The documentation structure section gives a useful category breakdown without inlining excessive detail. For a reference skill of this size, the organization is appropriate. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure: no warnings or errors.
Commit: `b4071a0`