
databricks-observability

Set up comprehensive observability for Databricks with metrics, traces, and alerts. Use when implementing monitoring for Databricks jobs, setting up dashboards, or configuring alerting for pipeline health. Trigger with phrases like "databricks monitoring", "databricks metrics", "databricks observability", "monitor databricks", "databricks alerts", "databricks logging".

Overall score: 80

Quality: 77% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/saas-packs/databricks-pack/skills/databricks-observability/SKILL.md`

Quality

Discovery: 89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description that excels in completeness and trigger term coverage, with explicit 'Use when' and 'Trigger with' clauses. The main weakness is that the capability description could be more specific about concrete actions (e.g., 'configure Prometheus exporters', 'set up Grafana dashboards', 'create PagerDuty alert rules') rather than listing general categories like 'metrics, traces, and alerts'.

Suggestions

Add more specific concrete actions to the first sentence, e.g., 'configure metric exporters, set up Grafana dashboards, create alert rules for job failures and SLA breaches' instead of the general 'metrics, traces, and alerts'.

Dimension scores:

- Specificity (2/3): The description names the domain (Databricks observability) and mentions some actions (metrics, traces, alerts, dashboards, alerting), but these are somewhat high-level categories rather than multiple concrete, specific actions like 'extract text', 'fill forms', 'merge documents'.

- Completeness (3/3): Clearly answers both 'what' (set up comprehensive observability with metrics, traces, and alerts) and 'when' (implementing monitoring for Databricks jobs, setting up dashboards, configuring alerting for pipeline health), with explicit trigger phrases listed.

- Trigger Term Quality (3/3): Excellent coverage of natural trigger terms including 'databricks monitoring', 'databricks metrics', 'databricks observability', 'monitor databricks', 'databricks alerts', 'databricks logging'; these are phrases users would naturally say when needing this skill.

- Distinctiveness / Conflict Risk (3/3): The combination of 'Databricks' + 'observability/monitoring' creates a very clear niche. The specific trigger terms are all Databricks-prefixed, making it unlikely to conflict with generic monitoring or other platform-specific skills.

Total: 11 / 12 (Passed)

Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, highly actionable skill with executable SQL and Python throughout, covering a comprehensive range of Databricks observability scenarios. Its main weaknesses are the lack of validation checkpoints (e.g., verifying system table access before proceeding, confirming alert creation) and some content redundancy. The inline length is borderline for a single file but manageable given the breadth of topics covered.

Suggestions

Add a validation step early in the workflow (e.g., 'Step 0: Verify Access') with a simple query like `SELECT 1 FROM system.billing.usage LIMIT 1` to confirm system table access before proceeding.
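That pre-flight check could be sketched as a small helper. This is a minimal sketch, not the skill's own code: `REQUIRED_TABLES`, `probe_query`, and `verify_access` are hypothetical names, and `run_sql` stands in for whatever executes SQL against your warehouse (for instance, a wrapper around the Databricks SDK's statement execution API).

```python
# Sketch of a "Step 0: Verify Access" gate (hypothetical helper names).
# Probes each required system table with the cheapest possible query
# before the real monitoring workflow runs.

REQUIRED_TABLES = [
    "system.billing.usage",
    "system.lakeflow.jobs",  # assumption: adjust to the tables the skill actually queries
]

def probe_query(table: str) -> str:
    """Cheapest possible access check: one constant row, LIMIT 1."""
    return f"SELECT 1 FROM {table} LIMIT 1"

def verify_access(run_sql) -> list[str]:
    """Return the tables the caller cannot read.

    `run_sql` is any callable that executes a SQL string and raises on
    permission or not-found errors.
    """
    missing = []
    for table in REQUIRED_TABLES:
        try:
            run_sql(probe_query(table))
        except Exception:
            missing.append(table)
    return missing
```

If `verify_access` returns a non-empty list, the skill can stop early with a clear grant-access message instead of failing mid-workflow.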

Add verification after alert creation in Step 5 (e.g., confirm the alert exists with `w.alerts.get(alert.id)` and note how to test-trigger it).
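That verification step could look like the following sketch. `verify_alert` is a hypothetical helper, and `client` stands in for a `databricks.sdk.WorkspaceClient` instance; adjust the `alerts.get` call shape to your SDK version.

```python
# Hypothetical post-creation check for a Databricks SQL alert.
# `client` is assumed to expose the SDK-style `alerts.get(...)` lookup
# referenced in the suggestion above; any lookup error is treated as
# "alert not found".

def verify_alert(client, alert_id: str) -> bool:
    """Return True if the freshly created alert can be fetched back."""
    try:
        client.alerts.get(alert_id)
    except Exception:
        return False
    return True
```

To test-trigger the alert in practice, one option is to temporarily tighten its threshold so the underlying query trips it, confirm the notification arrives, then restore the real value.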

Remove or condense the 'Daily Standup Dashboard' example since it largely duplicates the Step 1 query, or differentiate it more meaningfully.

Dimension scores:

- Conciseness (2/3): The skill is mostly efficient with executable SQL and Python examples, but includes some redundancy: the 'Daily Standup Dashboard' example in the Examples section largely duplicates the Step 1 query, and the Output section restates what the steps already cover. The overview is appropriately concise.

- Actionability (3/3): Every step provides fully executable SQL queries or Python code that can be copy-pasted directly. Specific table names, column references, join conditions, and SDK method calls are all concrete and complete. The error handling table provides specific solutions for specific problems.

- Workflow Clarity (2/3): Steps are clearly sequenced and well-labeled, progressing logically from monitoring to alerting to export. However, there are no validation checkpoints: no guidance on verifying that system tables are accessible before running queries, no feedback loops for when queries return unexpected results, and no verification that alerts are actually firing correctly after creation.

- Progressive Disclosure (2/3): The content is well-structured with clear headers and a logical progression, and external resource links are provided. However, at ~150 lines of SQL/Python inline, some content (e.g., the cost analysis queries, external export scripts) could be split into separate reference files. Without bundle files, everything is monolithic in a single SKILL.md.

Total: 9 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 9 / 11 passed

Validation for skill structure

Criteria results:

- allowed_tools_field: 'allowed-tools' contains unusual tool name(s). (Warning)

- frontmatter_unknown_keys: Unknown frontmatter key(s) found; consider removing or moving to metadata. (Warning)

Total: 9 / 11 (Passed)

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)
