Use when designing domain error handling. Keywords: domain error, error categorization, recovery strategy, retry, fallback, domain error hierarchy, user-facing vs internal errors, error code design, circuit breaker, graceful degradation, resilience, error context, backoff, retry with backoff, error recovery, transient vs permanent error, 领域错误, 错误分类, 恢复策略, 重试, 熔断器, 优雅降级
Install with the Tessl CLI:

```shell
npx tessl i github:actionbook/rust-skills --skill m13-domain-error72
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

```shell
npx tessl skill review --optimize ./path/to/skill
```
Discovery
37%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a keyword dump with a minimal trigger clause but no actual capability description. While the trigger terms are comprehensive and well-chosen, the complete absence of concrete actions (what the skill does, what it produces, how it helps) makes it impossible for Claude to understand the skill's purpose beyond pattern matching on keywords.
Suggestions
Add concrete actions describing what the skill does, e.g., 'Designs domain error hierarchies, implements retry strategies with exponential backoff, creates circuit breaker patterns, and structures error categorization for user-facing vs internal errors.'
Restructure to lead with capabilities before the 'Use when' clause, following the pattern: '[What it does]. Use when [trigger conditions]. Keywords: [terms]'
Remove redundant keywords that are already implied by the capability description to reduce noise and improve readability.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description only states 'Use when designing domain error handling' without listing any concrete actions. It doesn't describe what the skill actually does: no verbs like 'creates', 'generates', or 'analyzes', and no specific deliverables. | 1 / 3 |
| Completeness | The description answers 'when' (designing domain error handling) but completely fails to answer 'what does this do?'. There is no explanation of the skill's capabilities, outputs, or actions, only a trigger condition and a keyword list. | 1 / 3 |
| Trigger Term Quality | Excellent coverage of natural keywords, including common terms users would say: 'domain error', 'retry', 'fallback', 'circuit breaker', 'graceful degradation', 'error recovery', 'backoff'. Also includes Chinese translations for international users. | 3 / 3 |
| Distinctiveness / Conflict Risk | The keywords are fairly specific to domain error handling patterns, but without concrete actions the skill could overlap with general error handling, logging, or resilience-focused skills. The extensive keyword list helps but doesn't fully compensate. | 2 / 3 |
| Total | | 7 / 12 |
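The categorization the suggestions point at (transient vs. permanent, user-facing vs. internal) can be sketched in Rust. The enum, variants, and messages below are hypothetical illustrations of the idea, not taken from the skill itself:

```rust
// Illustrative domain error enum categorized along two axes:
// recoverability (transient vs. permanent) and audience
// (user-facing vs. internal). All names are hypothetical.
#[derive(Debug)]
enum OrderError {
    InvalidQuantity { given: u32 }, // permanent, user-facing
    PaymentGatewayTimeout,          // transient, internal: retry candidate
    InventoryCorrupted(String),     // permanent, internal: alert operators
}

impl OrderError {
    // Recovery strategy hinges on this classification.
    fn is_retryable(&self) -> bool {
        matches!(self, OrderError::PaymentGatewayTimeout)
    }

    // Message safe to show end users; internal details stay in logs.
    fn user_message(&self) -> String {
        match self {
            OrderError::InvalidQuantity { given } => {
                format!("Quantity {given} is not valid for this item.")
            }
            _ => "Something went wrong. Please try again later.".into(),
        }
    }
}

fn main() {
    let e = OrderError::PaymentGatewayTimeout;
    assert!(e.is_retryable());
    assert!(!OrderError::InventoryCorrupted("ledger drift".into()).is_retryable());
    println!("{e:?} retryable = {}", e.is_retryable());
}
```

A description following the suggested pattern could then name these capabilities directly, e.g. "Designs categorized error enums with per-audience messages and per-category recovery strategies."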
Implementation
87%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong skill document that efficiently covers domain error strategy with actionable code examples and clear categorization tables. The content respects the token budget while providing concrete Rust implementations. The main weakness is the workflow section, which presents decision points but lacks explicit validation steps for the error design process itself.
Suggestions
Add a concrete workflow with validation checkpoints for designing error types (e.g., '1. Categorize error → 2. Verify audience mapping → 3. Test recovery path → 4. Validate user messages')
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, using tables and code examples without explaining concepts Claude already knows. No padding or unnecessary context about what errors are or how Rust works. | 3 / 3 |
| Actionability | Provides fully executable Rust code examples for error hierarchy and retry patterns, concrete crate recommendations (tokio-retry, backoff, failsafe-rs), and specific implementation patterns that are copy-paste ready. | 3 / 3 |
| Workflow Clarity | The 'Thinking Prompt' section provides a decision sequence, but lacks explicit validation checkpoints. The trace up/down sections show navigation but don't form a clear step-by-step workflow with feedback loops for error recovery design. | 2 / 3 |
| Progressive Disclosure | Well-organized with clear sections, tables for quick reference, and explicit cross-references to related skills (m06-error-handling, m07-concurrency, etc.). Content is appropriately structured without deep nesting. | 3 / 3 |
| Total | | 11 / 12 |
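The retry-with-backoff pattern credited under Actionability might look like the following std-only sketch. The skill itself recommends crates such as tokio-retry, backoff, and failsafe-rs; this hand-rolled helper and its names are illustrative assumptions, not the skill's code:

```rust
use std::thread;
use std::time::Duration;

// Hypothetical domain error split by recoverability.
#[derive(Debug)]
enum DomainError {
    Transient(String), // e.g. timeouts: safe to retry
    Permanent(String), // e.g. validation failures: surface immediately
}

// Retry with exponential backoff; permanent errors short-circuit
// instead of burning the remaining attempts.
fn retry_with_backoff<T>(
    mut attempts: u32,
    mut delay: Duration,
    mut op: impl FnMut() -> Result<T, DomainError>,
) -> Result<T, DomainError> {
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(DomainError::Transient(_)) if attempts > 1 => {
                attempts -= 1;
                thread::sleep(delay);
                delay *= 2; // exponential backoff between attempts
            }
            Err(e) => return Err(e), // permanent, or attempts exhausted
        }
    }
}

fn main() {
    let mut calls = 0;
    let result = retry_with_backoff(3, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 {
            Err(DomainError::Transient("connection reset".into()))
        } else {
            Ok("payload")
        }
    });
    assert_eq!(result.unwrap(), "payload");
    assert_eq!(calls, 3);
    println!("succeeded after {calls} attempts");
}
```

Note how the permanent/transient split from the error hierarchy drives the recovery decision, which is the coupling the suggested validation checkpoint ('3. Test recovery path') would exercise.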
Validation
81%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 13 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata' field is not a dictionary | Warning |
| license_field | 'license' field is missing | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 13 / 16 Passed |