This skill should be used when the user asks to "model agent mental states", "implement BDI architecture", "create belief-desire-intention models", "transform RDF to beliefs", "build cognitive agent", or mentions BDI ontology, mental state modeling, rational agency, or neuro-symbolic AI integration.
Overall score: 39%

Does it follow best practices?

- Impact: Pending. No eval scenarios have been run.
- Advisory: Suggest reviewing before use.
Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/bdi-mental-states/SKILL.md`

Quality
Discovery: 44%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is essentially a trigger-term list masquerading as a skill description. While it excels at providing distinctive, domain-specific keywords that would help Claude select it in the right context, it completely fails to describe what the skill actually does—no concrete actions, outputs, or capabilities are mentioned. This makes it impossible for Claude to understand the skill's functionality beyond matching keywords.
Suggestions

- Add a 'what it does' clause listing concrete actions, e.g., 'Generates BDI agent architectures from RDF ontologies, creates belief-desire-intention data structures, and implements rational reasoning loops for cognitive agents.'
- Restructure to lead with capabilities before the 'Use when...' clause, following the pattern: '[What it does]. Use when [triggers].'
- Include specific outputs or artifacts the skill produces (e.g., 'produces Python/Java agent classes', 'outputs OWL ontology files') to give Claude a clearer picture of the skill's function.
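Applied to this skill, the suggested pattern might look like the following frontmatter sketch. The capability clause is the reviewer's suggested wording, not verified behavior, and should be checked against what the skill actually does:

```yaml
# Sketch of a revised SKILL.md frontmatter description; the capability
# clause is the reviewer's suggested wording, not verified behavior.
description: >
  Generates BDI agent architectures from RDF ontologies, creates
  belief-desire-intention data structures, and implements rational
  reasoning loops for cognitive agents. Use when the user asks to
  "model agent mental states", "implement BDI architecture", "transform
  RDF to beliefs", or mentions BDI ontology, mental state modeling,
  rational agency, or neuro-symbolic AI integration.
```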
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description lists no concrete actions or capabilities. It only describes when to use the skill via trigger phrases but never explains what the skill actually does (e.g., 'transforms RDF graphs into BDI belief structures' or 'generates agent reasoning loops'). The phrases like 'model agent mental states' and 'implement BDI architecture' are trigger terms, not descriptions of concrete actions the skill performs. | 1 / 3 |
| Completeness | The description answers 'when' extensively but completely fails to answer 'what does this do'. There is no explanation of the skill's capabilities, outputs, or concrete actions. Per the rubric, missing 'what' makes this very weak on completeness. | 1 / 3 |
| Trigger Term Quality | The description includes a rich set of natural trigger terms that a user working in this domain would plausibly say: 'BDI architecture', 'belief-desire-intention models', 'transform RDF to beliefs', 'cognitive agent', 'BDI ontology', 'mental state modeling', 'rational agency', 'neuro-symbolic AI integration'. These cover multiple natural variations well. | 3 / 3 |
| Distinctiveness / Conflict Risk | The domain is highly specialized (BDI architecture, belief-desire-intention models, RDF-to-beliefs transformation, neuro-symbolic AI). This is a clear niche that is very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is comprehensive and demonstrates deep domain knowledge of BDI ontology modeling, with well-formed RDF/Turtle examples and useful SPARQL queries. However, it is far too verbose, spending many tokens explaining concepts Claude already understands (what beliefs, desires, and intentions are; ontological distinctions) rather than focusing on actionable implementation patterns. The workflow lacks explicit validation checkpoints for what is essentially a multi-step ontology construction process.
Suggestions

- Cut the 'Core Concepts > Mental Reality Architecture' explanatory text by 70% — remove definitions of Belief/Desire/Intention as concepts and keep only the modeling patterns (the Turtle examples) with brief annotations on BDI-specific property usage.
- Add explicit validation steps to the T2B2T workflow: after Phase 1, validate generated beliefs against the ontology schema; after Phase 2, validate output triples with a concrete SPARQL ASK query or SHACL shape before accepting them.
- Move the Competency Questions, Notation Selection table, and Integration Patterns sections into referenced files to reduce the main SKILL.md to a lean overview with pointers.
- Replace the pseudocode Python LAG example with either a fully executable snippet (with real function implementations) or remove it and describe the pattern in 2-3 sentences pointing to the framework-integration reference.
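A minimal, dependency-free sketch of the Phase-1 checkpoint the second suggestion asks for, written in plain Python over string triples rather than a real SPARQL ASK query or SHACL shape; the `bdi:` and `ex:` names are illustrative:

```python
# Minimal validation checkpoint sketch, assuming beliefs arrive as plain
# (subject, predicate, object) string triples; all names are hypothetical.

def validate_beliefs(triples):
    """Return the bdi:Belief instances that lack a bdi:hasContent triple."""
    beliefs = {s for (s, p, o) in triples
               if p == "rdf:type" and o == "bdi:Belief"}
    with_content = {s for (s, p, o) in triples if p == "bdi:hasContent"}
    return sorted(beliefs - with_content)

triples = [
    ("ex:belief1", "rdf:type", "bdi:Belief"),
    ("ex:belief1", "bdi:hasContent", '"door(kitchen, open)"'),
    ("ex:belief2", "rdf:type", "bdi:Belief"),  # no content: fails the check
]
print(validate_beliefs(triples))  # → ['ex:belief2']
```

An empty result means every generated belief carries propositional content, so the pipeline can proceed to Phase 2; a non-empty result should trigger regeneration rather than silent acceptance.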
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at over 300 lines, explaining foundational BDI concepts (endurants vs perdurants, DOLCE ontology alignment, what beliefs/desires/intentions are) that Claude already knows. Sections like 'Core Concepts' and 'Mental Reality Architecture' spend significant tokens on conceptual explanations rather than actionable instructions. The 'Guidelines' section largely restates what was already covered in detail above. | 1 / 3 |
| Actionability | The Turtle/RDF examples are concrete and well-formed, and the SPARQL queries are executable. However, the Python code is pseudocode-level (e.g., `serialize_ontology`, `validate_triples`, `retry_with_feedback` are undefined), and the Prolog-style rules use a non-standard syntax. The skill describes patterns more than it provides copy-paste-ready implementation guidance. | 2 / 3 |
| Workflow Clarity | The T2B2T paradigm outlines a two-phase pipeline, but lacks explicit validation checkpoints or error recovery steps. There's no feedback loop for when triples fail validation (the Python example mentions `retry_with_feedback()` but doesn't define it). For a skill involving RDF manipulation and ontology validation, the absence of concrete validation steps caps this at 2. | 2 / 3 |
| Progressive Disclosure | The skill has well-signaled references at the bottom (internal references with clear 'Read when' guidance), which is good. However, the main body is monolithic — the Core Concepts, T2B2T, Temporal Dimensions, Compositional Entities, Integration Patterns, Guidelines, Competency Questions, and Gotchas sections contain extensive inline content that could be split into referenced files, making the SKILL.md itself much leaner. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
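Since the review flags `retry_with_feedback()` as undefined, here is one way the missing feedback loop could look. This is a sketch: `generate_triples` is a stub standing in for the skill's LLM generation step, and the predicate names are hypothetical:

```python
# Sketch of the missing validate-and-retry loop; validate_triples and
# retry_with_feedback are the names the skill leaves undefined, given
# minimal illustrative bodies here.

def validate_triples(triples, required_predicates=("bdi:hasContent",)):
    """Return error messages; an empty list means the triples pass."""
    preds = {p for (_, p, _) in triples}
    return [f"missing predicate: {p}" for p in required_predicates if p not in preds]

def generate_triples(feedback=None):
    # Stub: real code would call the model, passing feedback from the
    # previous failed attempt so it can repair its output.
    if feedback:
        return [("ex:b1", "rdf:type", "bdi:Belief"),
                ("ex:b1", "bdi:hasContent", '"agent believes p"')]
    return [("ex:b1", "rdf:type", "bdi:Belief")]  # first attempt is incomplete

def retry_with_feedback(max_attempts=3):
    feedback = None
    for _ in range(max_attempts):
        triples = generate_triples(feedback)
        errors = validate_triples(triples)
        if not errors:
            return triples
        feedback = "; ".join(errors)
    raise RuntimeError(f"validation failed after {max_attempts} attempts: {feedback}")

result = retry_with_feedback()
```

The key design point is that validation errors flow back into the next generation attempt instead of the loop silently retrying or accepting invalid triples.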
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation checks for skill structure: 11 / 11 passed. No warnings or errors.