Generate photorealistic rendering scripts for PyMOL and UCSF ChimeraX.
Does it follow best practices?

- Impact: Pending (no eval scenarios have been run)
- Issues: Passed (no known issues)
Optimize this skill with Tessl:

```shell
npx tessl skill review --optimize "./scientific-skills/Data Analysis/3d-molecule-ray-tracer/SKILL.md"
```

Quality
Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a clear, specialized niche (photorealistic molecular rendering scripts for two specific tools), which gives it strong distinctiveness. However, it lacks a 'Use when...' clause entirely, and the capability description is limited to a single action. Adding explicit trigger guidance and more natural user keywords (e.g., 'molecular visualization', 'protein rendering') would significantly improve skill selection accuracy.
Suggestions
- Add a 'Use when...' clause with trigger terms like 'molecular visualization', 'protein rendering', 'PyMOL script', 'ChimeraX script', 'ray tracing', or 'publication-quality molecular images'.
- Expand the capability list with more specific actions, e.g., 'Generate photorealistic rendering scripts for PyMOL and UCSF ChimeraX, including lighting setup, surface representations, cartoon styles, and ray-traced output settings.'
- Include common file extensions and alternate terms users might mention, such as '.pml', '.cxc', 'molecular graphics', 'structure visualization', or 'PDB rendering'.
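Taken together, these suggestions could yield frontmatter along the following lines. This is a hypothetical sketch, not the skill's actual frontmatter; the field names follow common SKILL.md conventions and the exact schema may differ.

```yaml
---
name: 3d-molecule-ray-tracer
description: >
  Generate photorealistic rendering scripts for PyMOL (.pml) and UCSF
  ChimeraX (.cxc), including lighting setup, surface and cartoon
  representations, and ray-traced output settings. Use when the user asks
  for molecular visualization, protein structure rendering, ray tracing,
  or publication-quality images from PDB files.
---
```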
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (molecular visualization) and a specific action (generate rendering scripts), and mentions two specific tools (PyMOL, UCSF ChimeraX), but doesn't list multiple concrete actions beyond 'generate scripts'. | 2 / 3 |
| Completeness | Describes what it does (generate rendering scripts) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, a missing 'Use when...' clause caps completeness at 2, and the 'what' is also thin, so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes good specific keywords like 'PyMOL', 'ChimeraX', 'photorealistic', and 'rendering scripts' that users in this domain would use, but misses common variations like 'molecular visualization', 'protein structure', 'ray tracing', '3D molecular graphics', or file extensions like '.pml', '.cxc'. | 2 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche targeting specifically PyMOL and ChimeraX photorealistic rendering scripts; this is unlikely to conflict with other skills given the highly specialized domain and named tools. | 3 / 3 |
| Total | | 8 / 12 Passed |
Implementation: 22%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with generic boilerplate templates that have no specific relevance to molecular rendering. While the parameter tables and CLI examples provide some concrete value, the core domain expertise (how to construct photorealistic PyMOL/ChimeraX scripts) is entirely absent—delegated to an opaque script with no fallback. The document would benefit enormously from removing generic sections and adding actual rendering script examples and domain-specific workflow guidance.
Suggestions
- Remove all generic boilerplate sections (Risk Assessment, Security Checklist, Response Template, Input Validation, Output Requirements, Error Handling) that contain no molecular-rendering-specific content, cutting the document by ~60%.
- Add actual example PyMOL/ChimeraX script snippets showing what the tool generates (e.g., a sample .pml file with ray-tracing commands), so Claude can understand and modify the output.
- Replace the generic 5-step 'Workflow' section with a domain-specific workflow: load PDB → choose representation → configure lighting/camera → set rendering parameters → validate script → render.
- Eliminate circular self-references ('See ## Features above for related details') and reorganize so Features and Usage appear before Implementation Details.
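As a concrete example of what the skill could embed, here is a minimal sketch of a generated ray-tracing script using standard PyMOL commands. The structure name (1ubq), color, and output settings are illustrative stand-ins, not output from the skill itself.

```pml
# Load a structure and clear default representations
load 1ubq.pdb
hide everything
show cartoon
color marine

# Photorealistic settings: white background, shadows, antialiasing
bg_color white
set ray_shadows, 1
set antialias, 2
set ambient, 0.2
set specular, 1.0

# Ray-trace at high resolution and save
ray 2400, 1800
png 1ubq_render.png, dpi=300
```

Embedding a snippet like this would let an agent modify the generated output directly instead of treating scripts/main.py as a black box.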
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose with massive amounts of boilerplate content that adds no value. Sections like 'Risk Assessment', 'Security Checklist', 'Lifecycle Status', 'Response Template', 'Input Validation', 'Error Handling', and 'Output Requirements' are generic templates not specific to molecular rendering. Multiple sections reference each other circularly ('See ## Features above'). The skill explains obvious concepts and includes extensive scaffolding that wastes tokens. | 1 / 3 |
| Actionability | The parameter table and CLI examples are concrete and useful, and the rendering presets table provides specific actionable settings. However, the skill never shows actual generated PyMOL or ChimeraX script content; it only shows how to invoke the generator. The core domain knowledge (what makes a good rendering script) is entirely delegated to the opaque scripts/main.py with no fallback guidance. | 2 / 3 |
| Workflow Clarity | The 'Workflow' section is entirely generic boilerplate ('Confirm the user objective...', 'Validate that the request matches the documented scope...') with no molecular-visualization-specific steps. The 'Example run plan' is similarly generic. There are no validation checkpoints specific to rendering (e.g., verifying the generated script syntax, checking PDB file validity). Multiple workflow-like sections exist but none provide clear, domain-specific sequencing. | 1 / 3 |
| Progressive Disclosure | References to external files exist ('See references/ for...') and the content is broken into sections, but the document itself is a monolithic wall of text with many sections that should be consolidated or removed. The circular self-references ('See ## Features above for related details') are confusing rather than helpful. Content is poorly organized with key information (Features, Usage) buried below generic boilerplate sections. | 2 / 3 |
| Total | | 6 / 12 Passed |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored. 10 of 11 checks passed.

Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 10 / 11 Passed | |
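The single warning could be cleared by moving unrecognized top-level keys under metadata, as the validator message suggests. A hypothetical sketch follows; the report does not name the flagged keys, so custom_key stands in for whatever was found.

```yaml
---
name: 3d-molecule-ray-tracer
description: Generate photorealistic rendering scripts for PyMOL and UCSF ChimeraX.
metadata:
  custom_key: value   # previously an unknown top-level key
---
```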