This skill should be used at the start of any computationally intensive scientific task to detect and report available system resources (CPU cores, GPUs, memory, disk space). It creates a JSON file with resource information and strategic recommendations that inform computational approach decisions such as whether to use parallel processing (joblib, multiprocessing), out-of-core computing (Dask, Zarr), GPU acceleration (PyTorch, JAX), or memory-efficient strategies. Use this skill before running analyses, training models, processing large datasets, or any task where resource constraints matter.
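To make the described behavior concrete, here is a minimal sketch of what such a detection script might do, assuming a simplified output schema built only from the Python standard library. The file name `resources.json` and all field names below are hypothetical stand-ins; the skill's actual script, schema, and `.claude_resources.json` output may differ.

```python
import json
import os
import shutil

def detect_resources(path="resources.json"):
    """Collect basic system resources and write them to a JSON file."""
    total, _, free = shutil.disk_usage("/")
    info = {
        "cpu_cores": os.cpu_count(),            # may be None on some platforms
        "disk_total_gb": round(total / 1e9, 1),
        "disk_free_gb": round(free / 1e9, 1),
    }
    # Strategy hint: suggest parallel processing when more than one core exists.
    info["recommendations"] = {
        "parallel": (info["cpu_cores"] or 1) > 1,
    }
    with open(path, "w") as f:
        json.dump(info, f, indent=2)
    return info
```

A real implementation would also probe memory and GPUs (e.g. via `psutil` or `torch.cuda.is_available()`), which the standard library alone does not expose portably.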
| Metric | Score | Notes |
|---|---|---|
| Quality | 63% | Does it follow best practices? |
| Impact | 91% | 2.84x average score across 3 eval scenarios |
| Validation | Passed | No known issues |
Optimize this skill with Tessl: `npx tessl skill review --optimize ./scientific-skills/get-available-resources/SKILL.md`

## Quality

### Discovery — 85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong description that clearly articulates what the skill does (detects system resources and provides computational strategy recommendations) and when to use it (before computationally intensive tasks). It provides excellent specificity with concrete tools and frameworks mentioned. The main weakness is that trigger terms lean technical rather than matching natural user language patterns.
**Suggestions**

- Add more natural user-facing trigger terms such as 'check my hardware', 'system specs', 'how much memory/RAM do I have', or 'can my machine handle this' to improve matching with how users naturally phrase resource-related questions.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: detect CPU cores, GPUs, memory, disk space; creates a JSON file with resource information; provides strategic recommendations for parallel processing (joblib, multiprocessing), out-of-core computing (Dask, Zarr), GPU acceleration (PyTorch, JAX), and memory-efficient strategies. | 3 / 3 |
| Completeness | Clearly answers both 'what' (detect and report available system resources, create JSON file with recommendations) and 'when' with explicit triggers: 'at the start of any computationally intensive scientific task', 'before running analyses, training models, processing large datasets, or any task where resource constraints matter.' | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'CPU cores', 'GPUs', 'memory', 'disk space', 'system resources', 'parallel processing', 'GPU acceleration', but these are somewhat technical. Missing more natural user-facing terms like 'check my hardware', 'how much RAM', 'resource check', or 'system info'. Users may not naturally phrase requests using these exact terms. | 2 / 3 |
| Distinctiveness / Conflict Risk | Has a very clear niche: system resource detection and computational strategy recommendations for scientific computing. The specific focus on hardware profiling before computation is distinct and unlikely to conflict with other skills like data analysis or model training skills. | 3 / 3 |
| **Total** | | 11 / 12 (Passed) |
### Implementation — 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
The skill is highly actionable with executable code examples and clear usage patterns, but it is severely bloated. It explains too much context that Claude already knows (what GPUs are, what parallel processing means, what libraries do), includes exhaustive inline documentation that should be split into reference files, and lacks validation checkpoints in its workflow. Cutting this to ~30% of its current length while extracting reference material into linked files would dramatically improve it.
**Suggestions**

- Cut the content by at least 50%: remove the 'When to Use This Skill' examples, the 'How This Skill Works' detailed breakdown of what each resource category means, and the 'Best Practices' and 'Troubleshooting' sections; Claude can infer these, or they belong in separate files.
- Move the full JSON schema example, strategic recommendation thresholds, and platform support details into separate reference files (e.g., SCHEMA.md, RECOMMENDATIONS.md) and link to them from the main skill.
- Add a validation step after running the detection script, e.g., 'Verify .claude_resources.json was created and contains valid JSON before proceeding', with a concrete check command.
- Consolidate the three 'Step 3' code examples into a single concise decision-making snippet rather than showing separate blocks for data loading, parallel processing, and GPU acceleration.
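The last two suggestions above could be combined into one snippet: validate the detection output first, then branch on it. This is a sketch under assumed field names (`gpu_available`, `cpu_cores`, `memory_gb`), not the skill's actual schema, and the thresholds are illustrative.

```python
import json
from pathlib import Path

def choose_backend(path=".claude_resources.json"):
    """Validate the detection output, then pick a compute strategy."""
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(f"{path} missing; re-run the detection script")
    resources = json.loads(p.read_text())  # raises ValueError on invalid JSON

    # Field names are hypothetical; adapt them to the real schema.
    if resources.get("gpu_available"):
        return "gpu"            # e.g. PyTorch or JAX on the detected device
    if resources.get("cpu_cores", 1) > 4:
        return "parallel-cpu"   # e.g. joblib with n_jobs=cpu_cores
    if resources.get("memory_gb", 0) < 8:
        return "out-of-core"    # e.g. Dask or Zarr chunked loading
    return "single-process"
```

A single entry point like this replaces the three separate Step 3 blocks and doubles as the missing validation checkpoint, since it fails loudly when the JSON is absent or malformed.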
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose. The 'When to Use This Skill' section repeats what the description already covers. The detailed JSON output example, the exhaustive recommendation breakdowns, and the lengthy 'How This Skill Works' section explain things Claude can infer. The 'Best Practices' and 'Troubleshooting' sections add significant padding. Much of this could be cut by 60%+ without losing actionable information. | 1 / 3 |
| Actionability | Provides fully executable code examples for running the detection script, reading the JSON output, and applying recommendations for data loading, parallel processing, and GPU acceleration. Commands are copy-paste ready with specific library imports and decision logic. | 3 / 3 |
| Workflow Clarity | The three-step workflow (run detection, read recommendations, make decisions) is clearly sequenced, but there are no validation checkpoints. There's no step to verify the JSON was generated correctly, no error handling if the script fails, and no feedback loop for re-running if results seem wrong. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. The full JSON schema, all recommendation categories, all platform details, troubleshooting, and best practices are all inline. The strategic recommendations section, platform support, and troubleshooting could easily be split into separate reference files. | 1 / 3 |
| **Total** | | 7 / 12 (Passed) |
### Validation — 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 checks passed.

**Validation for skill structure**

| Criteria | Description | Result |
|---|---|---|
| metadata_version | `metadata.version` is missing | Warning |
| **Total** | 10 / 11 (Passed) | |