
get-available-resources

This skill should be used at the start of any computationally intensive scientific task to detect and report available system resources (CPU cores, GPUs, memory, disk space). It creates a JSON file with resource information and strategic recommendations that inform computational approach decisions such as whether to use parallel processing (joblib, multiprocessing), out-of-core computing (Dask, Zarr), GPU acceleration (PyTorch, JAX), or memory-efficient strategies. Use this skill before running analyses, training models, processing large datasets, or any task where resource constraints matter.
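The review below references a detection script that writes a `.claude_resources.json` report; the script itself is not reproduced on this page. As a rough illustration, a minimal stdlib-only sketch of such detection might look like the following (GPU detection is omitted, `memory_gb` relies on POSIX `sysconf`, and all key names are assumptions, not the skill's actual schema):

```python
import json
import os
import shutil

def detect_resources(output_path=".claude_resources.json"):
    """Detect basic system resources and write them to a JSON report."""
    resources = {
        "cpu_cores": os.cpu_count() or 1,
        "disk_free_gb": round(shutil.disk_usage("/").free / 1e9, 1),
    }
    # Physical memory via POSIX sysconf; not available on every platform
    try:
        resources["memory_gb"] = round(
            os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9, 1
        )
    except (OSError, ValueError, AttributeError):
        resources["memory_gb"] = None
    with open(output_path, "w") as f:
        json.dump(resources, f, indent=2)
    return resources
```

A production version would add GPU probing (e.g. via `nvidia-smi` or `torch.cuda`) and use a library such as psutil for portable memory figures.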

Overall score: 74

Quality: 63%
Does it follow best practices?

Impact: 91% (2.84x)
Average score across 3 eval scenarios

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./scientific-skills/get-available-resources/SKILL.md

Quality

Discovery: 85%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong description that clearly articulates what the skill does (detects system resources and generates strategic recommendations) and when to use it (before computationally intensive tasks). It provides excellent specificity with concrete tools and frameworks mentioned. The main weakness is that trigger terms lean technical rather than matching natural user language patterns.

Suggestions

Add more natural user-facing trigger terms such as 'check my hardware', 'system specs', 'how much memory/RAM do I have', or 'what resources are available' to improve matching with how users naturally phrase requests.

Dimension / Reasoning / Score:

Specificity (3/3): Lists multiple specific concrete actions: detect CPU cores, GPUs, memory, disk space; creates a JSON file with resource information; provides strategic recommendations for parallel processing (joblib, multiprocessing), out-of-core computing (Dask, Zarr), GPU acceleration (PyTorch, JAX), and memory-efficient strategies.

Completeness (3/3): Clearly answers both 'what' (detect and report available system resources, create JSON file with recommendations) and 'when' ('at the start of any computationally intensive scientific task', 'before running analyses, training models, processing large datasets, or any task where resource constraints matter').

Trigger Term Quality (2/3): Includes some relevant keywords like 'CPU cores', 'GPUs', 'memory', 'disk space', 'system resources', 'parallel processing', 'GPU acceleration', but these are somewhat technical. Missing more natural user-facing terms like 'check my hardware', 'how much RAM', 'system info', or 'resource check'. Users may not naturally phrase requests using these exact terms.

Distinctiveness / Conflict Risk (3/3): Occupies a clear niche as a system resource detection and reporting skill specifically for scientific computing. The focus on hardware detection, JSON output, and computational strategy recommendations makes it highly distinct from data analysis, model training, or general system administration skills.

Total: 11 / 12 (Passed)

Implementation: 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with executable code examples and clear usage patterns, but it is severely bloated. It explains concepts Claude already understands (what CPU cores are, what memory is), includes exhaustive recommendation tables that could be in a reference file, and presents a monolithic document that should be split into overview + references. The workflow lacks validation checkpoints for confirming successful resource detection before proceeding.

Suggestions

Cut the content by 60%+: Remove 'When to Use This Skill' examples, the 'How This Skill Works > Resource Detection' enumeration of obvious items, and the verbose 'Strategic Recommendations' tables—these are either redundant with the JSON output or explain things Claude already knows.

Split into overview + reference files: Move the full JSON schema example to a SCHEMA.md, the strategic recommendation details to a RECOMMENDATIONS.md, and troubleshooting to a TROUBLESHOOTING.md, keeping SKILL.md as a concise quick-start.

Add a validation step after running the detection script: e.g., 'Verify .claude_resources.json was created and contains valid JSON before proceeding' with a concrete check command.

Remove the 'Best Practices' and 'Platform Support' sections—these are generic advice that Claude can infer from context.
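The validation suggestion above could be sketched as a small guard that runs right after the detection script; the required key names here are assumptions, since the skill's actual JSON schema is not reproduced in this review:

```python
import json
import os
import sys

def validate_resource_file(path=".claude_resources.json",
                           required=("cpu_cores", "memory_gb")):
    """Fail fast if the resource report is missing, malformed, or incomplete."""
    if not os.path.exists(path):
        sys.exit(f"{path} not found; rerun the detection script")
    try:
        with open(path) as f:
            data = json.load(f)
    except json.JSONDecodeError as exc:
        sys.exit(f"{path} is not valid JSON: {exc}")
    missing = [key for key in required if key not in data]
    if missing:
        sys.exit(f"{path} is missing keys: {missing}")
    return data
```

An equivalent one-liner check command would be something like `python -c "import json; json.load(open('.claude_resources.json'))"`.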

Dimension / Reasoning / Score:

Conciseness (1/3): Extremely verbose. The 'When to Use This Skill' section repeats what the description already covers. The detailed explanations of what CPU/GPU/memory/disk detection entails are unnecessary—Claude knows what these are. The full JSON output example, the exhaustive recommendation tables, and the lengthy 'How This Skill Works' section explaining obvious concepts all waste tokens. The troubleshooting and best practices sections add marginal value with generic advice.

Actionability (3/3): Provides fully executable code examples for running the script, reading the JSON output, and applying recommendations for data loading, parallel processing, and GPU acceleration. The bash commands and Python snippets are copy-paste ready with concrete patterns.

Workflow Clarity (2/3): Steps are listed (run detection → read recommendations → make decisions) but there's no validation checkpoint. What if the script fails silently or produces incomplete output? There's no explicit check that the JSON file was created successfully or that the recommendations are valid before proceeding with computational decisions.

Progressive Disclosure (1/3): This is a monolithic wall of text with everything inline. The detailed recommendation tables, full JSON schema, multiple usage examples, troubleshooting, and best practices could all be split into separate reference files. At ~200+ lines, this skill would benefit greatly from a concise overview pointing to detailed docs.

Total: 7 / 12 (Passed)
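The workflow the review describes (run detection, read the recommendations, then decide between parallel, out-of-core, or GPU approaches) could be sketched as a simple decision helper; every key name and threshold below is an illustrative assumption, not the skill's documented behavior:

```python
import json

def choose_strategy(resource_file=".claude_resources.json", dataset_gb=10.0):
    """Pick a computational approach from a detected-resources report."""
    with open(resource_file) as f:
        res = json.load(f)
    if res.get("gpu_count", 0) > 0:
        return "gpu"          # e.g. PyTorch / JAX acceleration
    # If the dataset won't comfortably fit in RAM, process it in chunks
    if res.get("memory_gb") and dataset_gb > res["memory_gb"] * 0.5:
        return "out_of_core"  # e.g. Dask / Zarr chunked processing
    if (res.get("cpu_cores") or 1) > 1:
        return "parallel"     # e.g. joblib / multiprocessing
    return "serial"
```

The 0.5 headroom factor is arbitrary; the point is that each branch maps one detected resource to one of the strategies named in the skill description.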

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result:

metadata_version: 'metadata.version' is missing (Warning)

Total: 10 / 11 (Passed)

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
