
modal

Cloud computing platform for running Python on GPUs and serverless infrastructure. Use when deploying AI/ML models, running GPU-accelerated workloads, serving web endpoints, scheduling batch jobs, or scaling Python code to the cloud. Use this skill whenever the user mentions Modal, serverless GPU compute, deploying ML models to the cloud, serving inference endpoints, running batch processing in the cloud, or needs to scale Python workloads beyond their local machine. Also use when the user wants to run code on H100s, A100s, or other cloud GPUs, or needs to create a web API for a model.

Overall: 88

Quality: 86% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Advisory (Suggest reviewing before use)


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly identifies the platform (Modal), lists concrete actions, and provides comprehensive trigger guidance with two explicit 'Use when' clauses. It covers both high-level use cases and specific hardware references (H100s, A100s), making it highly distinguishable and easy for Claude to select appropriately. The description is slightly verbose but the additional detail serves the purpose of covering natural trigger terms.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific concrete actions: deploying AI/ML models, running GPU-accelerated workloads, serving web endpoints, scheduling batch jobs, scaling Python code to the cloud, creating web APIs for models. | 3 / 3 |
| Completeness | Clearly answers both 'what' (cloud computing platform for running Python on GPUs and serverless infrastructure) and 'when' with explicit 'Use when...' and 'Also use when...' clauses covering multiple trigger scenarios. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: Modal, serverless GPU, H100s, A100s, cloud GPUs, ML models, inference endpoints, batch processing, web API, Python workloads. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly targets a specific niche (Modal as a cloud computing platform) with distinct triggers like specific GPU types (H100s, A100s), serverless GPU compute, and the Modal platform name. Unlikely to conflict with general Python or generic cloud skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong skill file that excels at actionability and progressive disclosure, providing executable code examples for every major concept while cleanly pointing to detailed reference files. The main weaknesses are moderate verbosity (the 'When to Use' section and some explanatory text could be trimmed) and the absence of validation/error-handling guidance in workflows, which is important for cloud deployment and batch processing operations.

Suggestions

- Remove or significantly trim the 'When to Use This Skill' section, since it largely duplicates the frontmatter description and is information Claude can infer from context.
- Add validation checkpoints to workflow patterns: for example, checking deployment status after `modal deploy`, handling GPU unavailability in fallback chains, and adding error handling/retry logic to the batch processing example.
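The retry logic suggested above can be sketched in framework-agnostic Python. Modal itself exposes a `retries` parameter on `@app.function`, but the decorator and names below are purely illustrative, not Modal's API:

```python
import time
from functools import wraps

def with_retries(max_retries=3, initial_delay=0.01, backoff=2.0,
                 retryable=(RuntimeError,)):
    """Retry a flaky call with exponential backoff, the kind of
    policy a cloud platform applies to transient failures (e.g. a
    GPU type being temporarily unavailable)."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            delay = initial_delay
            for attempt in range(max_retries + 1):
                try:
                    return fn(*args, **kwargs)
                except retryable:
                    if attempt == max_retries:
                        raise  # out of attempts: surface the error
                    time.sleep(delay)
                    delay *= backoff
        return wrapper
    return decorator

attempts = {"n": 0}

@with_retries(max_retries=3)
def flaky_inference(x):
    attempts["n"] += 1
    if attempts["n"] < 3:  # fail twice, then succeed
        raise RuntimeError("GPU temporarily unavailable")
    return x * 2

print(flaky_inference(21))  # prints 42 after two retried failures
```

In a real Modal app the equivalent behavior is configured declaratively on the function decorator rather than hand-rolled, which is why the review asks the skill to point this out.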

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is generally well-structured but includes some unnecessary explanatory text (e.g., 'Everything in Modal is defined as code — no YAML, no Dockerfiles required' and the 'When to Use This Skill' section, which largely duplicates the frontmatter description). The overview section could be tighter, but most content earns its place with concrete code examples and reference pointers. | 2 / 3 |
| Actionability | Excellent actionability throughout: nearly every concept is accompanied by fully executable, copy-paste-ready Python code examples. CLI commands are concrete, GPU specifications are specific, and the workflow patterns section provides complete, runnable applications. | 3 / 3 |
| Workflow Clarity | While individual concepts are clearly explained with code, the skill lacks explicit validation checkpoints and feedback loops. For example, the deployment workflow doesn't mention verifying deployment success, the batch processing pattern doesn't include error handling or retry logic, and there's no guidance on what to do when things fail (e.g., GPU unavailable, deployment errors, volume write conflicts). | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure: the main file serves as a clear overview with concise examples for each concept, and every section includes a well-signaled one-level-deep reference to detailed documentation. The reference files section at the end provides a clean navigation index. | 3 / 3 |
| Total | | 10 / 12 |

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed

| Criteria | Description | Result |
| --- | --- | --- |
| metadata_version | 'metadata.version' is missing | Warning |

Total: 10 / 11 (Passed)
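The single warning concerns a missing `metadata.version` field. A hypothetical sketch of what the SKILL.md frontmatter might look like with that field added; the exact schema and field placement are assumptions to verify against the skill spec:

```yaml
# Hypothetical SKILL.md frontmatter; field layout is an assumption.
name: modal
description: Cloud computing platform for running Python on GPUs and serverless infrastructure. ...
metadata:
  version: 1.0.0
```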

Repository: K-Dense-AI/claude-scientific-skills (Reviewed)
