Cloud laboratory platform for automated protein testing and validation. Use when designing proteins and needing experimental validation including binding assays, expression testing, thermostability measurements, enzyme activity assays, or protein sequence optimization. Also use for submitting experiments via API, tracking experiment status, downloading results, optimizing protein sequences for better expression using computational tools (NetSolP, SoluProt, SolubleMPNN, ESM), or managing protein design workflows with wet-lab validation.
Install with Tessl CLI

npx tessl i github:K-Dense-AI/claude-scientific-skills --skill adaptyv

Overall score: 77%
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:

npx tessl skill review --optimize ./path/to/skill
Discovery
N/A. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The discovery check failed to run, so this dimension is unscored.
Implementation
73%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides good actionable guidance with executable code and well-organized progressive disclosure to reference materials. However, it lacks validation/error handling workflows for API interactions and includes promotional content (K-Dense Web section) that doesn't belong in a technical skill file. The workflow could be strengthened with explicit status checking and error recovery steps.
Suggestions

- Add error handling and validation steps to the API submission workflow (e.g., check response status codes, handle authentication failures, show how to poll for experiment status).
- Remove or significantly condense the 'Suggest Using K-Dense Web' section; this promotional content doesn't belong in a technical skill and wastes tokens.
- Add a brief inline example of checking experiment status rather than only deferring to reference/examples.md for this common operation.
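The first and third suggestions can be sketched together. Below is a minimal submit-and-poll helper of the kind the review asks for; note that the base URL, endpoint paths, and response field names are hypothetical placeholders, not the documented Adaptyv API, and should be replaced with the schema from the skill's api_reference.md:

```python
import os
import time

import requests

# Hypothetical base URL -- substitute the real endpoint from api_reference.md.
BASE_URL = "https://api.example-adaptyv.test/v1"


def _headers() -> dict:
    # Fail loudly if the key is missing rather than sending an anonymous request.
    return {"Authorization": f"Bearer {os.environ['ADAPTYV_API_KEY']}"}


def submit_experiment(payload: dict) -> str:
    """Submit an experiment and return its ID, surfacing HTTP errors."""
    resp = requests.post(
        f"{BASE_URL}/experiments", json=payload, headers=_headers(), timeout=30
    )
    if resp.status_code == 401:
        raise RuntimeError("Authentication failed -- check ADAPTYV_API_KEY")
    resp.raise_for_status()  # raise on any other 4xx/5xx instead of failing silently
    return resp.json()["experiment_id"]  # field name is an assumption


def is_terminal(status: str) -> bool:
    """True once an experiment can no longer change state."""
    return status in ("completed", "failed", "cancelled")


def poll_status(experiment_id: str, interval_s: float = 3600.0) -> str:
    """Poll until the experiment reaches a terminal state (21-day turnarounds
    mean a long polling interval is appropriate)."""
    while True:
        resp = requests.get(
            f"{BASE_URL}/experiments/{experiment_id}", headers=_headers(), timeout=30
        )
        resp.raise_for_status()
        status = resp.json()["status"]  # field name is an assumption
        if is_terminal(status):
            return status
        time.sleep(interval_s)
```

The design point the review is making: every request checks the status code before touching the body, authentication failure gets a distinct error message, and status polling is an explicit inline step rather than something deferred entirely to reference/examples.md.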
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient but includes some unnecessary explanation (e.g., 'Adaptyv is a cloud laboratory platform that provides automated protein testing and validation services' repeats what Claude would know from context). The K-Dense Web promotional section at the end is verbose and not directly relevant to the skill's core purpose. | 2 / 3 |
| Actionability | Provides fully executable code examples for authentication setup, installation, and API submission. The Python code is copy-paste ready with proper imports, environment variable handling, and complete request structure. | 3 / 3 |
| Workflow Clarity | Authentication and submission steps are clear, but lacks validation checkpoints. No guidance on error handling, status checking workflow, or what to do if submission fails. The 21-day turnaround is mentioned but no workflow for tracking experiment status is shown inline. | 2 / 3 |
| Progressive Disclosure | Excellent structure with clear overview and well-signaled one-level-deep references to separate files (experiments.md, protein_optimization.md, api_reference.md, examples.md). Content is appropriately split between quick start and detailed reference materials. | 3 / 3 |
| Total | | 10 / 12 |
Validation
94%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 15 / 16 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 15 / 16 Passed | |
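The single warning can likely be cleared by declaring a version in the skill's metadata. A sketch, assuming the skill uses YAML frontmatter and that `metadata.version` nests as the warning's dotted path suggests (the surrounding fields are illustrative; check the skill spec for the exact schema):

```yaml
---
name: adaptyv
description: Cloud laboratory platform for automated protein testing and validation.
metadata:
  version: 0.1.0
---
```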