How to use the Adaptyv Bio Foundry API and Python SDK for protein experiment design, submission, and results retrieval. Use this skill whenever the user mentions Adaptyv, Foundry API, protein binding assays, protein screening experiments, BLI/SPR assays, thermostability assays, or wants to submit protein sequences for experimental characterization. Also trigger when code imports `adaptyv`, `adaptyv_sdk`, or `FoundryClient`, or references `foundry-api-public.adaptyvbio.com`.
- Score: 68
- Does it follow best practices? 82%
- Impact: — (no eval scenarios have been run)
- Advisory: suggest reviewing before use
## Quality
### Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines what the skill does (protein experiment design, submission, and results retrieval via the Adaptyv Bio Foundry API/SDK) and when to use it (with comprehensive trigger terms covering domain concepts, product names, code imports, and API endpoints). It uses proper third-person voice and is both specific and distinctive, making it easy for Claude to select appropriately from a large skill set.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: 'protein experiment design, submission, and results retrieval' along with specific assay types (BLI/SPR, thermostability, protein binding assays, protein screening experiments) and mentions the Python SDK and API. | 3 / 3 |
| Completeness | Clearly answers both 'what' (use the Adaptyv Bio Foundry API and Python SDK for protein experiment design, submission, and results retrieval) and 'when' (explicit 'Use this skill whenever...' clause with detailed trigger conditions covering user mentions, assay types, and code imports). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'Adaptyv', 'Foundry API', 'protein binding assays', 'protein screening experiments', 'BLI/SPR assays', 'thermostability assays', 'protein sequences', plus code-level triggers like 'adaptyv_sdk', 'FoundryClient', and the API URL. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: specific to the Adaptyv Bio Foundry API/SDK, with unique trigger terms like 'Adaptyv', 'FoundryClient', 'adaptyv_sdk', and the specific API URL. Very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
### Implementation: 64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable API skill with good executable examples and clear structure covering the full experiment lifecycle. Its main weaknesses are minor redundancy in authentication guidance, lack of explicit validation/error-handling checkpoints in workflows, and some content (filtering syntax, token management) that could benefit from being split into referenced files. The skill would be stronger with concrete polling/retry code and validation steps in the submission workflow.
#### Suggestions

- Add explicit validation checkpoints in the submission workflow — e.g., check the cost estimate response before proceeding to create, verify experiment status after submit, and provide concrete polling code with error handling for step 5.
- Move the detailed filtering/sorting syntax and token management sections into separate reference files to reduce the main SKILL.md length and improve progressive disclosure.
- Remove redundant authentication advice (mentioned twice in Quick Start) and the introductory sentence explaining what Adaptyv Bio is — Claude doesn't need this context.
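The first suggestion above asks for concrete polling code with error handling. Below is a minimal, SDK-agnostic sketch: the real Adaptyv client call is not modeled here, so the `fetch_status` callable and the status strings ("Done", "Failed", "Cancelled") are assumptions to be checked against the actual SDK.

```python
import time

def wait_until_done(fetch_status, poll_interval=0.01, timeout=5.0,
                    terminal_failures=("Failed", "Cancelled")):
    """Poll an experiment until it reaches a terminal state.

    fetch_status: any zero-argument callable returning the current
    lifecycle status string (e.g. a wrapper around the SDK's
    get-experiment call — hypothetical here).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status()
        if status == "Done":
            return status
        if status in terminal_failures:
            raise RuntimeError(f"experiment ended in state {status!r}")
        time.sleep(poll_interval)
    raise TimeoutError(f"experiment not Done after {timeout}s")

# Demo with a fake status source that becomes Done on the third poll.
statuses = iter(["Submitted", "Running", "Done"])
result = wait_until_done(lambda: next(statuses))
print(result)  # Done
```

In the real workflow, `fetch_status` would wrap whatever SDK call returns the experiment's lifecycle state, with a poll interval and timeout sized to the assay's expected turnaround.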
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient but has some redundancy — authentication/token security advice is repeated (mentioned in Quick Start twice about not hardcoding tokens), and some explanatory text like 'Adaptyv Bio is a cloud lab that turns protein sequences into experimental data' is unnecessary context for Claude. The experiment lifecycle table is thorough but could be more compact. | 2 / 3 |
| Actionability | Provides fully executable code examples for multiple patterns (decorator, client, step-by-step workflow), concrete curl commands, specific API URLs, exact environment variable names, and copy-paste-ready Python snippets. The sequence format specifications and filter syntax are precise and immediately usable. | 3 / 3 |
| Workflow Clarity | The step-by-step binding screen workflow is well-sequenced with numbered steps, and the lifecycle states are clearly documented. However, there are no explicit validation checkpoints — no error checking after create/submit calls, no guidance on what to do if cost_estimate returns unexpected values, and steps 5-6 (poll until Done, retrieve results) are vague with no concrete polling code or error recovery. | 2 / 3 |
| Progressive Disclosure | References `references/api-endpoints.md` for the full 32-endpoint API reference, which is good progressive disclosure, but no bundle files are provided to verify this exists. The SKILL.md itself is fairly long (~180 lines) and some content like the full filtering/sorting syntax and token management details could be split into reference files. The structure within the file is well-organized with clear headers, though. | 2 / 3 |
| Total | | 9 / 12 Passed |
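The Workflow Clarity row flags that the skill gives no guidance when `cost_estimate` returns unexpected values. One way to add that checkpoint is a guard run before create/submit; the response shape assumed here (`total_cost` and `currency` keys) is illustrative, not the real Foundry API schema.

```python
def check_cost_estimate(estimate, max_budget, expected_currency="USD"):
    """Validate a cost-estimate response before creating/submitting.

    The dict keys are assumptions for illustration; adapt them to the
    actual API response.
    """
    cost = estimate.get("total_cost")
    if not isinstance(cost, (int, float)) or cost <= 0:
        raise ValueError(f"unexpected cost estimate: {estimate!r}")
    currency = estimate.get("currency", expected_currency)
    if currency != expected_currency:
        raise ValueError(f"unexpected currency: {currency!r}")
    if cost > max_budget:
        raise ValueError(f"estimated cost {cost} exceeds budget {max_budget}")
    return cost

cost = check_cost_estimate({"total_cost": 1200.0, "currency": "USD"},
                           max_budget=5000)
print(cost)  # 1200.0
```

Failing loudly here stops the workflow before an experiment is created against an implausible or over-budget estimate.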
### Validation: 90% (10 / 11 Passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure:
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
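The `frontmatter_unknown_keys` warning amounts to a set-difference check on the SKILL.md frontmatter. The allowed key set below is an assumption for illustration, not the official skill spec; the remedy suggested by the validator is to remove unknown keys or nest them under `metadata`.

```python
# Assumed allowed set, for illustration only.
ALLOWED_KEYS = {"name", "description", "metadata"}

def unknown_frontmatter_keys(frontmatter):
    """Return the frontmatter keys the validator would warn about."""
    return sorted(set(frontmatter) - ALLOWED_KEYS)

fm = {"name": "adaptyv-foundry", "description": "...", "version": "cbcae7b"}
print(unknown_frontmatter_keys(fm))  # ['version']
```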
Version: `cbcae7b`