How to use the Adaptyv Bio Foundry API and Python SDK for protein experiment design, submission, and results retrieval. Use this skill whenever the user mentions Adaptyv, Foundry API, protein binding assays, protein screening experiments, BLI/SPR assays, thermostability assays, or wants to submit protein sequences for experimental characterization. Also trigger when code imports `adaptyv`, `adaptyv_sdk`, or `FoundryClient`, or references `foundry-api-public.adaptyvbio.com`.
Overall score: 88

Quality: 86% (Does it follow best practices?)
Impact: Pending (no eval scenarios have been run). Advisory: suggest reviewing before use.
Quality
Discovery
Score: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that clearly defines what the skill does (protein experiment design, submission, and results retrieval via the Adaptyv Bio Foundry API/SDK) and when to use it with comprehensive trigger terms spanning user language, technical terms, code imports, and API endpoints. It uses proper third-person voice and is concise yet thorough, making it easy for Claude to select this skill accurately from a large pool.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions ('protein experiment design, submission, and results retrieval') along with specific assay types (BLI/SPR, thermostability, protein binding, protein screening) and mentions the Python SDK and API. | 3 / 3 |
| Completeness | Clearly answers both 'what' (use the Adaptyv Bio Foundry API and Python SDK for protein experiment design, submission, and results retrieval) and 'when' (an explicit 'Use this skill whenever...' clause with detailed trigger conditions covering user mentions, assay types, and code imports). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'Adaptyv', 'Foundry API', 'protein binding assays', 'protein screening experiments', 'BLI/SPR assays', 'thermostability assays', 'protein sequences', plus code-level triggers like 'adaptyv_sdk', 'FoundryClient', and the API URL. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a clear niche: references a specific vendor (Adaptyv Bio), specific SDK names, a specific API URL, and domain-specific assay types. Very unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation
Score: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, well-organized API skill that provides concrete, executable guidance across multiple usage patterns. Its main weaknesses are minor redundancy in authentication guidance and the absence of validation/error-handling steps within the multi-step workflows, which is notable given that these operations involve real cost commitments and irreversible experiment submissions.
Suggestions
Add error checking/validation steps in the step-by-step workflow (e.g., check response status after create, verify experiment status before submitting, handle cost estimate review before proceeding).
Remove the redundant authentication security advice — the 'never hardcode tokens' and 'never commit to source control' points are stated twice in Quick Start.
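The two suggestions above could be addressed together along these lines. This is only a sketch: the environment variable name, the response shape, and the status values are assumptions for illustration, not the actual Foundry SDK API.

```python
import os


def load_api_token():
    """Read the Foundry API token from the environment rather than hardcoding it.

    ADAPTYV_API_TOKEN is a hypothetical variable name, not confirmed by the SDK docs.
    """
    token = os.environ.get("ADAPTYV_API_TOKEN")
    if not token:
        raise RuntimeError("Set ADAPTYV_API_TOKEN before calling the Foundry API")
    return token


def confirm_created(response):
    """Validation checkpoint after an experiment-create call.

    Assumes the create response is a dict with 'id' and 'status' keys;
    fail fast instead of submitting an experiment that was never created.
    """
    if response.get("status") not in {"draft", "created"}:
        raise RuntimeError(f"Unexpected experiment status: {response.get('status')!r}")
    return response["id"]
```

Centralizing the token lookup in one helper also removes the need to repeat the "never hardcode tokens" advice in multiple places.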
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient but somewhat redundant: the token-security advice (don't hardcode tokens, don't commit them to source control) appears twice in Quick Start, and explanatory text like 'Adaptyv Bio is a cloud lab that turns protein sequences into experimental data' is unnecessary context for Claude. The experiment lifecycle table is thorough but borderline verbose. | 2 / 3 |
| Actionability | Provides fully executable code examples for multiple patterns (decorator, client, step-by-step workflow), concrete curl commands, specific API URLs, exact filter syntax with examples, and copy-paste-ready Python snippets covering the full experiment lifecycle from creation to results retrieval. | 3 / 3 |
| Workflow Clarity | The step-by-step binding-screen workflow is well sequenced, and the lifecycle states are clearly documented. However, there are no validation checkpoints: no error checking after API calls, no verification that an experiment was created successfully before submitting, and no guidance on handling failures at each step. For an API that drives real-world lab experiments with cost implications, validation and feedback loops are important. | 2 / 3 |
| Progressive Disclosure | Well structured, with a clear quick start, organized sections for different concerns (SDK patterns, experiment types, lifecycle, sequences, filtering), and a single-level reference to `references/api-endpoints.md` for the full 32-endpoint API reference. Content is appropriately split between overview and detailed reference. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
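The missing validation checkpoint called out under Workflow Clarity could take the form of a polling helper that blocks until the experiment reaches a terminal lifecycle state before results are fetched. The state names and the `fetch_status` callback below are assumptions, since the real SDK's lifecycle fields are not shown here.

```python
import time

# Assumed terminal lifecycle states; the real Foundry API may use different names.
TERMINAL_STATES = {"completed", "failed", "cancelled"}


def wait_for_experiment(fetch_status, experiment_id, poll_seconds=30, max_polls=120):
    """Poll an experiment until it reaches a terminal state.

    fetch_status is any callable mapping an experiment id to its current
    lifecycle-state string (e.g. a thin wrapper around the SDK's get call),
    so the loop stays testable without network access.
    """
    for _ in range(max_polls):
        state = fetch_status(experiment_id)
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_seconds)
    raise TimeoutError(
        f"{experiment_id} did not reach a terminal state after {max_polls} polls"
    )
```

Returning the terminal state (rather than assuming success) lets the caller branch on `failed` or `cancelled` instead of blindly requesting results.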
Validation
Score: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 (Passed) |