Multi-objective optimization framework. NSGA-II, NSGA-III, MOEA/D, Pareto fronts, constraint handling, benchmarks (ZDT, DTLZ), for engineering design and optimization problems.
Overall score: 80
Quality: 73% — does it follow best practices?
Impact: 93% — 2.06x average score across 3 eval scenarios — Passed, no known issues

Optimize this skill with Tessl:

`npx tessl skill review --optimize ./scientific-skills/pymoo/SKILL.md`

Quality
Discovery
82% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description excels at specificity and distinctiveness by naming concrete algorithms, techniques, and benchmarks that clearly define its niche. Its main weakness is the lack of an explicit 'Use when...' clause, which caps the Completeness score at 2 / 3. The trigger terms are excellent for domain experts, but the description would benefit from explicit guidance on when Claude should select this skill.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about multi-objective optimization, evolutionary algorithms, Pareto-optimal solutions, or trade-off analysis in engineering design.'
Consider adding a few more natural-language trigger phrases like 'trade-off analysis', 'evolutionary optimization', or 'multi-criteria decision making' to capture users who may not use the exact algorithm names.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions and algorithms: NSGA-II, NSGA-III, MOEA/D, Pareto fronts, constraint handling, and specific benchmarks (ZDT, DTLZ). These are concrete, identifiable techniques rather than vague language. | 3 / 3 |
| Completeness | The 'what' is well covered (multi-objective optimization with specific algorithms and benchmarks), but there is no explicit 'Use when...' clause or equivalent trigger guidance. The mention of 'engineering design and optimization problems' partially implies when, but it is not an explicit trigger statement. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords that users in this domain would actually use: 'multi-objective optimization', 'NSGA-II', 'NSGA-III', 'MOEA/D', 'Pareto fronts', 'constraint handling', 'ZDT', 'DTLZ', 'engineering design'. These are the exact terms someone working on optimization problems would mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to the specific algorithm names (NSGA-II, NSGA-III, MOEA/D) and benchmark suites (ZDT, DTLZ). This is a very clear niche that is unlikely to conflict with other skills. | 3 / 3 |
| Total | 11 / 12 Passed | |
Implementation
64% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured and highly actionable skill with excellent executable code examples covering the full range of pymoo functionality. Its main weaknesses are verbosity (explaining concepts Claude already knows, overly detailed inline content that could be in reference files) and lack of validation/verification checkpoints in workflows. The progressive disclosure structure is reasonable but the main file is heavier than it needs to be.
Suggestions
Remove the 'When to Use This Skill' section and trim 'Core Concepts' — Claude understands optimization terminology. Move algorithm selection tables and benchmark problem listings to the referenced files.
Add explicit validation checkpoints to workflows, such as checking convergence (e.g., inspecting result.algorithm.n_gen, plotting convergence history), verifying constraint satisfaction (result.CV), and comparing against known Pareto fronts when available.
Tighten the troubleshooting and best practices sections — most items are general optimization knowledge that Claude already possesses. Keep only pymoo-specific gotchas like constraint formulation conventions and NSGA-III reference direction requirements.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly comprehensive but includes unnecessary sections like 'When to Use This Skill' (Claude can infer this), 'Core Concepts' explaining what single/multi/many-objective means, and verbose best practices that are general optimization knowledge. The algorithm selection tables and code examples are efficient, but overall the document could be significantly tightened. | 2 / 3 |
| Actionability | Excellent actionability with fully executable, copy-paste ready code examples for every workflow. Custom problem definitions include both constrained and unconstrained variants with clear constraint formulation rules. Algorithm configuration, operator customization, and visualization all have concrete, runnable code. | 3 / 3 |
| Workflow Clarity | Workflows are clearly numbered and sequenced with good algorithm selection guidance per workflow. However, there are no validation checkpoints: no steps to verify solution quality, check convergence, or validate that constraints are satisfied before proceeding. For optimization workflows that can silently produce poor results, explicit verification steps (e.g., checking convergence metrics, validating feasibility) are important but missing. | 2 / 3 |
| Progressive Disclosure | The skill references multiple external files (references/*.md, scripts/*.py) with clear navigation and grep patterns, which is good structure. However, the main SKILL.md itself is quite long (~400+ lines) with substantial inline content that could be offloaded to references. The algorithm selection tables, benchmark problem listings, and operator details could live in the referenced files. Without bundle files to verify, the references appear well organized, but the main file retains too much detail. | 2 / 3 |
| Total | 9 / 12 Passed | |
Validation
81% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (570 lines); consider splitting into references/ and linking | Warning |
| metadata_version | 'metadata.version' is missing | Warning |
| Total | 9 / 11 Passed | |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.