
endor-score

Evaluate open source package health before adoption. Use when the user says "should I use this package", "is lodash well-maintained", "endor score express", "package health", "compare lodash vs underscore", "evaluate this dependency", or wants activity, popularity, security, and quality scores. Do NOT use for checking known CVEs in a package (/endor-check) or scanning the whole repo (/endor-scan).

Overall score: 95

Quality: 93%. Does it follow best practices?
Impact: Pending. No eval scenarios have been run.
Security (by Snyk): Advisory. Suggest reviewing before use.


Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that hits all the marks. It provides specific capabilities, abundant natural trigger terms, explicit 'Use when' and 'Do NOT use' clauses, and clear boundaries against related skills. The inclusion of concrete example phrases and negative scope boundaries makes this particularly effective for skill selection.

Dimension scores:

Specificity: 3/3. Lists multiple concrete actions: evaluate package health; provide activity, popularity, security, and quality scores; compare packages. Also explicitly distinguishes what it does NOT do (checking CVEs, scanning repos).

Completeness: 3/3. Clearly answers both 'what' (evaluate open source package health; provide activity, popularity, security, and quality scores) and 'when' (an explicit 'Use when...' clause with multiple trigger phrases). Also includes negative boundaries via 'Do NOT use for' guidance.

Trigger Term Quality: 3/3. Excellent coverage of natural user phrases: 'should I use this package', 'is lodash well-maintained', 'package health', 'compare lodash vs underscore', 'evaluate this dependency'. These are realistic phrases users would actually say.

Distinctiveness / Conflict Risk: 3/3. Highly distinctive, with a clear niche (package health evaluation before adoption) and explicit negative boundaries distinguishing it from related skills (/endor-check for CVEs, /endor-scan for repo scanning). Very unlikely to conflict.

Total: 12 / 12. Passed.

Implementation: 87%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-crafted skill that efficiently communicates a multi-step evaluation workflow with concrete tool invocations and clear output formatting guidance. Its main weakness is the lack of explicit validation checkpoints between steps—particularly verifying that the package UUID was successfully retrieved before querying metrics. The recommendation thresholds and error handling table are excellent additions that make the skill immediately actionable.

Suggestions

Add a brief validation checkpoint after retrieving the package UUID in Step 2 (e.g., 'If no PackageVersion found, skip to error handling—do not attempt Metric query').
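The suggested checkpoint amounts to a guard clause between the UUID lookup and the Metric query. A minimal sketch follows; note that `lookup_package_uuid` and `query_metrics` are hypothetical stand-ins invented for illustration, not actual Endor Labs tool names or APIs:

```python
# Hypothetical stand-ins for the skill's PackageVersion lookup and Metric
# query tools; the names and return shapes are illustrative only.
REGISTRY = {"lodash": "pkg-uuid-1234"}

def lookup_package_uuid(name):
    """Return the package UUID, or None when no PackageVersion is found."""
    return REGISTRY.get(name)

def query_metrics(uuid):
    """Pretend Metric query; assumes it is only called with a valid UUID."""
    return {"uuid": uuid, "activity": 90, "popularity": 95}

def evaluate_package(name):
    uuid = lookup_package_uuid(name)
    if uuid is None:
        # Validation checkpoint: no PackageVersion was found, so skip
        # straight to error handling and never attempt the Metric query.
        return {"error": f"No PackageVersion found for '{name}'"}
    return query_metrics(uuid)
```

The point of the guard is that a missing UUID is caught at the step boundary, rather than surfacing later as a confusing failure inside the Metric query.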

Dimension scores:

Conciseness: 3/3. The content is lean and efficient. Every section serves a purpose; no unnecessary explanations of what packages are or how ecosystems work. The tables and thresholds are compact and informative.

Actionability: 3/3. Provides concrete CLI commands with actual syntax, specific MCP tool names with their parameters, and clear filter patterns. The bash examples are copy-paste ready with appropriate placeholders.

Workflow Clarity: 2/3. Steps are clearly sequenced (1-5), with a logical progression from data gathering to presentation. However, there are no explicit validation checkpoints or feedback loops: for example, no guidance on what to do if Step 1 returns partial data before proceeding to Step 2, or on how to verify that the package UUID was correctly retrieved before using it in the Metric query.

Progressive Disclosure: 3/3. Well structured, with clear sections, appropriate use of tables for compact reference, and a single-level reference to 'references/data-sources.md' for data source policy. Cross-references to related commands (/endor-check, /endor-scan) are clearly signaled in the Next Steps section.

Total: 11 / 12. Passed.

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed.

Validation for skill structure: no warnings or errors.

Repository: endorlabs/skills-ideas (Reviewed)
