
analyzing-nft-rarity

Calculate NFT rarity scores and rank tokens by trait uniqueness. Use when analyzing NFT collections, checking token rarity, or comparing NFTs. Trigger with phrases like "check NFT rarity", "analyze collection", "rank tokens", "compare NFTs".
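The skill's description centers on scoring tokens by trait uniqueness. The SKILL.md body is not reproduced here, but the widely used "statistical rarity" formula it likely resembles can be sketched as follows. This is an illustrative sketch only, not necessarily what rarity_analyzer.py implements; the function and trait names are hypothetical:

```python
from collections import Counter

def rarity_scores(collection):
    """Score each token by summing n/frequency over its traits.

    collection: list of dicts mapping trait type -> trait value.
    Rarer trait values contribute larger terms, so a higher score
    means a rarer token (the common "statistical rarity" approach).
    """
    n = len(collection)
    # Count how often each (trait type, value) pair appears.
    counts = Counter(
        (ttype, value)
        for token in collection
        for ttype, value in token.items()
    )
    return [
        sum(n / counts[(ttype, value)] for ttype, value in token.items())
        for token in collection
    ]

tokens = [
    {"background": "blue", "hat": "crown"},
    {"background": "blue", "hat": "cap"},
    {"background": "red", "hat": "cap"},
]
print(rarity_scores(tokens))  # [4.5, 3.0, 4.5]
```

Ranking is then a sort by score descending; tokens with equal scores (like the first and third above) tie and need a documented tie-break rule.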

73

Quality: 68% (Does it follow best practices?)

Impact: Pending (No eval scenarios have been run)

Security by Snyk: Advisory (Suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/crypto/nft-rarity-analyzer/skills/analyzing-nft-rarity/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-crafted skill description that clearly defines its purpose, provides explicit trigger guidance, and occupies a distinct niche. It uses third person voice correctly, lists concrete actions, and includes both a 'Use when' clause and explicit trigger phrases. The description is concise without being vague.

Dimension scores

Specificity: 3/3
Lists multiple specific, concrete actions: 'Calculate NFT rarity scores', 'rank tokens by trait uniqueness'. These are clear, actionable capabilities.

Completeness: 3/3
Clearly answers both 'what' (calculate rarity scores, rank tokens by trait uniqueness) and 'when' (explicit 'Use when' clause plus a 'Trigger with phrases' section listing specific triggers).

Trigger Term Quality: 3/3
Excellent coverage of natural trigger terms: 'check NFT rarity', 'analyze collection', 'rank tokens', 'compare NFTs', 'NFT collections', 'token rarity'. These are phrases users would naturally say.

Distinctiveness / Conflict Risk: 3/3
NFT rarity scoring is a very specific niche. The terms 'NFT', 'rarity scores', 'trait uniqueness', and 'rank tokens' are highly distinctive and unlikely to conflict with other skills.

Total: 12 / 12 (Passed)

Implementation: 37%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides a reasonable CLI reference for an NFT rarity analysis tool with concrete commands, but suffers from confusing inline comments, lack of workflow sequencing, and missing validation steps. The dependency on a script not included in the bundle undermines actionability, and the content reads more like a tool's README than an actionable skill for Claude.

Suggestions

Add a clear sequential workflow (e.g., 'First fetch collection → verify data → analyze rarity → export') with validation checkpoints after API fetching and before export.

Fix or remove the confusing inline comments ('# port 1234 - example/test', '# 5678: 1234: 9012 = configured value') and replace with meaningful explanations.

Include the referenced bundle files (rarity_analyzer.py, errors.md, examples.md) or at minimum provide expected output examples inline so Claude can verify correct behavior.

Remove the vague 'Output' section and 'Supported Collections' section — Claude can infer these from the tool's actual output.
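Taken together, the first and third suggestions amount to restructuring SKILL.md around an ordered, checkpointed workflow. A hypothetical sketch of such a section (step wording and checkpoints are illustrative, since the actual rarity_analyzer.py commands are not shown here):

```markdown
## Workflow

1. Fetch the collection's trait data with rarity_analyzer.py.
2. Checkpoint: confirm the number of tokens fetched matches the
   collection size; re-fetch on a mismatch before continuing.
3. Compute rarity scores and rank the tokens.
4. Checkpoint: spot-check that the top-ranked token's traits are
   genuinely uncommon (compare against the expected output in examples.md).
5. Export the ranked results.
```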

Dimension scores

Conciseness: 2/3
Mostly efficient with clear command examples, but includes some unnecessary sections like 'Supported Collections' and 'Overview' bullet points that describe rather than instruct. The algorithm table is useful but the 'Output' section is vague filler.

Actionability: 2/3
Provides concrete CLI commands, which is good, but all commands depend on a script (rarity_analyzer.py) that isn't provided in the bundle. The inline comments like '# port 1234 - example/test' and '# 5678: 1234: 9012 = configured value' are confusing and appear to be garbled or placeholder text rather than meaningful guidance.

Workflow Clarity: 1/3
Steps are listed but there's no clear workflow sequence: it's a flat list of independent commands with no validation checkpoints, no error recovery guidance, and no indication of when to use which step. For operations involving API calls and data export, there should be validation steps (e.g., verify fetched data before analysis).

Progressive Disclosure: 2/3
References to errors.md and examples.md are well-signaled and one level deep, which is good. However, no bundle files are provided, so these references are unverifiable. The main content includes some sections (like the algorithm table and output descriptions) that could be in reference files, while the referenced examples.md likely contains critical workflow information that should be summarized inline.

Total: 7 / 12 (Passed)

Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed

allowed_tools_field (Warning): 'allowed-tools' contains unusual tool name(s).

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 9 / 11 (Passed)

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

