Agent skill for pagerank-analyzer - invoke with $agent-pagerank-analyzer
37
7% (Does it follow best practices?)
Impact: 85%
9.44x average score across 3 eval scenarios
Passed: no known issues

Optimize this skill with Tessl:
npx tessl skill review --optimize ./.agents/skills/agent-pagerank-analyzer/SKILL.md

Quality
Discovery
0%. Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is essentially a placeholder that only names the tool and its invocation command. It provides no information about what the skill does, when to use it, or what user requests should trigger it. It would be nearly impossible for Claude to correctly select this skill from a pool of available options.
Suggestions
- Add concrete actions describing what pagerank-analyzer does, e.g., 'Computes PageRank scores for web pages, analyzes link graphs, identifies high-authority nodes in a network.'
- Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about page rank, link analysis, graph centrality, website authority, or SEO ranking analysis.'
- Remove the invocation syntax ('invoke with $agent-pagerank-analyzer') from the description, since it is operational detail rather than selection criteria, and replace it with domain-specific keywords that help distinguish this skill.
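Pulling these suggestions together, a revised description might look like the following. The wording is illustrative only, and the frontmatter fields assume the usual SKILL.md `name`/`description` layout:

```yaml
# Hypothetical SKILL.md frontmatter with a discovery-friendly description
name: pagerank-analyzer
description: >
  Computes PageRank scores for web pages and link graphs, and identifies
  high-authority nodes in a network. Use when the user asks about page rank,
  link analysis, graph centrality, website authority, or SEO ranking analysis.
```

Note that the description now answers both "what does this do" and "when should this be used", and every keyword is a term a user would plausibly type.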
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names a tool ('pagerank-analyzer') but describes no concrete actions. There is no indication of what the skill actually does beyond invoking an agent. 'Agent skill for pagerank-analyzer' is abstract and vague. | 1 / 3 |
| Completeness | The description fails to answer both 'what does this do' and 'when should Claude use it'. There is no 'Use when...' clause and no explanation of capabilities. | 1 / 3 |
| Trigger Term Quality | The only keyword is 'pagerank-analyzer', which is a tool name rather than a natural term a user would say. Users might say 'page rank', 'link analysis', 'graph ranking', or 'SEO analysis', none of which appear. | 1 / 3 |
| Distinctiveness / Conflict Risk | The description is so vague that it provides no clear niche. Without knowing what the skill does, it could conflict with any analysis-related skill, and the generic 'agent skill' framing offers no differentiation. | 1 / 3 |
| Total | | 4 / 12 (Passed) |
Implementation
14%. Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is heavily padded with abstract descriptions, capability catalogs, and domain overviews that Claude already knows, while lacking the concrete, executable guidance needed for actual use. The MCP tool call examples provide some value but are undermined by undefined helper functions and missing validation steps. The content reads more like a marketing document than an actionable skill file.
Suggestions
- Remove all abstract capability lists (Advanced Graph Algorithms, Performance Optimization, Application Domains, Integration Patterns) and focus only on concrete MCP tool usage with real, executable examples.
- Make code examples fully executable by replacing undefined helper functions with actual implementations or realistic inline code that demonstrates the complete workflow.
- Add explicit validation checkpoints to workflows, e.g., verify PageRank scores sum to ~1.0, check convergence status in results, validate graph matrix properties before computation.
- Extract detailed integration patterns and application domain content into separate reference files, keeping SKILL.md as a concise quick-start with tool signatures and one or two complete examples.
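As a sketch of what such validation checkpoints look like, the following uses a plain power-iteration PageRank as a stand-in for the skill's MCP tool (whose actual API is not shown in this review). The function name and graph are illustrative:

```python
# Minimal power-iteration PageRank, used here only to demonstrate the
# suggested validation checkpoints (convergence flag, score-sum check).
def pagerank(graph, damping=0.85, tol=1e-10, max_iter=100):
    """graph: dict mapping each node to its list of outbound neighbors."""
    nodes = list(graph)
    n = len(nodes)
    scores = {v: 1.0 / n for v in nodes}
    converged = False
    for _ in range(max_iter):
        nxt = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            out = graph[v]
            if out:
                # Distribute this node's score evenly along its links.
                share = damping * scores[v] / len(out)
                for u in out:
                    nxt[u] += share
            else:
                # Dangling node: spread its score uniformly over all nodes.
                for u in nodes:
                    nxt[u] += damping * scores[v] / n
        delta = sum(abs(nxt[v] - scores[v]) for v in nodes)
        scores = nxt
        if delta < tol:
            converged = True
            break
    return scores, converged

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
scores, converged = pagerank(graph)

# The validation checkpoints the suggestion calls for:
assert converged, "PageRank did not converge"
assert abs(sum(scores.values()) - 1.0) < 1e-6, "scores should sum to ~1.0"
assert all(s > 0 for s in scores.values()), "every node should have positive score"
```

A skill workflow would run equivalent checks on the MCP tool's results before passing scores downstream, so that a non-converged or malformed result fails loudly instead of silently producing bad rankings.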
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose, with extensive sections that describe concepts Claude already knows (community detection, graph ML, performance optimization techniques). Bullet-point lists of abstract capabilities like 'Spectral Clustering', 'GPU Acceleration', and 'Streaming Algorithms' add no actionable value. The content could be reduced by 70%+ without losing useful information. | 1 / 3 |
| Actionability | The code examples show specific MCP tool calls with parameters, which is useful. However, many examples use undefined helper functions (extractTopRecommendations, identifyInfluencers, load_graph_partition), making them non-executable pseudocode. The distributed PageRank Python example won't run as written. Many sections are purely descriptive bullet lists with no concrete guidance. | 2 / 3 |
| Workflow Clarity | The 'Example Workflows' section lists high-level steps like 'Build social network graph from user interactions' and 'Compute PageRank scores' without any concrete commands, validation checkpoints, or error recovery. For operations involving large-scale graph processing, there are no validation or verification steps anywhere in the skill. | 1 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files. All content is inline regardless of depth or relevance. Sections like 'Application Domains', 'Integration Patterns', and 'Performance Optimization' are abstract catalogs that could either be removed or split into separate reference files. | 1 / 3 |
| Total | | 5 / 12 (Passed) |
Validation
100%. Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.