tessl install https://github.com/softaworks/agent-toolkit --skill perplexity
Web search and research using Perplexity AI. Use when user says "search", "find", "look up", "ask", "research", or "what's the latest" for generic queries. NOT for library/framework docs (use Context7) or workspace questions.
Average Score: 92%
Content: 92%
Description: 90%

Generated Validations
Total score: 14/16

| Criterion | Result |
|---|---|
| skill_md_line_count | SKILL.md line count is 129 (<= 500) |
| frontmatter_valid | YAML frontmatter is valid |
| name_field | 'name' field is valid: 'perplexity' |
| description_field | 'description' field is valid (225 chars) |
| description_voice | 'description' uses third-person voice |
| description_trigger_hint | Description includes an explicit trigger hint |
| compatibility_field | 'compatibility' field not present (optional) |
| allowed_tools_field | 'allowed-tools' field not present (optional) |
| metadata_version | 'metadata' field is not a dictionary |
| metadata_field | 'metadata' field not present (optional) |
| license_field | 'license' field is missing |
| frontmatter_unknown_keys | No unknown frontmatter keys found |
| body_present | SKILL.md body is present |
| body_examples | Examples detected (code fence or 'Example' wording) |
| body_output_format | Output/return/format terms detected |
| body_steps | Step-by-step structure detected (ordered list) |
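
For reference, SKILL.md frontmatter that would satisfy all of the checks above could look roughly like this. This is a minimal sketch, not the skill's actual file: the `name` and `description` values are taken from the skill listing, while the `license` and `metadata` values are illustrative placeholders (the table notes that `license` is currently missing and `metadata` is not a dictionary).

```yaml
---
name: perplexity
description: >-
  Web search and research using Perplexity AI. Use when user says "search",
  "find", "look up", "ask", "research", or "what's the latest" for generic
  queries. NOT for library/framework docs (use Context7) or workspace
  questions.
license: MIT        # illustrative: license_field only checks that the field is present
metadata:           # illustrative: metadata_version expects a dictionary here
  version: "1.0.0"
---
```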
Content
Suggestions: 1
Total score: 11/12

| Dimension | Assessment | Score |
|---|---|---|
| conciseness | The content is lean and efficient, using tables, bullet points, and minimal prose. No unnecessary explanations of what Perplexity or search tools are; it assumes Claude's competence throughout. | 3/3 |
| actionability | Provides fully executable TypeScript code examples with specific parameters, clear decision trees for tool selection, and concrete examples of correct vs. incorrect usage patterns. | 3/3 |
| workflow_clarity | Clear priority-ordered tool-selection chain with explicit decision criteria. The 'Which Perplexity tool?' quick reference and examples section provide unambiguous guidance for each scenario. | 3/3 |
| progressive_disclosure | Content is well organized with clear sections, but everything is inline in a single file. References to other tools (Context7, Graphite MCP, etc.) are mentioned but not linked to their respective skill files. | 2/3 |
Suggestions
Add links to related skill files (e.g., '[Context7 MCP](./context7.md)') to improve navigation between related tools
Overall Assessment
This is a high-quality skill that excels at conciseness and actionability. The tool selection chain and concrete examples make it immediately usable. The only weakness is that cross-references to related skills (Context7, Graphite MCP, Nx MCP) could be linked rather than just mentioned.
Description
Suggestions: 1
Total score: 11/12

| Dimension | Assessment | Score |
|---|---|---|
| specificity | Names the domain (web search/research) and the tool (Perplexity AI), but lacks specific concrete actions beyond generic 'search and research'. Doesn't specify what types of results it returns or capabilities such as summarization and source citation. | 2/3 |
| completeness | Clearly answers both what (web search using Perplexity AI) and when (explicit trigger terms listed). Also includes helpful exclusion criteria ('NOT for library/framework docs, use Context7'), which aids skill selection. | 3/3 |
| trigger_term_quality | Excellent coverage of natural trigger terms users would actually say: 'search', 'find', 'look up', 'ask', 'research', 'what's the latest'. These are common, natural phrases for web search requests. | 3/3 |
| distinctiveness_conflict_risk | Highly distinctive with explicit boundary conditions. The exclusion of library/framework docs (directing to Context7) and workspace questions creates clear delineation from other skills, significantly reducing conflict risk. | 3/3 |
Suggestions
Add 2-3 specific concrete actions to improve specificity, e.g., 'retrieves current information, summarizes findings with sources, answers factual questions about recent events'
Overall Assessment
This is a well-crafted skill description with strong trigger terms and excellent completeness including explicit exclusion criteria. The main weakness is the lack of specific concrete actions beyond generic 'search and research' - it could benefit from listing specific capabilities like 'retrieve current information', 'summarize findings', or 'cite sources'.
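
As a sketch of how that suggestion could be applied, the description field might be extended along these lines. The added capability wording is taken from the suggestion above, not from the published skill:

```yaml
description: >-
  Web search and research using Perplexity AI: retrieves current information,
  summarizes findings with cited sources, and answers factual questions about
  recent events. Use when user says "search", "find", "look up", "ask",
  "research", or "what's the latest" for generic queries. NOT for
  library/framework docs (use Context7) or workspace questions.
```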