横纵分析法 (Horizontal-Vertical Analysis) deep-research Skill. Proposed by 数字生命卡兹克 (Digital Life Kazik), it combines diachronic-synchronic analysis, longitudinal and cross-sectional research design, the case-study method, and competitive strategy analysis. Use it when the user wants to systematically research a product, company, concept, technology, or person. The vertical axis traces the subject's full life history, the horizontal axis compares it systematically against competitors, the two axes are crossed to produce insights, and the final output is a PDF research report.
Overall score: 70%. Does it follow best practices?
Impact: Pending. No eval scenarios have been run.
Advisory: Suggest reviewing before use.
Quality
Discovery: 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description does a good job of establishing a clear niche and explicitly stating both what the skill does and when to use it, which are its strongest aspects. However, it leans heavily on methodological jargon that users are unlikely to use in natural queries, and the specific actions could be more concretely enumerated rather than described abstractly through the framework's terminology.
Suggestions
- Add more natural trigger terms users would actually say, such as '竞品分析' (competitive analysis), '行业研究' (industry research), '深度调研' (deep investigation), or English equivalents like 'compare products' and 'market analysis'.
- List more concrete actions beyond the framework description, e.g., 'collects historical milestones, maps competitive landscapes, identifies strategic inflection points, generates structured PDF reports with visualizations'.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (deep research using horizontal-vertical analysis) and mentions some actions, like tracking the lifecycle on the vertical axis, systematic comparison with competitors on the horizontal axis, and outputting a PDF research report. However, the concrete actions are somewhat abstract (e.g., 'cross-producing insights') rather than a list of discrete, tangible operations. | 2 / 3 |
| Completeness | The description clearly answers both 'what' (horizontal-vertical analysis combining multiple research methodologies, tracking lifecycle, competitor comparison, producing PDF reports) and 'when' ('当用户想要系统性研究一个产品、公司、概念、技术或人物时使用', i.e., use when the user wants to systematically research a product, company, concept, technology, or person). The 'when' clause is explicit, with clear trigger conditions. | 3 / 3 |
| Trigger Term Quality | It includes some relevant keywords like '研究' (research), '产品' (product), '公司' (company), '技术' (technology), '竞品' (competitors), and 'PDF研究报告' (PDF research report). However, it misses common natural user phrases like 'competitive analysis', 'market research', 'deep dive', 'comparison', or English equivalents that users might naturally say. The methodology-specific jargon (历时-共时分析, 纵向-横截面研究设计) is technical and unlikely to match user queries. | 2 / 3 |
| Distinctiveness / Conflict Risk | The description carves out a very specific niche: a named methodology (横纵分析法) for systematic deep research combining longitudinal and cross-sectional analysis with competitive comparison, outputting PDF reports. This is distinctive enough to be unlikely to conflict with generic research or analysis skills. | 3 / 3 |
| Total | | 10 / 12 Passed |
Implementation: 50%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a clear high-level research methodology framework with a well-defined 5-step workflow, but it operates at a conceptual/instructional level rather than providing executable, concrete guidance. The academic attribution adds unnecessary tokens, validation checkpoints are missing from the workflow, and the skill relies on an external GitHub link for critical details (sub-agent instructions, PDF script usage, writing norms) without providing enough actionable content in the main file itself.
Suggestions
- Add concrete, executable examples: show the actual command to run md_to_pdf.py, provide a sample sub-agent prompt/tool call, and include a snippet of the expected Markdown report structure.
- Add validation checkpoints: e.g., verify search results cover both axes before proceeding to analysis, review the draft structure before full writing, and test PDF generation before finalizing.
- Remove or drastically shorten the methodology attribution block; Claude doesn't need to know about Saussure or the academic lineage to execute the skill.
- Either inline the critical referenced content (sub-agent search instructions, PDF script usage) or provide specific file references (e.g., 'See [SEARCH_GUIDE.md](SEARCH_GUIDE.md)') instead of a generic GitHub link.
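The validation-checkpoint suggestion above can be made concrete with a small helper that gates the final step of the workflow. This is a minimal sketch under assumptions: the skill's actual script (md_to_pdf.py) and its interface are not shown in the reviewed file, so the invocation in the comments is hypothetical.

```python
from pathlib import Path

def pdf_checkpoint(pdf_path: str, min_bytes: int = 1024) -> bool:
    """Validation checkpoint: confirm the PDF was actually produced,
    is non-trivially sized, and starts with the PDF magic bytes."""
    p = Path(pdf_path)
    if not p.is_file() or p.stat().st_size < min_bytes:
        return False
    with p.open("rb") as f:
        return f.read(5) == b"%PDF-"

# Hypothetical usage; md_to_pdf.py's real flags are not documented here:
# subprocess.run(["python", "scripts/md_to_pdf.py", "report.md"], check=True)
# if not pdf_checkpoint("report.pdf"):
#     ...  # regenerate or report the failure instead of declaring success
```

A check like this is cheap to run after the generation step and turns a silent script failure into an explicit retry point in the workflow.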
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The methodology attribution block at the top is unnecessary context for Claude (Saussure references, academic lineage). The type-adaptation section and writing-style section are reasonably concise. Some redundancy exists, but overall it's moderately efficient. | 2 / 3 |
| Actionability | The workflow steps are described at a conceptual level rather than with executable specifics. Sub-agent usage is mentioned, but no concrete tool calls or code are shown. The PDF generation references a script but doesn't show the actual command. Word-count ranges and content-coverage lists provide some concrete guidance but lack copy-paste-ready examples. | 2 / 3 |
| Workflow Clarity | The five-step sequence is clearly laid out and logically ordered, which is good. However, there are no validation checkpoints: no step to verify search-result quality, no review gate before PDF generation, no error handling if the script fails. For a multi-step research workflow producing a final deliverable, the absence of feedback loops is notable. | 2 / 3 |
| Progressive Disclosure | The skill references a GitHub repo and a scripts directory for detailed content, which is good progressive disclosure in principle. However, the main file itself contains substantial inline detail (word counts, sub-agent specs, writing-style rules) that could be split out, and the GitHub link at the end is vague rather than pointing to specific sub-documents. The reference to a 'complete version' at the end undermines the current document's authority. | 2 / 3 |
| Total | | 8 / 12 Passed |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
All 11 / 11 validation checks passed.
Validation for skill structure: no warnings or errors.