AI-driven comprehensive health analysis system: integrates multi-dimensional health data, identifies abnormal patterns, predicts health risks, and provides personalized recommendations. Supports intelligent Q&A and AI-generated health reports.
45 · 33%

Does it follow best practices?

- Impact: Pending (no eval scenarios have been run)
- Passed: no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./skills/ai-analyzer/SKILL.md`

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description covers a broad health-analysis domain with multiple listed capabilities, but relies heavily on buzzwords like 'AI驱动' (AI-driven) and '多维度' (multi-dimensional) without providing concrete, specific actions. The most critical weakness is the complete absence of a 'Use when...' clause, making it difficult for Claude to know when to select this skill over others. The description reads more like a product marketing blurb than a functional skill selector.
Suggestions
Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks about health data analysis, medical test results, health risk assessment, or generating health reports.'
Replace vague buzzwords like 'AI驱动' (AI-driven) and '多维度' (multi-dimensional) with concrete actions such as 'Parses blood test results, tracks vital-sign trends, flags out-of-range lab values.'
Include common user-facing terms and file types users might mention, such as '体检报告' (physical exam report), '血液检查' (blood test), '心率数据' (heart-rate data), and 'CSV健康数据' (CSV health data), to improve trigger-term coverage.
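Taken together, the suggestions above could yield frontmatter along these lines. This is an illustrative sketch only: the name, wording, and trigger terms are assumptions, not the skill's actual frontmatter.

```yaml
---
name: ai-analyzer
description: >
  Parses blood test results, tracks vital-sign trends, flags out-of-range
  lab values, predicts health risks, and generates HTML health reports.
  Use when the user asks about health data analysis, medical test results
  (体检报告, 血液检查), heart-rate data (心率数据), CSV health data, health
  risk assessment, or generating health reports.
---
```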
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (health analysis) and lists several actions (integrate multi-dimensional health data, identify abnormal patterns, predict health risks, provide personalized recommendations, support Q&A and report generation), but many of these are high-level and somewhat buzzword-heavy rather than concrete specific operations. | 2 / 3 |
| Completeness | The description addresses 'what does this do' at a general level but completely lacks any 'when should Claude use it' guidance. There is no 'Use when...' clause or equivalent explicit trigger guidance, which per the rubric should cap completeness at 2, and since the 'what' is also somewhat vague and buzzword-laden, a score of 1 is appropriate. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords like '健康数据' (health data), '健康风险' (health risk), '健康报告' (health report), and '智能问答' (intelligent Q&A), but lacks common user-facing trigger variations and natural language terms a user might actually say when requesting this skill. Terms like 'AI驱动' and '多维度' are more marketing language than trigger terms. | 2 / 3 |
| Distinctiveness / Conflict Risk | The health analysis domain provides some distinctiveness, but the broad scope covering data integration, pattern recognition, risk prediction, recommendations, Q&A, and report generation is so wide that it could overlap with more specific health-related skills or general data analysis skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation: 35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a product specification or feature overview than an actionable skill for Claude. It is excessively verbose with feature lists, algorithm descriptions, and trigger phrase enumerations that don't add operational value. The workflow lacks validation checkpoints and concrete executable code, and the content would benefit greatly from being split into a concise overview with references to detailed sub-documents.
Suggestions
Cut the feature descriptions, algorithm explanations, and trigger phrase lists drastically—focus only on what Claude needs to DO, not what the system conceptually supports. Move detailed algorithm and data source documentation to separate reference files.
Replace the pseudocode (readFile, exists) with actual executable code or specify the exact tool calls Claude should use (e.g., Read tool with specific file paths), making each step copy-paste actionable.
Add explicit validation checkpoints: verify data files exist and contain expected schema before analysis, validate risk score outputs are within expected ranges, and verify HTML report generation succeeded before updating history.
Restructure as a concise overview (under 80 lines) with links to separate files for data sources, algorithm details, safety guidelines, and report templates.
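The validation checkpoints suggested above could be sketched as a small pre-flight routine. The file path, required field names, and the 0-100 score range below are hypothetical placeholders, not taken from the skill itself:

```python
import json
import os

# Hypothetical data file and required schema fields -- adjust to the
# skill's real data layout before use.
DATA_FILE = "data/health_metrics.json"
REQUIRED_FIELDS = {"date", "heart_rate", "blood_pressure"}


def validate_input(path=DATA_FILE):
    """Verify the data file exists and every record has the expected fields."""
    if not os.path.exists(path):
        raise FileNotFoundError(f"Missing health data file: {path}")
    with open(path) as f:
        records = json.load(f)
    for i, record in enumerate(records):
        missing = REQUIRED_FIELDS - record.keys()
        if missing:
            raise ValueError(f"Record {i} is missing fields: {missing}")
    return records


def validate_risk_score(score):
    """Reject risk scores outside the expected 0-100 range."""
    if not 0 <= score <= 100:
        raise ValueError(f"Risk score {score} outside expected range 0-100")
    return score
```

Running checks like these before analysis, and again before updating history, gives the workflow the explicit failure points the review says it lacks.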
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose, listing extensive feature descriptions, algorithm explanations, and data source tables that Claude already understands conceptually. Much of the content reads like product documentation rather than actionable instructions—e.g., explaining what Pearson correlation is, listing all risk models descriptively, and enumerating trigger phrases extensively. | 1 / 3 |
| Actionability | The execution steps provide some concrete file paths and JavaScript-like pseudocode for reading data, but the code is not truly executable (readFile/exists are not real APIs), and critical steps like 'data integration and preprocessing,' 'multi-dimensional analysis,' and 'risk prediction' are described abstractly without concrete implementation. The actual analytical logic is never shown. | 2 / 3 |
| Workflow Clarity | The 9-step workflow provides a clear sequence, but there are no validation checkpoints, no error handling steps, and no feedback loops. For a system that generates health risk predictions and reports, missing validation (e.g., checking data completeness, verifying risk score calculations, validating generated HTML) is a significant gap. | 2 / 3 |
| Progressive Disclosure | The content is a monolithic wall of text with no references to external files for detailed content. Algorithm explanations, data source tables, and extensive feature lists are all inline when they could be split into separate reference documents. The only external reference is to 'scripts/generate_ai_report.py' which is mentioned but not linked or documented. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation: 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

9 / 11 checks passed.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
1a9f5ac