Get an external patent examiner review of a patent application. Use when user says "专利审查", "patent review", "审查意见", "examiner review", or wants critical feedback on patent claims and specification.
Score: 85 (83%)

- Impact: Pending (no eval scenarios have been run)
- Status: Passed (no known issues)

## Quality

Does it follow best practices?

### Discovery (89%)

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid description with excellent trigger term coverage (including bilingual terms) and a clear 'Use when' clause that makes it easy for Claude to select appropriately. Its main weakness is that the 'what' portion is somewhat general — it could benefit from listing specific actions the skill performs (e.g., evaluating novelty, checking claim scope, identifying prior art issues). Overall, it's a well-constructed description for skill selection purposes.
#### Suggestions

- Add more specific concrete actions to the 'what' portion, e.g., 'Evaluates patent claims for novelty, checks specification sufficiency, identifies prior art concerns, and provides office-action-style feedback.'
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | It names the domain (patent examination/review) and a general action (get an external patent examiner review), but doesn't list multiple specific concrete actions like analyzing claims, checking novelty, evaluating prior art, or reviewing specification compliance. | 2 / 3 |
| Completeness | Clearly answers both 'what' (get an external patent examiner review of a patent application) and 'when' (explicit 'Use when' clause with specific trigger phrases and a broader condition about wanting critical feedback on patent claims). | 3 / 3 |
| Trigger Term Quality | Includes strong natural trigger terms in both Chinese and English: '专利审查', 'patent review', '审查意见', 'examiner review', plus the natural phrase 'critical feedback on patent claims and specification'. Good multilingual coverage of terms users would actually say. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very clear niche: patent examiner review is a highly specific domain. The bilingual trigger terms and focus on examiner-style critical feedback make it unlikely to conflict with general legal, writing, or document review skills. | 3 / 3 |
| **Total** | | **11 / 12 (Passed)** |
### Implementation (77%)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, highly actionable skill that provides concrete MCP tool calls, detailed prompt templates, and a clear multi-step workflow with severity-based triage and validation checkpoints. Its main weakness is that it's somewhat long for a single file — the large prompt templates and output format could benefit from being split into referenced files. Minor verbosity in explanatory sections could be trimmed, but overall the content is substantive and earns most of its tokens.
#### Suggestions

- Extract the large prompt templates (Round 1 and Round 2) into separate referenced files (e.g., prompts/round1_office_action.md) to improve progressive disclosure and reduce SKILL.md length.
- Extract the PATENT_REVIEW.md output template into a separate template file referenced from the main skill to keep the workflow steps concise.
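The two suggestions above amount to a small bundle layout along these lines (a sketch only; file names beyond those mentioned in the review are illustrative):

```text
patent-examiner-review/
├── SKILL.md                      # workflow steps only; references the files below
├── prompts/
│   ├── round1_office_action.md   # Round 1 office-action prompt template
│   └── round2_followup.md        # Round 2 follow-up prompt (name illustrative)
└── templates/
    └── PATENT_REVIEW.md          # output report template
```

With this split, SKILL.md reads each template only at the step that needs it, so the always-loaded portion stays short and the skill scores better on progressive disclosure.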
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is fairly long but most content is substantive: the detailed prompt templates and structured output format earn their place. However, some sections like the Constants block and the explanatory parentheticals in Step 3 (e.g., explaining what CRITICAL/MAJOR/MINOR mean) add moderate bloat. The prompt template itself is necessarily verbose but could be tightened. | 2 / 3 |
| Actionability | The skill provides fully concrete, executable guidance: specific MCP tool calls with config parameters, exact prompt templates to send, a structured output format with table schema, and clear decision criteria (e.g., score below 5/10 triggers rework). The workflow is copy-paste ready with specific file paths and tool invocations. | 3 / 3 |
| Workflow Clarity | The 5-step workflow is clearly sequenced with explicit validation checkpoints: Round 1 review → categorize and fix issues by severity → Round 2 follow-up review with threadId continuity → generate report with checklist. The severity triage (CRITICAL must fix before proceeding, MAJOR should fix, MINOR optional) provides a clear feedback loop, and the final checklist serves as a validation gate. | 3 / 3 |
| Progressive Disclosure | The skill is a single monolithic file with no bundle files or references to supporting documents. At ~150 lines with two large prompt templates and a full output template, some content (like the detailed prompt templates or the output report format) could be split into referenced files. However, the internal structure with clear headers and steps provides reasonable organization within the single file. | 2 / 3 |
| **Total** | | **10 / 12 (Passed)** |
### Validation (81%)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

9 / 11 checks passed.

#### Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| **Total** | 9 / 11 Passed | |
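Both warnings point at the SKILL.md frontmatter. A minimal sketch of a cleaned-up version (the tool names and the custom key are hypothetical placeholders, not taken from the actual skill):

```yaml
---
name: patent-examiner-review
description: >
  Get an external patent examiner review of a patent application.
  Use when user says "专利审查", "patent review", "审查意见", "examiner review",
  or wants critical feedback on patent claims and specification.
# Restrict allowed-tools to recognized tool names; unrecognized names are what
# trigger the allowed_tools_field warning.
allowed-tools: Read, Write
# Unrecognized top-level keys trigger frontmatter_unknown_keys; per the
# checker's own suggestion, remove them or nest them under metadata.
metadata:
  version: "1.0"   # hypothetical custom key
---
```

Resolving both warnings would bring the structure checks to 11 / 11.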