patent-novelty-check

Assess patent novelty and non-obviousness against prior art. Use when the user says "专利查新" (patent novelty search), "patent novelty", "可专利性评估" (patentability assessment), "patentability check", or wants to evaluate whether an invention is patentable.

Overall score: 85

Quality: 83% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Advisory (suggest reviewing before use)


Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description with strong trigger terms in both Chinese and English, a clear 'Use when' clause, and a distinctive domain. The main weakness is that the 'what' portion could be more specific about the concrete actions performed (e.g., comparing claims, analyzing prior art references, generating novelty reports).

Suggestions

Expand the capability description with more concrete actions, e.g., 'Assess patent novelty and non-obviousness against prior art, compare invention claims to existing patents, and generate patentability analysis reports.'
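Applied to this skill's frontmatter, that suggestion might look like the following sketch. A YAML `description` field is assumed from common SKILL.md conventions; the wording combines the reviewer's proposed text with the skill's existing trigger terms:

```yaml
# Hypothetical SKILL.md frontmatter sketch; the field layout is an
# assumption, not taken from the reviewed skill itself.
name: patent-novelty-check
description: >
  Assess patent novelty and non-obviousness against prior art, compare
  invention claims to existing patents, and generate patentability
  analysis reports. Use when the user says "专利查新", "patent novelty",
  "可专利性评估", "patentability check", or wants to evaluate if an
  invention is patentable.
```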

Dimension scores:

Specificity: 2 / 3. Names the domain (patent assessment) and some actions ('assess patent novelty and non-obviousness against prior art'), but doesn't list multiple concrete actions such as searching databases, comparing claims, or generating reports.

Completeness: 3 / 3. Clearly answers both 'what' (assess patent novelty and non-obviousness against prior art) and 'when' (explicit 'Use when' clause with specific trigger phrases and a general condition about evaluating patentability).

Trigger Term Quality: 3 / 3. Excellent coverage of natural trigger terms, including both Chinese ('专利查新', '可专利性评估') and English ('patent novelty', 'patentability check') variations, plus the natural-language trigger 'evaluate if an invention is patentable'.

Distinctiveness / Conflict Risk: 3 / 3. Highly distinctive niche: patent novelty assessment is a very specific domain unlikely to conflict with other skills. The bilingual trigger terms and domain-specific vocabulary ('non-obviousness', 'prior art', 'patentability') make it clearly distinguishable.

Total: 11 / 12 (Passed)

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, highly actionable patent novelty assessment skill with clear multi-step workflows, concrete analysis matrices, and specific output formats. Its main weaknesses are moderate verbosity (particularly in the jurisdiction section and some explanatory passages) and references to shared files that don't exist in the bundle, making progressive disclosure harder to verify. The cross-model examiner verification step with specific MCP syntax is a strong differentiator.

Suggestions

Condense the jurisdiction-specific assessment (Step 5) into a single parameterized template rather than repeating the same PASS/FAIL structure three times with minor label variations.

Provide the referenced shared files (patent-writing-principles.md, patent-format-us.md) as bundle files, or remove the references if they don't exist.
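As a hedged illustration of the first suggestion, the three repeated jurisdiction blocks could collapse into one template with substitution slots. The placeholder names here are invented for illustration, not taken from the skill:

```markdown
<!-- Hypothetical parameterized template; {JURISDICTION}, {NOVELTY_STANDARD},
     and {OBVIOUSNESS_TEST} are invented placeholder names. -->
### Step 5: Jurisdiction assessment ({JURISDICTION})

- Novelty standard applied: {NOVELTY_STANDARD}
- Obviousness / inventive-step test: {OBVIOUSNESS_TEST}
- Verdict: PASS or FAIL, with a one-line rationale

Instantiate once per target jurisdiction (e.g., US, EP, CN) instead of
repeating the full PASS/FAIL structure three times.
```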

Dimension scores:

Conciseness: 2 / 3. The skill is reasonably efficient but includes some unnecessary elaboration. The jurisdiction-specific section repeats the same PASS/FAIL pattern three times with minimal differentiation, and some explanatory text (e.g., the motivation-to-combine bullet points) could be tightened. However, it mostly avoids explaining concepts Claude already knows.

Actionability: 3 / 3. The skill provides highly concrete, actionable guidance: specific matrix formats for anticipation and obviousness analysis, exact MCP call syntax with a prompt template, specific file paths for inputs and outputs, and a complete output template. The claim-element extraction, reference testing, and combination analysis are all clearly specified with executable patterns.

Workflow Clarity: 3 / 3. The six-step workflow is clearly sequenced with logical dependencies (novelty before obviousness, cross-model verification after initial analysis). Each step has explicit outputs and decision criteria. The anticipation analysis includes a clear verdict framework, and the obviousness analysis includes a structured combination matrix. The fallback for an unavailable MCP tool is noted in Key Rules.

Progressive Disclosure: 2 / 3. The skill references shared files (`../shared-references/patent-writing-principles.md`, `../shared-references/patent-format-us.md`) and input files (`patent/PRIOR_ART_REPORT.md`, `patent/INVENTION_BRIEF.md`), but no bundle files are provided to support these references. The content is somewhat long (~120 lines), and the jurisdiction-specific section and output template could be split into separate reference files for better organization.

Total: 10 / 12 (Passed)
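The progressive-disclosure concern above could be addressed by moving long sections into bundled reference files. A hypothetical layout follows: the `shared-references` filenames come from the review itself, while the split-out `references/` files are assumptions, not part of the reviewed skill:

```text
patent-novelty-check/
├── SKILL.md                      # core six-step workflow
└── references/
    ├── jurisdictions.md          # assumed home for the jurisdiction section
    └── output-template.md        # assumed home for the output template
shared-references/
├── patent-writing-principles.md  # referenced by the skill, missing from bundle
└── patent-format-us.md           # referenced by the skill, missing from bundle
```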

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 9 / 11 checks passed

allowed_tools_field: Warning. 'allowed-tools' contains unusual tool name(s).

frontmatter_unknown_keys: Warning. Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 9 / 11 (Passed)

Repository: wanshuiyin/Auto-claude-code-research-in-sleep (Reviewed)
