
notion-automation

Automate Notion tasks via Rube MCP (Composio): pages, databases, blocks, comments, users. Always search tools first for current schemas.

73

Quality: 60% (Does it follow best practices?)

Impact: 2.08x (98% average score across 3 eval scenarios)

Security by Snyk

Advisory: suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./.trae/skills/notion-automation/SKILL.md

Quality

Discovery

57%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description identifies a clear domain (Notion automation) and lists the entity types it works with, giving it reasonable distinctiveness. However, it lacks specific action verbs describing what operations are performed and has no explicit 'Use when...' trigger clause, which limits its effectiveness for skill selection. The inclusion of implementation details ('Rube MCP', 'Composio', 'search tools first for current schemas') takes up space that could be used for user-facing trigger terms.

Suggestions

Add an explicit 'Use when...' clause with natural trigger phrases like 'Use when the user wants to create, update, or query Notion pages, databases, or blocks, or mentions Notion workspace management.'

Replace vague 'automate tasks' with specific actions such as 'create pages, query databases, update blocks, post comments, list users' to improve specificity.

Move implementation details like 'Rube MCP (Composio)' and 'search tools first for current schemas' out of the description or to the end, prioritizing user-facing language.
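Putting those suggestions together, a revised frontmatter description might read as follows (the wording is illustrative, not the skill's actual frontmatter):

```yaml
description: >
  Create, update, and query Notion pages, databases, and blocks; post
  comments and list users. Use when the user wants to manage a Notion
  workspace or mentions Notion pages, databases, or comments.
  Implemented via Rube MCP (Composio); always search tools first for
  current schemas.
```

This puts concrete actions and natural trigger phrases first and pushes the implementation details to the end.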

Specificity (2 / 3): Names the domain (Notion) and lists entity types (pages, databases, blocks, comments, users) but doesn't describe specific concrete actions beyond 'automate tasks'. What actions on pages? Creating, updating, querying? The entities are listed but the operations are vague.

Completeness (2 / 3): The 'what' is partially addressed (automate Notion tasks involving pages, databases, etc.) but there is no explicit 'Use when...' clause. The 'when' is only implied by the mention of Notion. The instruction to 'always search tools first' is operational guidance, not a trigger condition.

Trigger Term Quality (2 / 3): Includes 'Notion', 'pages', 'databases', 'blocks', 'comments', and 'users', which are relevant keywords. However, it also includes technical jargon like 'Rube MCP (Composio)' and 'schemas' that users wouldn't naturally say, and misses natural phrases like 'create a Notion page', 'query a database', or 'add a comment'.

Distinctiveness / Conflict Risk (3 / 3): The description is clearly scoped to Notion via a specific integration (Rube MCP/Composio), making it highly distinctive. It's unlikely to conflict with other skills unless there's another Notion skill, and the specific tooling reference further narrows its niche.

Total: 9 / 12 (Passed)

Implementation

62%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, well-structured skill that clearly documents Notion automation workflows with good tool sequencing and pitfall documentation. Its main weaknesses are moderate verbosity with some redundancy across sections, and the lack of fully executable tool call examples. The workflow clarity is strong with clear sequences and error conditions, but the content could be more concise by eliminating duplication and splitting detailed reference material into separate files.

Suggestions

Deduplicate repeated pitfalls (pagination, case-sensitivity, archived behavior) by consolidating them in the Known Pitfalls section and removing duplicates from individual workflows.

Add at least one complete, executable MCP tool call example with full parameters (e.g., a complete RUBE_SEARCH_TOOLS call followed by a NOTION_QUERY_DATABASE_WITH_FILTER call) to make the skill more actionable.

Move the Quick Reference table and detailed filter syntax examples to a separate REFERENCE.md file, linked from the main skill, to improve progressive disclosure and reduce the main file's length.
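To illustrate the second suggestion, a complete tool-call pair might look like the sketch below. The tool names come from the review text; the payload shapes, property names, and placeholder IDs are assumptions, not the documented Rube MCP schema.

```python
# Illustrative only: payload shapes below are assumptions, not the
# documented Rube MCP schema. Tool names are taken from the review text.

# Step 1: discover current tool schemas before calling anything.
search_call = {
    "tool": "RUBE_SEARCH_TOOLS",
    "arguments": {"query": "notion query database"},
}

# Step 2: query a database with a filter (the 'Status' property and the
# database ID placeholder are hypothetical).
query_call = {
    "tool": "NOTION_QUERY_DATABASE_WITH_FILTER",
    "arguments": {
        "database_id": "<your-database-id>",
        "filter": {"property": "Status", "select": {"equals": "Done"}},
        "page_size": 100,
    },
}

print(search_call["tool"], "->", query_call["tool"])
```

Embedding one such pair in the skill would turn the pseudocode sequences into something an agent can parameterize directly.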

Conciseness (2 / 3): The skill is fairly well organized but quite long (~180 lines). Some sections are somewhat repetitive (e.g., pitfalls about pagination and case-sensitivity appear multiple times across workflows and the Known Pitfalls section). The quick reference table at the end duplicates information already covered in the workflow sections. However, most content is genuinely useful and not explaining things Claude already knows.

Actionability (2 / 3): The skill provides clear tool names, parameter names, and tool sequences, plus concrete JSON filter examples. However, most 'code' blocks are pseudocode sequences rather than actual executable MCP tool calls with complete parameter structures. The setup steps are actionable, but the core workflows describe what to do rather than showing complete, copy-paste-ready tool invocations with full parameter objects.

Workflow Clarity (3 / 3): Each workflow has a clear numbered sequence with prerequisite/required/optional annotations, explicit tool ordering, and well-documented pitfalls that serve as validation checkpoints. The pagination pattern includes an explicit loop condition (continue until has_more is false). The setup section includes a verification step before proceeding. Error conditions and their causes are clearly documented (404 meanings, validation errors, archived page failures).
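The pagination pattern praised here can be sketched as a simple loop. The `query_database` stub and its response shape (`results` / `has_more` / `next_cursor`) are assumptions for illustration, standing in for the real Notion query tool:

```python
# Hypothetical stub standing in for a real Notion database query call.
# The response shape follows the pattern described in the review.
PAGES = {
    None: {"results": ["page-1", "page-2"], "has_more": True, "next_cursor": "c1"},
    "c1": {"results": ["page-3"], "has_more": False, "next_cursor": None},
}

def query_database(start_cursor=None):
    return PAGES[start_cursor]

def fetch_all():
    items, cursor = [], None
    while True:
        resp = query_database(start_cursor=cursor)
        items.extend(resp["results"])
        if not resp["has_more"]:  # explicit loop condition: stop when has_more is false
            break
        cursor = resp["next_cursor"]
    return items

print(fetch_all())  # -> ['page-1', 'page-2', 'page-3']
```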

Progressive Disclosure (2 / 3): The content is well structured with clear sections and a useful quick reference table, but it's all in one monolithic file. The five workflow sections plus common patterns, known pitfalls, and quick reference could benefit from splitting into separate files. The only external reference is the Composio toolkit docs link. For a skill this long, some content (e.g., the full quick reference table, detailed filter syntax) could live in supplementary files.

Total: 9 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 passed

Validation for skill structure

frontmatter_unknown_keys (Warning): Unknown frontmatter key(s) found; consider removing or moving to metadata.

Total: 10 / 11 (Passed)

Repository: Lingjie-chen/MT5 (Reviewed)
