Automate GitLab project management, issues, merge requests, pipelines, branches, and user operations via Rube MCP (Composio). Always search tools first for current schemas.
Overall: 65

Quality: 55% (Does it follow best practices?)
Impact: 76% (1.55x average score across 3 eval scenarios)
Advisory: Suggest reviewing before use

Optimize this skill with Tessl:
`npx tessl skill review --optimize ./plugins/all-skills/skills/gitlab-automation/SKILL.md`

Quality

Discovery
Score: 82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is strong in specificity and distinctiveness, listing concrete GitLab operations and naming the specific integration mechanism. Its main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know precisely when to select this skill over others. The trigger terms are naturally aligned with what users would say when requesting GitLab automation.
Suggestions
Add an explicit 'Use when...' clause, e.g., 'Use when the user asks about GitLab projects, creating or managing issues, reviewing merge requests, checking pipeline status, or managing branches and users.'
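A revised frontmatter incorporating this suggestion might look like the following sketch; the `name` value and the exact 'Use when...' wording are illustrative, not taken from the skill itself:

```yaml
---
name: gitlab-automation
description: >
  Automate GitLab project management, issues, merge requests, pipelines,
  branches, and user operations via Rube MCP (Composio). Use when the user
  asks about GitLab projects, creating or managing issues, reviewing merge
  requests, checking pipeline status, or managing branches and users.
  Always search tools first for current schemas.
---
```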
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: project management, issues, merge requests, pipelines, branches, and user operations. Also specifies the mechanism (Rube MCP/Composio) and includes a procedural instruction to search tools first. | 3 / 3 |
| Completeness | Clearly answers 'what does this do' (automate GitLab operations via Rube MCP), but lacks an explicit 'Use when...' clause specifying when Claude should select this skill. The 'when' is only implied by the domain terms. | 2 / 3 |
| Trigger Term Quality | Includes strong natural keywords users would say: 'GitLab', 'issues', 'merge requests', 'pipelines', 'branches', 'project management'. These are terms users naturally use when working with GitLab. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive due to the specific mention of 'GitLab' and 'Rube MCP (Composio)'. This clearly differentiates it from generic CI/CD skills, GitHub skills, or other project management tools. | 3 / 3 |
| Total | | 11 / 12 Passed |
Implementation
Score: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is comprehensive in coverage but severely over-engineered for a single SKILL.md file. The biggest weakness is extreme verbosity with duplicated content (pitfalls repeated twice, parameters that RUBE_SEARCH_TOOLS would provide). It would benefit greatly from splitting into a concise overview SKILL.md with references to detailed parameter/pitfall files, and from adding concrete tool call examples instead of parameter lists.
Suggestions
Remove duplicated pitfalls — keep only the consolidated 'Known Pitfalls' section and remove per-workflow pitfall lists, or vice versa.
Strip most parameter documentation since the skill instructs Claude to call RUBE_SEARCH_TOOLS for current schemas — keep only non-obvious gotchas (like `assignee_ids: [0]`).
Add concrete tool call examples showing actual input parameters and expected response structure for at least one workflow.
Split detailed parameter references and the quick reference table into separate bundle files, keeping SKILL.md as a concise overview with navigation links.
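As a sketch of the tool-call suggestion above, an example for the issue-creation workflow could look like this. The tool slug and response fields are illustrative assumptions, not the actual Composio schema; the skill should still direct Claude to confirm the current schema via RUBE_SEARCH_TOOLS:

```json
{
  "tool": "GITLAB_CREATE_ISSUE",
  "input": {
    "project_id": 42,
    "title": "Login page returns 500 on empty password",
    "description": "Steps to reproduce: ...",
    "assignee_ids": [7]
  },
  "expected_response": {
    "iid": 101,
    "state": "opened",
    "web_url": "https://gitlab.example.com/group/project/-/issues/101"
  }
}
```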
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is extremely verbose at 250+ lines. There is massive redundancy: pitfalls are listed per-workflow AND repeated in a consolidated 'Known Pitfalls' section. Parameter lists duplicate what Claude could discover via RUBE_SEARCH_TOOLS (which the skill itself says to always call first). The quick reference table at the end repeats information already covered in each workflow section. | 1 / 3 |
| Actionability | The skill provides specific tool names, parameter names, and concrete values (e.g., `assignee_ids: [0]` to unassign), which is useful. However, there are no executable code examples or copy-paste-ready command sequences; everything is described rather than demonstrated with actual tool call examples showing input/output. | 2 / 3 |
| Workflow Clarity | Workflows are clearly sequenced with numbered steps and labeled as Required/Optional/Prerequisite, which is good. However, there are no validation checkpoints or feedback loops: no steps like 'verify the issue was created successfully' or 'if the API returns an error, check X'. For API operations that can fail (wrong IDs, permissions), this is a gap. | 2 / 3 |
| Progressive Disclosure | Everything is crammed into a single monolithic file with no bundle files or references to separate detailed documents. The parameter lists, pitfalls, and quick reference table could easily be split into separate files. The content is a wall of text that would benefit significantly from splitting into focused reference files. | 1 / 3 |
| Total | | 6 / 12 Passed |
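The missing validation checkpoints called out under Workflow Clarity can be illustrated with a small sketch. The response shape follows GitLab's REST v4 issue-creation endpoint (201 Created, body with `iid`), but the helper function and sample data are hypothetical:

```python
def verify_issue_created(status: int, body: dict, expected_title: str) -> dict:
    """Validation checkpoint: fail loudly instead of assuming the create succeeded."""
    if status != 201:  # GitLab returns 201 Created on a successful POST
        raise RuntimeError(f"Issue creation failed with HTTP {status}")
    if "iid" not in body or body.get("title") != expected_title:
        raise RuntimeError("Response missing expected fields; stop the workflow")
    return body

# Hypothetical response from POST /projects/:id/issues
response_status = 201
response_body = {"iid": 42, "title": "Fix login bug", "state": "opened"}

issue = verify_issue_created(response_status, response_body, "Fix login bug")
print(issue["iid"])  # → 42
```

A checkpoint like this after each create/update step is what turns a described workflow into one an agent can execute reliably.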
Validation
Score: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 Passed

| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
d065ead