Automate Altoviz tasks via Rube MCP (Composio). Always search tools first for current schemas.
Install with Tessl CLI
npx tessl i github:haniakrim21/everything-claude-code --skill altoviz-automation62
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
22%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description is too vague to effectively guide skill selection. It names specific tools/platforms but fails to explain what tasks can be automated or when Claude should select this skill. The lack of concrete actions and explicit trigger conditions makes it difficult to distinguish from other automation-related skills.
Suggestions
- Add specific concrete actions that can be performed (e.g., 'Create invoices, manage inventory, generate reports in Altoviz')
- Include a 'Use when...' clause with explicit triggers (e.g., 'Use when the user mentions Altoviz, billing automation, or Composio integrations')
- Replace vague 'tasks' with enumerated capabilities that users would naturally request
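Putting those suggestions together, a revised frontmatter description might look like the following sketch. The wording is illustrative only; the listed capabilities have not been verified against the actual Altoviz toolkit and should be checked via a tool search before shipping:

```yaml
# Illustrative description rewrite — capabilities are assumptions, not confirmed.
description: >
  Create invoices, manage customers and inventory, and generate reports in
  Altoviz via Rube MCP (Composio). Use when the user mentions Altoviz,
  Altoviz billing or invoicing, or asks to automate Altoviz workflows.
```

A description in this shape names concrete actions, carries a 'Use when...' clause, and keeps the product-name trigger terms the current description already has.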
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description uses vague language like 'Automate Altoviz tasks' without specifying what concrete actions can be performed. No specific capabilities are listed beyond generic 'tasks'. | 1 / 3 |
| Completeness | The 'what' is extremely vague ('Automate Altoviz tasks') and there is no 'Use when...' clause or explicit trigger guidance. The instruction to 'search tools first' is implementation guidance, not usage triggers. | 1 / 3 |
| Trigger Term Quality | Contains some relevant keywords ('Altoviz', 'Rube MCP', 'Composio'), but these are technical product names rather than natural terms users would say. Missing common task-related trigger terms. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific product names (Altoviz, Rube MCP, Composio) provide some distinctiveness, but 'automate tasks' is generic enough to potentially conflict with other automation skills. | 2 / 3 |
| Total | | 6 / 12 — Passed |
Implementation
85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill that efficiently guides Claude through Altoviz automation via Rube MCP. The workflow is clear with proper validation checkpoints, and the content respects token budget. The main weakness is that tool call examples, while structured, could be more concrete with realistic parameter values rather than placeholder comments.
Suggestions
- Replace placeholder comments like `/* schema-compliant args from search results */` with a concrete example showing actual Altoviz-specific arguments
- Add one complete end-to-end example showing a real Altoviz task from discovery through execution with actual parameter values
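As a minimal sketch of what a concrete example could look like — the tool slug, argument names, and values below are all hypothetical, since real slugs and schemas must come from a live tool search — a fully populated call payload with a pre-flight check might be:

```python
# Hypothetical Rube MCP call payload. "ALTOVIZ_CREATE_INVOICE" and every
# argument field are illustrative placeholders, not the real Altoviz schema.
tool_call = {
    "tool_slug": "ALTOVIZ_CREATE_INVOICE",  # assumed slug from search results
    "arguments": {
        "customer_id": "CUST-1042",          # illustrative values
        "currency": "EUR",
        "lines": [
            {"description": "Consulting, 10h", "quantity": 10, "unit_price": 95.0},
        ],
    },
}

def validate_call(call: dict) -> bool:
    """Minimal pre-flight check: a slug is present and arguments are non-empty."""
    return bool(call.get("tool_slug")) and bool(call.get("arguments"))

print(validate_call(tool_call))  # True
```

Even a placeholder-free example like this gives an agent a template to mirror once the real schema is retrieved.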
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, avoiding explanations of concepts Claude already knows. Every section serves a purpose with no padding or unnecessary context about what Altoviz or Composio are. | 3 / 3 |
| Actionability | Provides concrete tool call patterns with specific parameter structures, but the examples are pseudocode-like representations rather than fully executable code. The tool calls show structure but lack complete, copy-paste-ready examples with realistic values. | 2 / 3 |
| Workflow Clarity | Clear 3-step workflow with an explicit validation checkpoint (Step 2: Check Connection must show ACTIVE before proceeding). The Known Pitfalls section reinforces validation requirements and the sequence is unambiguous. | 3 / 3 |
| Progressive Disclosure | Well organized, with clear sections progressing from prerequisites to setup to workflow to pitfalls. The external reference to toolkit docs is one level deep and clearly signaled. A quick-reference table provides efficient navigation for common operations. | 3 / 3 |
| Total | | 11 / 12 — Passed |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 10 / 11 Passed |
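As a sketch of how the `frontmatter_unknown_keys` warning might be resolved — `author` here is an illustrative unknown key, not necessarily the one the validator flagged — the offending key can be moved under `metadata` as the warning suggests:

```yaml
# Hypothetical fix: move unrecognized top-level keys under metadata.
---
name: altoviz-automation62
description: Automate Altoviz tasks via Rube MCP (Composio).
metadata:
  author: haniakrim21   # moved here from the top level
---
```

Alternatively, simply delete the unknown key if nothing depends on it.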
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.