Generate UI designs and frontend code with Google Stitch via MCP. Use when asked to create screens, mockups, UI designs, or generate frontend code from text descriptions. Supports desktop, mobile, and tablet layouts.
Install with Tessl CLI
npx tessl i github:Dicklesworthstone/pi_agent_rust --skill stitch91
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery
100%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a well-crafted skill description that excels across all dimensions. It clearly specifies the tool (Google Stitch via MCP), lists concrete capabilities, includes an explicit 'Use when...' clause with natural trigger terms, and carves out a distinct niche for UI/frontend generation that won't conflict with other skills.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Generate UI designs', 'frontend code', supports 'desktop, mobile, and tablet layouts'. Names the specific tool (Google Stitch via MCP) and concrete outputs. | 3 / 3 |
| Completeness | Clearly answers both what ('Generate UI designs and frontend code with Google Stitch via MCP') and when ('Use when asked to create screens, mockups, UI designs, or generate frontend code from text descriptions'). | 3 / 3 |
| Trigger Term Quality | Includes natural keywords users would say: 'screens', 'mockups', 'UI designs', 'frontend code', 'text descriptions', plus device types 'desktop, mobile, tablet'. Good coverage of variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clear niche with distinct triggers: specifically about UI/frontend generation via Google Stitch MCP. The combination of 'UI designs', 'mockups', 'frontend code', and the specific tool makes it unlikely to conflict with general coding or design skills. | 3 / 3 |
| Total | | 12 / 12 Passed |
Implementation
79%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a strong, actionable skill with excellent concrete examples and efficient token usage. The main weaknesses are the lack of explicit validation steps in workflows (e.g., checking if generation succeeded before fetching code) and the monolithic structure that could benefit from splitting detailed reference content into separate files.
Suggestions

- Add explicit validation steps to workflows, e.g., 'Check get_screen status before calling fetch_screen_code' to handle generation failures gracefully.
- Consider splitting the prompt engineering examples and error reference into separate files (PROMPTS.md, ERRORS.md) to reduce main skill length.
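The first suggestion can be sketched as a simple polling loop. The tool names `get_screen` and `fetch_screen_code` come from the review above; the stub implementations below are hypothetical placeholders standing in for real MCP tool calls, and the status values ('complete', 'failed') are assumed rather than taken from the Stitch API:

```python
import time

def get_screen(screen_id: str) -> dict:
    # Stub: a real client would invoke the Stitch MCP server's get_screen tool.
    return {"status": "complete"}

def fetch_screen_code(screen_id: str) -> str:
    # Stub: a real client would invoke fetch_screen_code via MCP.
    return "<html>...</html>"

def generate_with_validation(screen_id: str, timeout_s: int = 180, poll_s: int = 10) -> str:
    """Poll get_screen until generation completes, then fetch the code.

    Mirrors the review's advice: verify generation succeeded before
    calling fetch_screen_code, and never retry a failed generation blindly.
    """
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = get_screen(screen_id)["status"]
        if status == "complete":
            return fetch_screen_code(screen_id)
        if status == "failed":
            raise RuntimeError(f"generation of {screen_id} failed; do not auto-retry")
        time.sleep(poll_s)  # generation typically takes 1-3 minutes
    raise TimeoutError(f"generation of {screen_id} did not finish in {timeout_s}s")
```

The 180-second timeout is an assumption derived from the skill's own note that generation takes 1-3 minutes.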
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is lean and efficient, presenting commands and parameters without explaining what UI design is or how MCP works. Every section provides actionable information without padding. | 3 / 3 |
| Actionability | Fully executable bash commands throughout with specific parameters, complete examples, and copy-paste-ready code blocks. The tool reference table and parameter documentation are concrete and specific. | 3 / 3 |
| Workflow Clarity | The iteration workflow section provides a clear sequence, but lacks explicit validation checkpoints. The note about generation taking 1-3 minutes and not retrying is helpful, but there is no feedback loop for handling failures or verifying successful generation before proceeding. | 2 / 3 |
| Progressive Disclosure | Content is well organized with clear sections and tables, but the skill is quite long (~200 lines) with all content inline. Some sections, such as the prompt engineering examples or the complete error reference, could be split into separate files for better navigation. | 2 / 3 |
| Total | | 10 / 12 Passed |
Validation
90%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| Total | | 10 / 11 Passed |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.