This skill should be used when the user asks to "design system architecture", "evaluate microservices vs monolith", "create architecture diagrams", "analyze dependencies", "choose a database", "plan for scalability", "make technical decisions", or "review system design". Use for architecture decision records (ADRs), tech stack evaluation, system design reviews, dependency analysis, and generating architecture diagrams in Mermaid, PlantUML, or ASCII format.
- Score: 89
- Quality: 78% (does it follow best practices?)
- Impact: 96%, 1.81x average score across 6 eval scenarios
- Status: Passed, no known issues
Quality
Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a strong skill description that clearly articulates both what the skill does and when it should be used, with rich trigger terms that match natural user language. The description covers a well-defined domain (system architecture and design) with specific actions and output formats. Minor improvement could come from adding a brief opening sentence summarizing the capability in third person before the trigger clause, but overall this is highly effective.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: design system architecture, evaluate microservices vs monolith, create architecture diagrams, analyze dependencies, choose a database, plan for scalability, make technical decisions, review system design, generate diagrams in Mermaid/PlantUML/ASCII. | 3 / 3 |
| Completeness | Explicitly answers both 'what' (architecture decision records, tech stack evaluation, system design reviews, dependency analysis, generating architecture diagrams) and 'when' (opens with 'This skill should be used when the user asks to...' followed by explicit trigger phrases). | 3 / 3 |
| Trigger Term Quality | Excellent coverage of terms users would naturally say: 'design system architecture', 'microservices vs monolith', 'architecture diagrams', 'choose a database', 'scalability', 'system design', 'ADRs', 'tech stack evaluation', 'Mermaid', 'PlantUML'. | 3 / 3 |
| Distinctiveness / Conflict Risk | Clearly occupies a distinct niche around system architecture and design decisions. The specific mentions of ADRs, microservices vs monolith, database selection, and diagram formats (Mermaid, PlantUML, ASCII) make it unlikely to conflict with general coding or documentation skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with strong progressive disclosure and useful decision frameworks, but it suffers from moderate verbosity (redundant command listings, explanatory fluff) and lacks validation and error-handling steps in its tool workflows. The decision workflows are reasonably actionable, but the tool commands assume script availability without any verification steps.
Suggestions
- Remove the redundant 'Common Commands' section, since all commands are already documented in each tool's Usage section, and cut the 'Tech Stack Coverage' and 'Getting Help' sections, which add little value for Claude.
- Add validation checkpoints to tool workflows: e.g., verify scripts exist before running, validate output format, and provide error recovery steps if a script fails.
- Remove the 'Solves:' lines from each tool section; Claude can infer when to use each tool from the description and context.
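The validation-checkpoint suggestion can be sketched concretely. The snippet below is a minimal POSIX-shell illustration, not part of the reviewed skill: the script path is borrowed from the review's example command, while the function names, fallback message, and Mermaid output check are hypothetical assumptions.

```shell
# Hypothetical pre-flight checkpoints for a skill's tool scripts.
# The script path, function names, and Mermaid check are illustrative
# assumptions, not commands defined by the reviewed skill.

check_tool() {
  # Confirm a required script exists before any workflow step depends on it.
  if [ -f "$1" ]; then
    echo "found: $1"
  else
    echo "missing: $1"
    return 1
  fi
}

validate_mermaid() {
  # A generated Mermaid file should open with a recognizable diagram type.
  if head -n 1 "$1" | grep -Eq '^(graph|flowchart|sequenceDiagram|classDiagram)'; then
    echo "valid: $1"
  else
    echo "invalid: $1"
  fi
}

# Checkpoint 1: verify the generator is present; on failure, fall back
# instead of running a command that cannot succeed.
check_tool "scripts/architecture_diagram_generator.py" \
  || echo "fallback: write the Mermaid diagram by hand"

# Checkpoint 2: validate the format of whatever output was produced.
printf 'flowchart TD\n  A --> B\n' > diagram.mmd
validate_mermaid diagram.mmd
```

The same pattern extends to the dependency-analysis and assessment tools: check the script, run it, validate the output, and name an explicit fallback for each failure mode.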
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is reasonably well-organized but includes some unnecessary verbosity. The 'Solves' descriptions, the 'Tech Stack Coverage' list, and the 'Getting Help' section add little value for Claude, and the repeated command listings (in both the tool sections and 'Common Commands') are redundant. However, the decision tables and workflows are information-dense and earn their tokens. | 2 / 3 |
| Actionability | The commands are concrete and copy-paste ready, but they reference scripts (e.g., `python scripts/architecture_diagram_generator.py`) that presumably must exist in the project; there is no indication of how to install or access these tools. The decision workflows provide good structured guidance but are more advisory than executable. The example outputs are helpful, but the skill assumes tool availability without verification. | 2 / 3 |
| Workflow Clarity | The decision workflows (database selection, architecture pattern selection) have clear step sequences, which is good. However, the tool-based workflows lack validation checkpoints: there is no guidance on verifying diagram correctness, validating dependency analysis results, or handling errors from the scripts. For the architecture assessment tool, there is no feedback loop for addressing identified issues. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure structure. The skill provides a clear table of contents, a concise quick start, and a well-organized reference table pointing to one-level-deep files (`references/architecture_patterns.md`, `references/system_design_workflows.md`, `references/tech_decision_guide.md`) with clear descriptions of when to load each file. | 3 / 3 |
| Total | | 9 / 12 (Passed) |
Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Skill structure validation: 11 / 11 checks passed, with no warnings or errors.