Systematic architectural thinking for irreplaceable human capabilities - domain modeling, systems thinking, constraint navigation, and AI-aware problem decomposition. Use proactively when detecting architectural decisions, system design discussions, or multi-component planning.
Overall score: 47% — Does it follow best practices?
- Impact: —
- Eval scenarios: No eval scenarios have been run
- Validation: Passed
- Known issues: No known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./human-architect-mindset/skills/human-architect-mindset/SKILL.md`

Quality
Discovery — 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has good structural completeness with explicit 'what' and 'when' clauses, which is its strongest aspect. However, it relies heavily on abstract, buzzword-heavy language ('irreplaceable human capabilities,' 'AI-aware problem decomposition') rather than concrete, actionable tasks. The trigger terms are reasonable but could be more comprehensive and grounded in natural user language.
Suggestions
- Replace abstract phrases like 'irreplaceable human capabilities' and 'AI-aware problem decomposition' with concrete actions (e.g., 'design system architectures, define component boundaries, evaluate trade-offs between approaches, plan service interactions').
- Expand trigger terms to include more natural user phrases such as 'architecture review,' 'design decisions,' 'how should I structure,' 'microservices vs monolith,' 'API design,' or 'tech stack.'
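As an illustration of the first suggestion, here is a sketch of what a sharpened description could look like in the skill's YAML frontmatter. The wording and field values below are hypothetical, not the skill's actual frontmatter:

```yaml
---
name: human-architect-mindset
description: >
  Guides system architecture work: design system architectures, define
  component boundaries, evaluate trade-offs between approaches, and plan
  service interactions. Use when the user asks about architecture review,
  design decisions, "how should I structure this", microservices vs
  monolith, API design, or tech stack choices.
---
```

Note how every phrase maps to a task a user might actually type, rather than to an abstract capability.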
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (architectural thinking) and lists some actions like 'domain modeling, systems thinking, constraint navigation, and AI-aware problem decomposition,' but these are abstract concepts rather than concrete, actionable tasks. Terms like 'irreplaceable human capabilities' are vague fluff. | 2 / 3 |
| Completeness | Clearly answers both 'what' (domain modeling, systems thinking, constraint navigation, AI-aware problem decomposition) and 'when' ('Use proactively when detecting architectural decisions, system design discussions, or multi-component planning') with explicit trigger guidance. | 3 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'architectural decisions,' 'system design,' and 'multi-component planning' that users might naturally say. However, it misses common variations and practical terms users would use (e.g., 'architecture review,' 'design patterns,' 'microservices,' 'API design,' 'tech stack decisions'). | 2 / 3 |
| Distinctiveness / Conflict Risk | The focus on 'architectural thinking' and 'AI-aware problem decomposition' provides some distinctiveness, but terms like 'systems thinking' and 'constraint navigation' are broad enough to overlap with general planning, project management, or software engineering skills. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Implementation — 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is extremely ambitious in scope, attempting to cover architectural philosophy, a five-pillar framework, a complete process methodology, common mistakes, AI operational guidance, AND a full Spec Driven Development extension—all in a single file. While the conceptual framework is thoughtful, the content is far too verbose for a skill file, explains many concepts Claude already knows (what domain modeling is, what systems thinking is, what loyalty means), and lacks concrete executable artifacts. The document would benefit enormously from being split into multiple files with the SKILL.md serving as a concise overview.
Suggestions
- Reduce the SKILL.md to a concise overview (~100 lines) covering the five pillars briefly, the phase process as a checklist, and links to separate files for details (e.g., SDD.md, PILLARS.md, COMMON_MISTAKES.md).
- Remove philosophical/explanatory content that Claude already understands (e.g., the extended loyalty metaphor, explanations of what domain modeling or systems thinking are) and replace it with terse, actionable checklists.
- Add concrete, copy-paste-ready artifacts: an architecture decision record template, a constraint matrix template, a task decomposition template, or a sample Constitution file—rather than just listing questions to ask.
- Add explicit validation gates between phases (e.g., 'Do not proceed to Phase 3 until the system diagram from Phase 2 has been reviewed and confirmed by the user').
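One concrete artifact the suggestions call for is an architecture decision record. A minimal ADR template sketch follows; the section names are a widely used convention, not something prescribed by this skill or by Tessl:

```markdown
# ADR-NNN: <decision title>

## Status
Proposed | Accepted | Superseded by ADR-MMM

## Context
What constraint, requirement, or question forced this decision?

## Decision
The chosen approach, stated in one or two sentences.

## Consequences
Trade-offs accepted, follow-up work created, and what becomes harder.
```

Shipping a file like this alongside SKILL.md gives the agent something to copy and fill in, instead of a list of questions to ask.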
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose at ~500+ lines. Extensive philosophical content about 'loyalty' and 'betrayal' that Claude already understands conceptually. The SDD extension alone doubles the document length. Many sections explain obvious concepts (what domain modeling is, what systems thinking is) rather than providing novel, actionable guidance. | 1 / 3 |
| Actionability | Provides structured question lists and phase-based processes that are somewhat actionable, but lacks concrete executable examples. No code snippets, no template files, no specific commands. The guidance is mostly 'ask these questions' and 'follow these phases' rather than copy-paste-ready artifacts like architecture decision record templates or actual spec file formats. | 2 / 3 |
| Workflow Clarity | The five-phase Architect Process (Domain Discovery → Systems Analysis → Constraint Mapping → AI Decomposition → Solution Synthesis) is clearly sequenced, and the SDD phases are well-ordered. However, there are no explicit validation checkpoints or feedback loops between phases—no 'verify X before proceeding to Y' gates. For a skill involving architectural decisions (which are consequential and hard to reverse), missing validation steps is a significant gap. | 2 / 3 |
| Progressive Disclosure | This is a monolithic wall of text with no references to external files for detailed content. The SDD section, tool evaluation tables, self-learning patterns, and user-facing skills sections should all be in separate referenced documents. The 'Related Skills' section at the end hints at cross-references, but the main body dumps everything inline, making it overwhelming and hard to navigate. | 1 / 3 |
| Total | | 6 / 12 — Passed |
Validation — 81% (9 / 11 Passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure

| Criteria | Description | Result |
|---|---|---|
| skill_md_line_count | SKILL.md is long (802 lines); consider splitting into references/ and linking | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | 9 / 11 Passed | |
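The `skill_md_line_count` warning can be caught before publishing. A minimal pre-publish sketch, not the Tessl validator itself — the 100-line budget and file names here are hypothetical:

```shell
# Flag a SKILL.md that exceeds a line budget so detail can move into references/.
check_skill_length() {
  file="$1"
  limit="${2:-100}"
  lines=$(wc -l < "$file")
  if [ "$lines" -gt "$limit" ]; then
    echo "WARN: $file has $lines lines (limit $limit); split detail into references/"
  else
    echo "OK: $file has $lines lines"
  fi
}

# Demo against a throwaway file standing in for an overlong SKILL.md
seq 1 120 > /tmp/skill_demo.md
check_skill_length /tmp/skill_demo.md
```

Wiring a check like this into CI keeps the file from drifting back toward 802 lines after a cleanup.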