When the user wants to build GTM automation with code, design workflow architectures, use AI agents for GTM tasks, or implement the 'architecture over tools' principle. Also use when the user mentions 'GTM engineering,' 'GTM automation,' 'n8n,' 'Make,' 'Zapier,' 'workflow automation,' 'Clay API,' 'instruction stacks,' 'AI agents for GTM,' or 'revenue automation.' This skill covers technical GTM infrastructure from workflow design through agent orchestration. Do NOT use for technical implementation, code review, or software architecture.
Overall score: 67

Quality: 58% (Does it follow best practices?)

Impact: Pending. No eval scenarios have been run.

Advisory: Suggest reviewing before use.
Optimize this skill with Tessl:

    npx tessl skill review --optimize ./packages/skills-catalog/skills/(gtm)/gtm-engineering/SKILL.md

Quality
Discovery
82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description has strong trigger term coverage with many specific tool names and domain keywords, and it clearly addresses both 'what' and 'when' with explicit guidance including exclusions. However, the capability descriptions are somewhat abstract rather than listing concrete actions, and there is a tension between mentioning 'build GTM automation with code' while excluding 'technical implementation,' which could cause confusion during skill selection.
Suggestions
Replace abstract capability phrases like 'implement the architecture over tools principle' with concrete actions such as 'design n8n/Make/Zapier workflow blueprints, configure Clay API enrichment pipelines, build AI agent orchestration sequences'
Resolve the contradiction between 'build GTM automation with code' and 'Do NOT use for technical implementation' — clarify the boundary more precisely (e.g., 'Do NOT use for general-purpose software engineering, code review, or non-GTM system architecture')
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description names the domain (GTM automation) and mentions some actions like 'build GTM automation with code,' 'design workflow architectures,' 'use AI agents for GTM tasks,' but these are somewhat abstract rather than concrete, specific actions. Terms like 'architecture over tools principle' and 'instruction stacks' hint at specifics but aren't fully explained as concrete capabilities. | 2 / 3 |
| Completeness | The description clearly answers both 'what' (build GTM automation, design workflow architectures, use AI agents for GTM tasks, implement architecture over tools principle) and 'when' (explicit 'Use when' triggers and even a 'Do NOT use' exclusion clause). The trigger guidance is explicit and well-structured. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'GTM engineering,' 'GTM automation,' 'n8n,' 'Make,' 'Zapier,' 'workflow automation,' 'Clay API,' 'instruction stacks,' 'AI agents for GTM,' 'revenue automation.' These are specific tool names and domain phrases a user would naturally mention. | 3 / 3 |
| Distinctiveness / Conflict Risk | The description carves out a niche around GTM automation and names specific tools (n8n, Make, Zapier, Clay API), which helps distinctiveness. However, the 'Do NOT use for technical implementation, code review, or software architecture' exclusion creates confusion since the description also says 'build GTM automation with code,' and the boundary between 'workflow design' and 'software architecture' could be blurry, risking overlap with general coding or architecture skills. | 2 / 3 |
| Total | | 10 / 12 (Passed) |
Implementation
35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill is comprehensive in scope but severely over-indexed on conceptual explanation rather than actionable instruction. The role definitions, career trajectory, and detailed platform comparison tables consume enormous token budget explaining things Claude already knows or that belong in reference files. The strongest sections are the discovery questions, troubleshooting patterns, and examples, but they're buried in a wall of descriptive content.
Suggestions
Move the platform comparison tables, role comparison tables, and career trajectory content to reference files, keeping only the decision tree and a 2-3 line summary in the main SKILL.md
Add executable code examples for common GTM automation patterns (e.g., a webhook handler for lead routing, an n8n workflow JSON snippet, a Clay API enrichment call)
Add an explicit step-by-step workflow for designing and deploying a GTM automation with validation checkpoints (e.g., 1. Discovery → 2. Architecture design → 3. Build MVP workflow → 4. Validate with test data → 5. Monitor metrics → 6. Iterate)
Trim the instruction stack and persistent context sections to just the structural diagrams and key rules, removing the paragraph-level explanations of each layer that Claude can infer
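To make the second suggestion above concrete, here is a minimal sketch of the kind of executable lead-routing example the skill could include. All field names, thresholds, and segment labels here are hypothetical illustrations, not taken from the skill itself:

```python
def route_lead(lead: dict) -> str:
    """Route an inbound lead to a sales segment based on firmographics.

    Thresholds and segment names are illustrative placeholders.
    """
    employees = lead.get("employee_count", 0)
    estimated_arr = lead.get("estimated_arr", 0)

    if employees >= 1000 or estimated_arr >= 50_000_000:
        return "enterprise"
    if employees >= 100:
        return "mid-market"
    return "smb"


def handle_webhook(payload: dict) -> dict:
    """Minimal webhook-handler body: validate the payload, route the lead,
    and return a result suitable for a downstream n8n/Make/Zapier step."""
    if "email" not in payload:
        return {"status": "rejected", "reason": "missing email"}
    return {"status": "routed", "segment": route_lead(payload)}
```

In a real deployment this logic would sit behind a framework route or an n8n Code node, with the enrichment fields (employee count, estimated ARR) populated by a prior Clay or similar enrichment step.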
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely verbose. The role comparison tables, career trajectory section, and detailed explanations of what GTM engineers do vs. adjacent roles are things Claude already knows or can infer. The instruction stack layers, persistent context patterns, and feedback loop tables are conceptual explanations rather than actionable instructions. The platform comparison section alone is hundreds of lines of information that could be a reference file. | 1 / 3 |
| Actionability | The content provides structured frameworks (instruction stack layers, decision trees, platform comparisons) and the examples/troubleshooting sections give concrete guidance. However, there is no executable code, no specific API calls, no copy-paste-ready workflow configurations. The 'Before Starting' discovery questions are actionable, but the bulk of the content describes concepts rather than providing executable steps. | 2 / 3 |
| Workflow Clarity | The 'Before Starting' section provides a clear discovery checklist, and the troubleshooting section gives cause-fix patterns. However, there are no explicit multi-step workflow sequences with validation checkpoints for actually building automations. The platform selection decision tree is helpful, but the actual implementation workflow (design → build → test → validate → deploy → monitor) is absent from the main content. | 2 / 3 |
| Progressive Disclosure | References to 'references/implementation-guide.md' and 'references/quick-reference.md' show appropriate delegation of detailed content. However, the main SKILL.md itself is monolithic, with massive inline tables and conceptual content (role comparisons, platform comparisons) that should be in reference files. The related skills table is well-structured, but the core content doesn't practice what progressive disclosure preaches. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.