
gtm-engineering

When the user wants to build GTM automation with code, design workflow architectures, use AI agents for GTM tasks, or implement the 'architecture over tools' principle. Also use when the user mentions 'GTM engineering,' 'GTM automation,' 'n8n,' 'Make,' 'Zapier,' 'workflow automation,' 'Clay API,' 'instruction stacks,' 'AI agents for GTM,' or 'revenue automation.' This skill covers technical GTM infrastructure from workflow design through agent orchestration. Do NOT use for technical implementation, code review, or software architecture.

Overall score: 67

Quality

58%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use

Optimize this skill with Tessl

npx tessl skill review --optimize ./packages/skills-catalog/skills/(gtm)/gtm-engineering/SKILL.md
SKILL.md
Quality
Evals
Security

Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has strong trigger term coverage with specific tool names and domain terminology, and it clearly addresses both 'what' and 'when' with explicit guidance. However, the specificity of capabilities could be improved with more concrete actions, and there is a notable contradiction: 'build GTM automation with code' appears as a trigger while 'technical implementation' is excluded, which could cause confusion in skill selection.

Suggestions

Resolve the contradiction between 'build GTM automation with code' and 'Do NOT use for technical implementation' — clarify what level of technical work this skill covers versus what should go to another skill.

Replace abstract phrases like 'implement the architecture over tools principle' and 'technical GTM infrastructure from workflow design through agent orchestration' with more concrete actions such as 'design multi-step lead enrichment workflows, configure AI agent pipelines for outbound prospecting, map data flows between GTM tools.'

Dimension / Reasoning / Score

Specificity

The description names the domain (GTM automation) and mentions some actions like 'build GTM automation with code,' 'design workflow architectures,' 'use AI agents for GTM tasks,' but these are somewhat abstract rather than concrete, specific actions. Phrases like 'implement the architecture over tools principle' and 'technical GTM infrastructure from workflow design through agent orchestration' are more conceptual than actionable.

2 / 3

Completeness

The description clearly answers both 'what' (build GTM automation, design workflow architectures, use AI agents for GTM tasks, implement architecture over tools principle) and 'when' with explicit trigger guidance ('When the user wants to...' and 'Also use when the user mentions...'). It also includes negative boundaries ('Do NOT use for...').

3 / 3

Trigger Term Quality

Excellent coverage of natural trigger terms users would say: 'GTM engineering,' 'GTM automation,' 'n8n,' 'Make,' 'Zapier,' 'workflow automation,' 'Clay API,' 'instruction stacks,' 'AI agents for GTM,' 'revenue automation.' These are specific tool names and domain terms that users would naturally mention.

3 / 3

Distinctiveness Conflict Risk

The description carves out a niche with GTM-specific terms and tool names, and the 'Do NOT use for technical implementation, code review, or software architecture' exclusion helps. However, there's potential overlap with general workflow automation skills or coding skills, and the contradiction between 'build GTM automation with code' and 'Do NOT use for technical implementation' creates confusion about boundaries.

2 / 3

Total: 10 / 12 (Passed)

Implementation

35%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is comprehensive in coverage but severely over-engineered for a SKILL.md file. It reads more like a knowledge base article or training document than an actionable instruction set for Claude. The bulk of content—role definitions, career trajectories, detailed platform pricing tables—is either already known to Claude or should be offloaded to reference files, while the actually actionable guidance (how to design and build a specific workflow) is relatively thin and lacks executable examples.

Suggestions

Cut sections 1 (GTM Engineer Role) and 3 (Platform Comparison) down to 2-3 sentences each with references to external files; Claude doesn't need career trajectory information or detailed pricing tables to build automations.

Add executable code examples: a sample n8n workflow JSON snippet, a webhook handler for lead routing, or a Python enrichment waterfall implementation that Claude can adapt.
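To make the suggestion concrete, here is a minimal sketch of what a Python enrichment waterfall could look like. This is illustrative only: the provider names and lookup functions are hypothetical stubs, not real APIs from the skill or from Clay.

```python
# Hypothetical enrichment waterfall: try providers in order until one
# returns an email. Provider lookups here are placeholder lambdas.
def enrich_email(lead, providers):
    for name, lookup in providers:
        try:
            email = lookup(lead)
        except Exception:
            continue  # provider error: fall through to the next source
        if email:
            return {"email": email, "source": name}
    return {"email": None, "source": None}

providers = [
    ("primary_stub", lambda lead: None),        # first source misses
    ("fallback_stub", lambda lead: "a@b.com"),  # second source hits
]

result = enrich_email({"domain": "b.com"}, providers)
# result == {"email": "a@b.com", "source": "fallback_stub"}
```

A snippet at roughly this level of detail, with real provider calls substituted in, is the kind of copy-paste-ready example the suggestion asks the skill to include.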

Move the platform comparison tables and role comparison tables to 'references/platform-comparison.md' and keep only the decision tree in the main file.

Add explicit validation steps to the workflow design process: e.g., 'After designing the instruction stack, validate Layer 1 scoring by running 20 historical leads through the criteria and checking accuracy before proceeding to Layer 2.'
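The validation step above can be sketched in code. Assuming a hypothetical Layer 1 scoring function and a small set of historical leads with known outcomes (both are placeholders, not drawn from the skill itself):

```python
# Sketch of the suggested Layer 1 validation: score historical leads with
# the instruction-stack criteria and compare predictions to known outcomes.
# `score_lead` and the threshold are illustrative placeholders.
def score_lead(lead):
    score = 0
    if lead.get("employees", 0) >= 50:
        score += 1
    if lead.get("industry") == "saas":
        score += 1
    return score

def validate_layer1(historical_leads, threshold=2):
    hits = sum(
        1 for lead in historical_leads
        if (score_lead(lead) >= threshold) == lead["converted"]
    )
    return hits / len(historical_leads)

leads = [
    {"employees": 120, "industry": "saas", "converted": True},
    {"employees": 10, "industry": "retail", "converted": False},
    {"employees": 80, "industry": "saas", "converted": True},
    {"employees": 200, "industry": "saas", "converted": False},
]
accuracy = validate_layer1(leads)
# accuracy == 0.75 on this sample; gate Layer 2 on an agreed bar, e.g. >= 0.8
```

The point of the checkpoint is the gate, not the scoring logic: Layer 2 work only starts once measured accuracy on historical leads clears the bar.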

Dimension / Reasoning / Score

Conciseness

Extremely verbose. Extensive sections explaining what GTM engineers do, career trajectories, role comparisons vs adjacent roles, and detailed platform comparison tables are information Claude already knows or can infer. The instruction stack concept explanation is padded with obvious details. The content runs to more than 400 lines when it could be under 100 with references.

1 / 3

Actionability

Provides structured frameworks (instruction stack layers, decision trees, platform comparisons) and some concrete patterns (persistent context schema, feedback loop table), but lacks executable code examples, specific API calls, or copy-paste-ready workflow configurations. The examples section at the bottom describes outcomes rather than providing actual implementation steps.

2 / 3

Workflow Clarity

The 'Before Starting' discovery checklist is well-structured, and the instruction stack layers provide a clear conceptual sequence. However, there are no explicit validation checkpoints, no feedback loops for error recovery during the build process itself, and the troubleshooting section is disconnected from a step-by-step workflow. The platform selection decision tree is helpful but doesn't constitute a build workflow.
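As a rough illustration of the missing checkpoints and error-recovery loop this reasoning describes, a build workflow could be structured as named steps, each gated by a check before the next step runs. Step names and checks below are invented for the sketch, not taken from the skill:

```python
# Minimal sketch of a checkpointed build loop: each step's output is
# validated before the next step runs, so a failure stops the build
# early and reports where it broke instead of compounding the error.
def run_with_checkpoints(steps):
    completed = []
    for name, action, check in steps:
        result = action()
        if not check(result):
            return {"status": "failed", "at": name, "completed": completed}
        completed.append(name)
    return {"status": "ok", "completed": completed}

steps = [
    ("design_stack", lambda: {"layers": 3}, lambda r: r["layers"] >= 1),
    ("score_sample", lambda: {"accuracy": 0.9}, lambda r: r["accuracy"] >= 0.8),
]
outcome = run_with_checkpoints(steps)
# outcome == {"status": "ok", "completed": ["design_stack", "score_sample"]}
```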

2 / 3

Progressive Disclosure

References to 'references/implementation-guide.md' and 'references/quick-reference.md' show some progressive disclosure, and the related skills table is well-organized. However, the main file contains massive amounts of content that should be in reference files (the entire platform comparison section, the role comparison tables, the architecture vs tools framework). The overview itself is far too heavy.

2 / 3

Total: 7 / 12 (Passed)

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: tech-leads-club/agent-skills (Reviewed)

