
workflow-engine

Machine-readable workflow DAG for the multi-step agent pipeline. Defines node types, edge conditions, gates, and fan-out patterns. USE FOR: Orchestrator step routing, resume-from-graph, workflow validation. DO NOT USE FOR: Azure infrastructure, code generation, troubleshooting.


Quality: 66% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl:

npx tessl skill review --optimize ./.github/skills/workflow-engine/SKILL.md

Quality

Discovery: 75%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is well-structured, with explicit USE FOR and DO NOT USE FOR clauses that clearly delineate scope, making it strong on completeness and distinctiveness. However, the language is heavily technical and jargon-laden, which may limit trigger-term matching for users who phrase requests more naturally. The capabilities listed read more as structural definitions than as concrete, user-facing actions.

Suggestions

Add more natural language trigger terms that users might actually say, such as 'pipeline steps', 'workflow graph', 'agent routing', 'step dependencies', or 'execution flow'.

Make the actions more concrete and user-oriented, e.g., 'Parses and validates multi-step workflow graphs, routes orchestrator steps, and enables resuming pipelines from specific nodes' instead of describing structural elements.
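Applied together, the two suggestions above might yield a description along these lines. This is a purely hypothetical sketch of SKILL.md frontmatter; the exact field names and format depend on the skill spec in use:

```yaml
# Hypothetical SKILL.md frontmatter; field names are assumptions, not the skill's actual file
name: workflow-engine
description: >
  Parses and validates multi-step workflow graphs (pipeline steps, step
  dependencies, execution flow), routes orchestrator steps, and resumes
  pipelines from specific nodes in the workflow DAG.
  USE FOR: workflow graph validation, agent routing, resume-from-graph.
  DO NOT USE FOR: Azure infrastructure, code generation, troubleshooting.
```

The body text keeps the DO NOT USE FOR clause intact while folding in the more natural trigger terms the review recommends.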

Dimension scores:

- Specificity (2/3): Names the domain (workflow DAG, agent pipeline) and some actions (step routing, resume-from-graph, workflow validation), but the actions are somewhat abstract and not fully concrete: 'defines node types, edge conditions, gates, and fan-out patterns' describes structure rather than specific user-facing operations.

- Completeness (3/3): Clearly answers both 'what' (machine-readable workflow DAG defining node types, edge conditions, gates, fan-out patterns) and 'when' (USE FOR: orchestrator step routing, resume-from-graph, workflow validation), with explicit positive and negative trigger guidance via the DO NOT USE FOR clause.

- Trigger Term Quality (2/3): Includes some relevant technical terms like 'workflow DAG', 'orchestrator', 'fan-out patterns', and 'workflow validation', but these are fairly specialized jargon. Users might naturally say 'workflow', 'pipeline', or 'routing', while terms like 'resume-from-graph' and 'edge conditions' are less likely to appear in natural user requests.

- Distinctiveness / Conflict Risk (3/3): Highly distinctive, with a clear niche around workflow DAG definitions and agent pipeline orchestration. The explicit DO NOT USE FOR clause (Azure infrastructure, code generation, troubleshooting) further reduces conflict risk with other skills.

Total: 10 / 12. Passed.

Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is well-structured with good progressive disclosure and clear reference navigation. Its main weaknesses are the lack of concrete executable examples (no JSON schema snippet or actual code), some unnecessary explanation of basic concepts like DAGs, and missing validation/error-handling steps in the orchestrator workflow protocol.

Suggestions

Include a concrete snippet of the workflow-graph.json schema (even a partial example with 2-3 nodes) so the orchestrator knows the exact structure without needing to load the file first.
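As a purely illustrative sketch of what such a snippet might look like (the real schema lives in workflow-graph.json and may differ; every field name here is an assumption):

```json
{
  "nodes": [
    { "id": "plan", "type": "agent-step" },
    { "id": "review-gate", "type": "gate" },
    { "id": "deploy", "type": "agent-step" }
  ],
  "edges": [
    { "from": "plan", "to": "review-gate" },
    { "from": "review-gate", "to": "deploy", "condition": "approved" }
  ]
}
```

Even a partial example like this lets the orchestrator route steps without first loading the full file.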

Add validation checkpoints to the orchestrator protocol, e.g., 'Validate graph has no cycles before execution' and 'If apex-recall fails, retry or report error state'.
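The no-cycles checkpoint could be sketched as a Kahn's-algorithm pass over the edge list. This is a minimal sketch under assumptions: the node/edge representation below is illustrative, not the skill's actual schema.

```python
from collections import defaultdict, deque

def is_acyclic(nodes, edges):
    """Return True if the directed graph (nodes, edges) has no cycles.

    Uses Kahn's algorithm: repeatedly remove nodes with in-degree zero;
    if every node can be removed, the graph is a valid DAG.
    """
    indegree = {n: 0 for n in nodes}
    successors = defaultdict(list)
    for src, dst in edges:
        successors[src].append(dst)
        indegree[dst] += 1

    queue = deque(n for n in nodes if indegree[n] == 0)
    visited = 0
    while queue:
        node = queue.popleft()
        visited += 1
        for nxt in successors[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)

    # All nodes drained means no cycle was left holding in-degree > 0.
    return visited == len(nodes)

# A linear pipeline passes; a back-edge fails.
print(is_acyclic(["plan", "gate", "deploy"], [("plan", "gate"), ("gate", "deploy")]))  # True
print(is_acyclic(["a", "b"], [("a", "b"), ("b", "a")]))  # False
```

Running this check once, before execution begins, turns a malformed graph into an immediate, explainable failure instead of a stalled pipeline.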

Remove or condense the DAG model explanation table—Claude already knows what nodes, edges, and DAGs are. Focus only on the domain-specific semantics (gate, fan-out, validation types).

Dimension scores:

- Conciseness (2/3): The tables defining node types, edge conditions, and the DAG model are reasonably efficient, but some content explains concepts Claude already understands (e.g., what a DAG is, what nodes and edges are). The 'Core Concepts' section could be tightened, since these are standard graph-theory terms.

- Actionability (2/3): The 'Reading the Graph' protocol provides a clear step-by-step algorithm, and the IaC routing logic is specific. However, there is no executable code, only pseudocode in a text block, and the actual workflow graph is deferred to an external JSON file without showing its schema or a concrete example snippet.

- Workflow Clarity (2/3): The orchestrator protocol lists clear sequential steps with branching logic (gate, fan-out, skip), but lacks explicit validation checkpoints and error-recovery feedback loops. There is no guidance on what to do if the graph is malformed, if apex-recall fails, or how to verify successful step completion before proceeding.

- Progressive Disclosure (3/3): The skill provides a clear overview with well-organized tables and a concise reference index pointing to one-level-deep external files (workflow-graph.json, orchestrator-handoff-guide.md, subagent-integration.md). Navigation is easy and references are clearly signaled.

Total: 9 / 12. Passed.
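One way to close the error-recovery gap flagged above is to wrap each step invocation in a bounded retry that surfaces an explicit error state instead of silently proceeding. A sketch under stated assumptions: `run_step` and the step name are hypothetical stand-ins, not part of the skill's actual interface.

```python
def run_with_retry(run_step, step_id, max_attempts=3):
    """Invoke a workflow step, retrying on failure.

    Returns a status record either way, so the orchestrator can branch on
    'error' rather than continuing past a failed step.
    """
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return {"step": step_id, "status": "ok", "result": run_step(step_id)}
        except RuntimeError as exc:
            last_error = exc  # transient failure: retry up to max_attempts
    # Retries exhausted: report an explicit error state.
    return {"step": step_id, "status": "error", "error": str(last_error)}

# Example: a step that fails once, then succeeds on retry.
calls = {"n": 0}
def flaky(step_id):
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("transient failure")
    return "done"

print(run_with_retry(flaky, "apex-recall"))  # {'step': 'apex-recall', 'status': 'ok', 'result': 'done'}
```

The same wrapper doubles as a completion check: a step only counts as done when it returns a record with status 'ok'.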

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation checks: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: jonathan-vella/azure-agentic-infraops (Reviewed)

