Lints, tests, simulates, and formats Fastly VCL code using the falco tool. Also serves as the authoritative VCL reference via the falco Go source, which implements Fastly's full VCL dialect. Use when validating VCL syntax, running VCL linting, testing VCL locally, simulating VCL request handling, formatting VCL files, writing VCL unit tests with assertions, debugging VCL logic errors, looking up VCL function signatures or variable scopes, understanding VCL subroutine behavior, or running `falco lint`/`falco simulate`/`falco test`/`falco fmt`. Also applies when working with VCL syntax errors, type mismatches in VCL, choosing which VCL subroutine to use, or setting up a local VCL development and testing environment.
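For orientation, the commands named above operate on ordinary Fastly VCL files. A minimal sketch of such a file (the backend host, file name, and header name are illustrative assumptions, not part of the skill):

```vcl
# main.vcl -- minimal sketch; backend and header names are hypothetical
backend origin {
  .host = "origin.example.com";
  .port = "443";
}

sub vcl_recv {
  #FASTLY recv
  if (req.url ~ "^/assets/") {
    set req.http.X-Cache-Policy = "static";
  }
}
```

A file like this is the input to the listed commands, e.g. `falco lint main.vcl`, `falco test main.vcl`, `falco simulate main.vcl`, and `falco fmt main.vcl`.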
Quality: 86% (Does it follow best practices?)
Impact: — (no eval scenarios have been run)
Advisory: suggest reviewing before use
Quality
Discovery
100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that thoroughly covers specific capabilities, provides comprehensive trigger terms, explicitly states both what the skill does and when to use it, and occupies a clearly distinct niche. The description uses proper third-person voice throughout and includes both high-level actions and specific command-level triggers that users would naturally reference.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific concrete actions: lints, tests, simulates, formats VCL code, serves as VCL reference, validates syntax, runs linting, tests locally, simulates request handling, formats files, writes unit tests with assertions, debugs logic errors, looks up function signatures/variable scopes. | 3 / 3 |
| Completeness | Clearly answers both 'what' (lints, tests, simulates, formats VCL code, serves as VCL reference) and 'when' with an explicit 'Use when...' clause covering a comprehensive list of trigger scenarios including validating syntax, running linting, testing, debugging, looking up references, and working with specific error types. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'VCL', 'Fastly VCL', 'falco', 'VCL syntax', 'VCL linting', 'VCL unit tests', 'falco lint', 'falco simulate', 'falco test', 'falco fmt', 'VCL syntax errors', 'type mismatches in VCL', 'VCL subroutine', plus specific command invocations users would type. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive with a clear niche: Fastly VCL code using the falco tool. The combination of VCL-specific terminology, falco-specific commands, and Fastly-specific context makes it extremely unlikely to conflict with other skills. | 3 / 3 |
| Total |  | 12 / 12 Passed |
Implementation
72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured skill with strong actionability and excellent progressive disclosure through its reference table. The main weaknesses are some redundancy between the two source-code-as-reference sections and the lack of an explicit multi-step workflow with validation checkpoints (e.g., lint → fix → re-lint → test → simulate). The common VCL issues section is a valuable addition that demonstrates domain expertise.
Suggestions
Consolidate the two 'source code as reference' sections into one to reduce redundancy and improve conciseness.
Add an explicit end-to-end workflow with validation checkpoints, e.g., 'lint → fix errors → re-lint until clean → run tests → simulate → deploy', with a feedback loop for error recovery.
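The suggested end-to-end loop can be sketched as a small shell wrapper. This is a sketch, not part of the skill: the `run_pipeline` function name is hypothetical, and the loop structure (re-lint until clean, then test, then simulate) is the point rather than the exact commands.

```shell
#!/bin/sh
# Sketch of the suggested lint -> fix -> re-lint -> test -> simulate loop.
# The three arguments are the lint, test, and simulate commands to run.
run_pipeline() {
  lint_cmd="$1"; test_cmd="$2"; sim_cmd="$3"
  until $lint_cmd; do
    # In an agent workflow this is where the reported errors get fixed
    # before looping back to re-lint; here we just stop and report.
    echo "lint failed: fix the reported errors, then re-lint" >&2
    return 1
  done
  # Lint is clean: run the test suite, and simulate only if tests pass.
  $test_cmd && $sim_cmd
}

# Usage with falco (commands quoted so each runs as one step):
# run_pipeline "falco lint main.vcl" "falco test main.vcl" "falco simulate main.vcl"
```

The deploy step is deliberately left out of the sketch, since deployment tooling varies by setup.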
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Generally efficient but has some redundancy — the 'Source Code as VCL Reference' quick lookup table largely duplicates the earlier 'Using Falco Source as VCL Reference' section, and the trigger/scope section restates what the YAML description already covers. Some explanatory prose ('Equally important, the falco source code is the most complete machine-readable specification...') could be tightened. | 2 / 3 |
| Actionability | Provides fully executable commands for every workflow (lint, test, simulate, format, terraform), concrete configuration examples in YAML, specific common VCL pitfalls with exact fixes, and precise source file paths for reference lookups. Everything is copy-paste ready. | 3 / 3 |
| Workflow Clarity | Individual commands are clear, but there's no explicit multi-step workflow with validation checkpoints — e.g., no 'lint → fix → test → simulate → deploy' sequence with feedback loops. The 'Common VCL Issues' section helps but isn't integrated into a validate-fix-retry workflow. For a tool that catches errors before deployment, a lint-then-fix loop should be explicit. | 2 / 3 |
| Progressive Disclosure | Excellent structure with a clear overview in the main file and a well-organized references table pointing to 8 topic-specific files with 'Use when...' guidance. References are one level deep and clearly signaled. The source code lookup table provides a second navigation path for reference questions. | 3 / 3 |
| Total |  | 10 / 12 Passed |
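As a concrete instance of the test step discussed above, a falco unit test is itself a VCL file. The following is a sketch, assuming falco's documented `@scope`/`@suite` annotations and its `testing`/`assert` helpers; the subroutine name, URL, and header name are hypothetical:

```vcl
// @scope: recv
// @suite: cache policy
sub test_static_assets_set_policy {
  set req.url = "/assets/app.js";
  testing.call_subroutine("vcl_recv");
  assert.equal(req.http.X-Cache-Policy, "static");
}
```

In a lint-then-fix workflow, a suite like this would run via `falco test` after linting is clean and before simulation.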
Validation
100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
e0f4205