Google Model Armor: Sanitize a model response through a Model Armor template.
62% · Does it follow best practices?
Impact: Pending · No eval scenarios have been run
Passed · No known issues
Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/gws-modelarmor-sanitize-response/SKILL.md

Quality
Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specific Google Cloud product (Model Armor), which gives it strong distinctiveness, but it lacks depth in explaining concrete capabilities and entirely omits a 'Use when...' clause. The trigger terms are somewhat technical and don't cover the natural-language variations a user might employ when asking for content safety or response sanitization.
Suggestions
Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user needs to sanitize, filter, or safety-check LLM responses using Google Cloud Model Armor.'
Expand the capability description with more concrete actions, e.g., 'Sanitizes model responses by applying safety filters, detecting harmful content, and enforcing content policies through a Google Model Armor template.'
Include natural keyword variations users might say, such as 'content safety', 'response filtering', 'content moderation', 'Google Cloud', or 'safety policy'.
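Taken together, these suggestions might yield frontmatter like the following sketch. The field names and wording are illustrative assumptions, not the skill's actual metadata:

```yaml
# Hypothetical revision of the skill's description; names and phrasing
# are assumptions for illustration, not taken from the skill itself.
name: gws-modelarmor-sanitize-response
description: >-
  Sanitizes LLM responses through a Google Cloud Model Armor template:
  applies safety filters, detects harmful content, and enforces content
  policies. Use when the user needs to sanitize, filter, safety-check,
  or moderate model output with Google Cloud Model Armor.
```

A description shaped this way covers both the concrete actions and the natural trigger terms the review flags as missing.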
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Google Model Armor) and one action (sanitize a model response through a template), but doesn't list multiple concrete actions or elaborate on what sanitization entails (e.g., filtering harmful content, applying safety policies, redacting PII). | 2 / 3 |
| Completeness | Describes what it does (sanitize a model response through a Model Armor template) but has no explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric caps completeness at 2; the 'what' itself is also quite thin, bringing this to a 1. | 1 / 3 |
| Trigger Term Quality | Includes 'Model Armor', 'sanitize', 'model response', and 'template', which are relevant but somewhat technical. Missing natural user terms like 'content safety', 'filter response', 'safety check', 'Google Cloud', or 'content moderation' that users might naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific mention of 'Google Model Armor' and 'Model Armor template' creates a clear, distinct niche that is unlikely to conflict with other skills. This is a very specific product/service reference. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, concise CLI skill that efficiently documents a single-purpose command. Its main weakness is slightly incomplete actionability: the examples could be more concrete, with realistic values and expected output. Overall, it is a strong skill that respects the token budget and provides clear navigation.
Suggestions
Add a concrete example with realistic (placeholder) values for the full template resource name and show the expected output or response format.
Include a brief example of the --json flag usage to make all flags actionable.
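The kind of concrete example the review asks for might look like the sketch below. The command name and flags are assumptions inferred from this review (the real CLI's interface may differ), and the template path is a placeholder; only the resource-name format (projects/…/locations/…/templates/…) follows Model Armor's documented convention. The invocation is built as a string so the sketch runs without the CLI installed:

```shell
# Placeholder template resource name in Model Armor's documented format.
TEMPLATE="projects/my-project/locations/us-central1/templates/my-template"

# Hypothetical invocation with --json; command name and flags are
# assumptions for illustration, not a verified CLI interface.
CMD="gws modelarmor sanitize-response --template $TEMPLATE --json"
echo "$CMD"  # prints the assembled command
```

Pairing such an example with the expected JSON response shape would close the actionability gap the review identifies.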
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Every token earns its place. No unnecessary explanations of what Model Armor is or how sanitization works conceptually. The content is lean, assumes Claude's competence, and focuses purely on usage specifics. | 3 / 3 |
| Actionability | Provides concrete CLI commands and flag descriptions, but lacks a complete executable example showing the full template resource name or expected output. The pipe example is useful but incomplete (model_cmd is undefined). No example of --json usage is provided. | 2 / 3 |
| Workflow Clarity | This is a simple, single-purpose skill (invoke a CLI command to sanitize a response). The single action is unambiguous, with clear flag documentation. No multi-step process or destructive operations require validation checkpoints. | 3 / 3 |
| Progressive Disclosure | Clean structure with a prerequisite reference to shared auth/flags, well-signaled See Also links to related skills, and content appropriately scoped to this single command. References are one level deep and clearly signaled. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | 10 / 11 Passed | |
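The metadata_field warning means the skill's metadata block contains a value that is not a plain string. A compliant block might look like the following sketch; the keys and values are illustrative assumptions, not the skill's actual metadata:

```yaml
# Illustrative only: 'metadata' as a flat string-to-string mapping,
# which is what the validator expects.
metadata:
  author: "example-maintainer"
  version: "1.0.0"  # quoted so YAML keeps the value a string, not a number
```

Unquoted numbers, booleans, or nested mappings under metadata are the usual causes of this warning.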
a3768d0
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.