Google Model Armor: Sanitize a model response through a Model Armor template.
69 · 62%

Does it follow best practices?

- Impact: Pending (no eval scenarios have been run)
- Passed: no known issues

Optimize this skill with Tessl: `npx tessl skill review --optimize ./skills/gws-modelarmor-sanitize-response/SKILL.md`

Quality
Discovery: 40%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specific Google Cloud product (Model Armor) which gives it strong distinctiveness, but it lacks depth in explaining concrete capabilities and entirely omits a 'Use when...' clause. The trigger terms are somewhat narrow and technical, missing natural language variations users might employ when needing content safety or response sanitization.
Suggestions

- Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user needs to sanitize, filter, or safety-check LLM responses using Google Cloud Model Armor.'
- Expand the capability description with more specific actions, e.g., 'Sanitizes model responses by applying safety filters, content policies, and responsible AI checks through a Google Cloud Model Armor template.'
- Include natural keyword variations users might say, such as 'content safety', 'response filtering', 'Google Cloud', 'safety policy', or 'content moderation'.
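Taken together, the suggestions above could be applied as a revised skill description. This is a minimal sketch only: the YAML frontmatter layout and field names are assumptions about this skill's SKILL.md format, not taken from the skill itself.

```yaml
---
name: gws-modelarmor-sanitize-response
description: >
  Sanitizes model responses by applying safety filters, content policies,
  and responsible AI checks through a Google Cloud Model Armor template.
  Use when the user needs to sanitize, filter, safety-check, or moderate
  LLM responses (content safety, response filtering, content moderation)
  using Google Cloud Model Armor.
---
```

A description in this shape names the product, lists concrete actions, carries an explicit 'Use when...' clause, and includes the natural-language trigger terms the review flags as missing.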
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Google Model Armor) and one action (sanitize a model response through a template), but doesn't list multiple concrete actions or elaborate on what sanitization entails (e.g., filtering harmful content, applying safety policies, redacting PII). | 2 / 3 |
| Completeness | Describes what it does (sanitize a model response through a Model Armor template) but has no explicit 'Use when...' clause or equivalent trigger guidance, which per the rubric caps completeness at 2; the 'what' itself is also quite thin, bringing this to a 1. | 1 / 3 |
| Trigger Term Quality | Includes 'Model Armor', 'sanitize', 'model response', and 'template', which are relevant but somewhat technical. Missing natural user terms like 'content safety', 'filter response', 'safety check', 'Google Cloud', or 'content moderation' that users might naturally say. | 2 / 3 |
| Distinctiveness / Conflict Risk | The specific mention of 'Google Model Armor' and 'Model Armor template' creates a clear, distinct niche that is unlikely to conflict with other skills. This is a very specific product/service reference. | 3 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation: 85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, concise CLI skill that effectively documents a single-purpose command. Its main weakness is slightly incomplete examples: the --json flag lacks a concrete example, and the pipe usage is abbreviated. Overall it is a strong skill that respects the token budget and provides clear navigation.
Suggestions

- Add a concrete example showing the --json flag with a sample JSON request body to make all documented flags fully actionable.
- Complete the pipe example with a realistic command on the left side of the pipe to make it copy-paste ready.
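The two suggestions above could be sketched roughly as follows. This is a hypothetical sketch: the command name `gws modelarmor sanitize-response`, the `my-llm-cli` producer, and the JSON body shape are all assumptions inferred from the skill's directory name and this review, not verified against the skill's SKILL.md.

```shell
# 1) Concrete --json example: build a sample request body first.
#    (Body shape is an assumption -- check SKILL.md for the real schema.)
cat > /tmp/ma_request.json <<'EOF'
{
  "model_response_data": {
    "text": "Raw model output to be sanitized."
  }
}
EOF

# Hypothetical invocation (command name inferred, not run here):
#   gws modelarmor sanitize-response --json "$(cat /tmp/ma_request.json)"

# 2) Completed pipe example, with a realistic producer on the left:
#   my-llm-cli generate "Summarize the report" \
#     | gws modelarmor sanitize-response --json -

# Show the request body that was built:
cat /tmp/ma_request.json
```

Examples in this shape would make every documented flag copy-paste ready, which is what the Actionability dimension below penalizes.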
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Every token earns its place. No unnecessary explanations of what Model Armor is or how sanitization works conceptually. The content is lean, assumes Claude's competence, and focuses purely on usage specifics. | 3 / 3 |
| Actionability | Provides concrete CLI commands and a flag table, but the examples are incomplete: no full JSON request body example is shown despite the --json flag being documented, and the pipe example is truncated. The examples are illustrative but not fully copy-paste ready for all documented flags. | 2 / 3 |
| Workflow Clarity | This is a simple, single-purpose skill (invoke a CLI command to sanitize a response). The single action is unambiguous with clear flag documentation. No multi-step process or destructive operations require validation checkpoints. | 3 / 3 |
| Progressive Disclosure | Clear prerequisite reference to the shared skill, well-signaled 'See Also' section with one-level-deep references to related skills. Content is appropriately scoped to this single command without inlining shared concerns. | 3 / 3 |
| Total | | 11 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 10 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | | 10 / 11 (Passed) |
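The metadata_field warning above is typically resolved by making every metadata value a plain string. A minimal sketch, assuming the skill uses YAML frontmatter; the keys shown are illustrative, not taken from the skill:

```yaml
metadata:
  owner: "platform-team"   # illustrative key
  version: "1.0.0"         # quote values so numbers parse as strings, not floats
```

Unquoted values such as `1.0` or `true` are parsed as numbers or booleans by YAML, which is what trips a strings-only check like this one.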
c7c6646
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.