Google Model Armor: Sanitize a model response through a Model Armor template.
Install with Tessl CLI
npx tessl i github:googleworkspace/cli --skill gws-modelarmor-sanitize-response
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 32%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description identifies a specific Google product (Model Armor) and a general action (sanitize responses), but lacks critical details about when to use it and what specific sanitization capabilities are available. The absence of a 'Use when...' clause significantly limits Claude's ability to select this skill appropriately from a large skill library.
Suggestions
Add a 'Use when...' clause with explicit triggers like 'Use when checking model outputs for harmful content, applying safety filters, or when the user mentions Model Armor, content moderation, or response sanitization'
Expand the capability description to include specific actions like 'detect harmful content, filter unsafe responses, apply safety policies, check for policy violations'
Include natural user terms like 'content safety', 'harmful content', 'moderation', 'filter responses', 'safety check' to improve trigger term coverage
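The suggestions above could be combined into a revised frontmatter description. The sketch below is illustrative only: it assumes the common skill-frontmatter convention of `name` and `description` fields, and the exact wording is a hypothetical example, not the skill's actual metadata.

```yaml
---
name: gws-modelarmor-sanitize-response
description: >
  Sanitize a model response through a Google Model Armor template to detect
  harmful content, filter unsafe responses, and check for policy violations.
  Use when checking model outputs for harmful content, applying safety
  filters or content moderation, or when the user mentions Model Armor,
  content safety, response sanitization, or a safety check on model output.
---
```

A description in this shape covers the product name, the concrete actions, an explicit 'Use when...' clause, and the natural user terms flagged in the trigger-term suggestion.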
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (Google Model Armor) and one action (sanitize a model response), but lacks comprehensive detail about what sanitization entails or what specific capabilities are available. | 2 / 3 |
| Completeness | Describes what it does (sanitize responses through Model Armor) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. | 1 / 3 |
| Trigger Term Quality | Includes 'Model Armor' and 'sanitize' as relevant terms, but misses common variations users might say like 'content filtering', 'safety check', 'moderation', or 'harmful content detection'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'Google Model Armor' is a specific product name, which helps distinctiveness, but 'sanitize a model response' is vague enough to overlap with other content moderation or safety-related skills. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation
Implementation — 100%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is an exemplary simple skill that maximizes clarity while minimizing token usage. It provides complete actionable guidance with proper cross-references to shared context and related commands. The structure is clean and the content assumes Claude's competence without over-explaining.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Extremely lean and efficient. No unnecessary explanations of what Model Armor is or how sanitization works. Every line serves a purpose: flags, examples, and cross-references only. | 3 / 3 |
| Actionability | Provides complete, copy-paste-ready commands with clear flag documentation. The examples show both direct text input and piped input patterns, covering the main use cases. | 3 / 3 |
| Workflow Clarity | This is a simple single-command skill with no multi-step process. The usage is unambiguous, and the Tips section clearly distinguishes when to use this skill versus the alternative (+sanitize-prompt). | 3 / 3 |
| Progressive Disclosure | Excellent structure with a prerequisite reference, concise main content, and clear See Also links to related skills. References are one level deep and well signaled. | 3 / 3 |
| Total | | 12 / 12 — Passed |
Validation
Validation — 72%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_version | 'metadata.version' is missing | Warning |
| metadata_field | 'metadata' should map string keys to string values | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 8 / 11 — Passed |
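All three warnings concern the frontmatter, so a single frontmatter revision could clear them. The sketch below is a hypothetical example (the `category` key and its value are invented for illustration); it shows the pattern the warnings point at: add `metadata.version`, keep `metadata` as a string-to-string map, and move any unknown top-level frontmatter keys under `metadata`.

```yaml
---
name: gws-modelarmor-sanitize-response
description: Sanitize a model response through a Model Armor template.
metadata:
  version: "1.0.0"      # addresses the missing 'metadata.version' warning
  category: "security"  # unknown top-level keys moved here, as string values
---
```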
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.