Google Model Armor: Filter user-generated content for safety.
61 · 52% (Does it follow best practices?)
Impact: Pending (no eval scenarios have been run)
Passed (no known issues)
Optimize this skill with Tessl:

npx tessl skill review --optimize ./skills/gws-modelarmor/SKILL.md

Quality
Discovery
32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description is too terse, lacking both specific concrete actions and explicit trigger guidance. Mentioning 'Google Model Armor' provides some distinctiveness, but the description does not enumerate the specific safety filtering capabilities available and omits a 'Use when...' clause that would help Claude know when to select this skill.
Suggestions
- Add a 'Use when...' clause with explicit triggers, e.g., 'Use when the user asks about content moderation, safety filtering, toxicity detection, or mentions Google Model Armor.'
- List specific concrete actions such as 'Screens prompts and responses for harmful content, detects toxicity, classifies safety violations, and applies content filtering policies using Google Model Armor.'
- Include natural keyword variations users might use: 'content moderation', 'harmful content detection', 'guardrails', 'safety checks', 'toxicity filtering'.
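Taken together, these suggestions imply a description along the following lines. This is a hypothetical rewrite for illustration, not the skill's actual frontmatter; the field names follow common skill conventions and are assumptions:

```yaml
# Hypothetical SKILL.md frontmatter sketch based on the suggestions above.
name: gws-modelarmor
description: >
  Screens prompts and responses for harmful content with Google Model Armor:
  detects toxicity, classifies safety violations, and applies content
  filtering policies. Use when the user asks about content moderation,
  safety filtering, guardrails, toxicity detection, harmful content
  detection, or mentions Google Model Armor.
```

Note how the rewrite front-loads concrete actions, then closes with an explicit 'Use when...' clause carrying the natural keyword variations.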
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain ('Google Model Armor') and a general action ('filter user-generated content for safety'), but does not list specific concrete actions like detecting toxicity, blocking harmful prompts, or classifying content categories. | 2 / 3 |
| Completeness | Provides a brief 'what' (filter content for safety) but completely lacks a 'Use when...' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when' caps completeness at 2, and the 'what' is also weak, so this scores a 1. | 1 / 3 |
| Trigger Term Quality | Includes 'Google Model Armor', 'filter', 'safety', and 'user-generated content', which are somewhat relevant, but misses natural variations users might say like 'content moderation', 'safety filtering', 'harmful content', 'toxicity detection', 'guardrails', or 'content policy'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'Google Model Armor' is a specific product name which helps with distinctiveness, but 'filter user-generated content for safety' is broad enough to overlap with other content moderation or safety-related skills. | 2 / 3 |
| Total | | 7 / 12 (Passed) |
Implementation
72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, concise hub skill that effectively delegates detailed operations to helper command skills. Its main weakness is the lack of a concrete end-to-end usage example and missing validation/error-handling guidance for safety-critical content filtering operations.
Suggestions
- Add at least one concrete, copy-paste-ready example showing a complete `gws modelarmor` invocation with `--params`/`--json` flags to improve actionability.
- Include a brief validation step or expected output example after a sanitize call, so Claude knows how to verify the filtering worked correctly.
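As a sketch of what such an example might look like: the review confirms a `gws modelarmor` command with `--params`/`--json` flags and a sanitize-prompt helper, but the exact subcommand spelling and parameter names below are assumptions and should be verified against `gws modelarmor --help` and `gws schema` before use:

```
# Hypothetical invocation; subcommand and parameter names are assumptions.
gws modelarmor sanitize-prompt \
  --params '{"text": "user input to screen"}' \
  --json
# A validation step would then inspect the JSON output for the filter
# verdict before passing the content onward.
```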
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Very lean and efficient. No unnecessary explanations of what Model Armor is or how filtering works. Every line serves a purpose: prerequisite, syntax, helper commands, and discovery commands. | 3 / 3 |
| Actionability | Provides concrete CLI commands for discovery (`gws modelarmor --help`, `gws schema`) and links to helper command skills, but the skill itself doesn't include executable examples of actual usage (e.g., a complete sanitize-prompt call with params). The actionability is delegated to linked files. | 2 / 3 |
| Workflow Clarity | There's a clear discovery workflow (browse resources → inspect method → build params), but no explicit validation or error-handling steps. For a skill that filters content for safety, some guidance on verifying results or handling filter failures would strengthen the workflow. | 2 / 3 |
| Progressive Disclosure | Excellent progressive disclosure: a concise overview with well-signaled, one-level-deep references to helper command skills (sanitize-prompt, sanitize-response, create-template) and a prerequisite link to shared auth/security rules. Navigation is clear and flat. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Validation
90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 checks passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| metadata_field | 'metadata' should map string keys to string values | Warning |
| Total | | 10 / 11 (Passed) |
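The single warning above can typically be cleared by ensuring every value under `metadata` is a plain string. The field name below is illustrative, not taken from the skill:

```yaml
# Before (triggers the warning): a non-string value under metadata
metadata:
  version: 1.2        # YAML parses this as a number
# After: string keys map to string values
metadata:
  version: "1.2"
```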