
gws-modelarmor

Google Model Armor: Filter user-generated content for safety.


Quality: 52%

Does it follow best practices?

Impact: Pending

No eval scenarios have been run

Security (by Snyk): Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/gws-modelarmor/SKILL.md

Quality

Discovery: 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is too brief and lacks both specific concrete actions and explicit trigger guidance. While mentioning 'Google Model Armor' provides some distinctiveness, the description fails to enumerate what specific safety filtering capabilities are offered and provides no 'Use when...' clause to guide skill selection.

Suggestions

Add a 'Use when...' clause with trigger terms like 'content moderation', 'toxicity detection', 'Model Armor API', 'GCP safety filtering', 'harmful content screening'.

List specific concrete actions such as 'Configures Model Armor policies, screens prompts and responses for toxicity, manages content safety filters on Google Cloud'.

Include natural keyword variations users might use: 'content safety', 'guardrails', 'harmful content', 'prompt injection detection', 'responsible AI'.
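
Putting these suggestions together, a sharpened description might look like the sketch below. This assumes the skill uses YAML frontmatter in SKILL.md; the wording is illustrative only, not the maintainer's text:

```yaml
# Hypothetical SKILL.md frontmatter sketch — wording is illustrative.
description: >
  Screen prompts and responses with Google Model Armor on Google Cloud:
  sanitize user prompts, sanitize model responses, and create filter
  templates. Use when the user asks about content moderation, toxicity
  detection, harmful content screening, prompt injection detection,
  guardrails, or GCP safety filtering.
```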

Dimension / Reasoning / Score

Specificity

Names the domain ('Google Model Armor') and a general action ('filter user-generated content for safety'), but does not list specific concrete actions like detecting toxicity, blocking harmful prompts, classifying content categories, etc.

2 / 3

Completeness

Provides a brief 'what' (filter content for safety) but completely lacks a 'when should Claude use it' clause. Per the rubric, a missing 'Use when...' clause caps completeness at 2, and the 'what' itself is also quite thin, warranting a score of 1.

1 / 3

Trigger Term Quality

Includes 'Google Model Armor', 'filter', 'safety', and 'user-generated content' which are somewhat relevant, but misses natural variations users might say like 'content moderation', 'toxicity detection', 'harmful content', 'guardrails', 'content filtering API', or 'GCP safety'.

2 / 3

Distinctiveness / Conflict Risk

'Google Model Armor' is a specific product name which helps distinctiveness, but 'filter user-generated content for safety' is generic enough to overlap with other content moderation or safety-related skills.

2 / 3

Total: 7 / 12

Passed

Implementation: 72%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a well-structured, concise hub skill that effectively delegates detailed operations to helper command skills. Its main weaknesses are the lack of a concrete end-to-end usage example and the absence of validation/verification guidance for safety-critical content filtering operations.

Suggestions

Add at least one concrete, executable example showing a complete Model Armor call (e.g., sanitizing a prompt with specific --params), so the skill is actionable without navigating to helper files.

Include a brief note on how to interpret or validate Model Armor responses (e.g., what a blocked/flagged result looks like) to improve workflow clarity for this safety-critical operation.
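
The second suggestion could be sketched as follows. Assuming the response follows the Model Armor REST API's shape — a `sanitizationResult` object whose `filterMatchState` is `MATCH_FOUND` or `NO_MATCH_FOUND` — a minimal check might look like this (the sample payload is fabricated for illustration):

```python
import json

# Illustrative payload shaped like a Model Armor sanitizeUserPrompt
# response; the values here are fabricated for the example.
raw = json.dumps({
    "sanitizationResult": {
        "filterMatchState": "MATCH_FOUND",
        "filterResults": {
            "rai": {"raiFilterResult": {"matchState": "MATCH_FOUND"}}
        },
    }
})

def is_blocked(response_json: str) -> bool:
    """Return True when any Model Armor filter matched the content."""
    result = json.loads(response_json).get("sanitizationResult", {})
    return result.get("filterMatchState") == "MATCH_FOUND"

print(is_blocked(raw))  # a matched filter means the content was flagged
```

A skill could pair a check like this with a short note telling the agent to treat `MATCH_FOUND` as "do not forward this content" rather than as an API error.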

Dimension / Reasoning / Score

Conciseness

Very lean and efficient. No unnecessary explanations of what Model Armor is or how filtering works. Every line serves a purpose—prerequisite, syntax, helper commands, and discovery commands.

3 / 3

Actionability

Provides concrete CLI commands for discovery (`gws modelarmor --help`, `gws schema`) and links to helper command skills, but the skill itself doesn't include executable examples of actual usage (e.g., a complete sanitize-prompt call with params). The actionability is delegated to linked files.

2 / 3

Workflow Clarity

There's a clear discovery workflow (browse resources → inspect method → build params), but no explicit validation or error-handling steps. For a skill that filters content for safety, some guidance on verifying results or handling rejection responses would strengthen the workflow.

2 / 3

Progressive Disclosure

Excellent progressive disclosure: concise overview with well-signaled one-level-deep references to helper command skills (sanitize-prompt, sanitize-response, create-template) and a prerequisite link to shared auth/security rules.

3 / 3

Total: 10 / 12

Passed

Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

metadata_field

'metadata' should map string keys to string values

Warning

Total: 10 / 11

Passed

Repository: googleworkspace/cli (Reviewed)
