Build and deploy production-ready generative AI agents using Vertex AI, Gemini models, and Google Cloud infrastructure with RAG, function calling, and multi-modal capabilities. Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.
Quality — 37%

Does it follow best practices?

Impact: Pending — no eval scenarios have been run. Advisory: suggest reviewing before use.

Optimize this skill with Tessl: `npx tessl skill review --optimize ./plugins/jeremy-vertex-ai/skills/vertex-agent-builder/SKILL.md`

Discovery — 32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description names relevant technologies (Vertex AI, Gemini, Google Cloud) and some capabilities (RAG, function calling, multi-modal), but its trigger guidance is pure filler with no actionable content. The 'Use when appropriate context detected' and 'Trigger with relevant phrases based on skill purpose' clauses are completely vacuous and fail to help Claude determine when to select this skill.
Suggestions

- Replace the generic 'Use when appropriate context detected' with explicit trigger conditions, e.g., 'Use when the user asks about building AI agents on Google Cloud, deploying Gemini-based applications, or implementing RAG pipelines with Vertex AI.'
- Add natural trigger terms users would actually say, such as 'Vertex AI agent', 'Gemini API', 'Google Cloud AI', 'ADK', 'Agent Builder', 'grounding', '.ipynb notebooks for Vertex'.
- List more concrete actions beyond buzzwords — e.g., 'Creates agent configurations, sets up Vertex AI Search datastores, implements tool/function definitions, deploys agents to Agent Engine, configures multi-turn conversations.'
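One way to apply these suggestions is a rewritten frontmatter description. The wording below is illustrative only — it is a hypothetical rewrite, not the skill's actual metadata:

```yaml
description: >
  Build and deploy generative AI agents on Google Cloud using Vertex AI and
  Gemini models, including RAG pipelines, function calling, and multi-modal
  inputs. Use when the user asks about building AI agents on Google Cloud,
  deploying Gemini-based applications, or setting up Vertex AI Search
  datastores, or mentions terms like "Vertex AI agent", "Agent Builder",
  "Agent Engine", "grounding", or "ADK".
```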
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (generative AI agents, Vertex AI, Gemini models) and some actions (build, deploy, RAG, function calling, multi-modal), but the actions are more like buzzword lists than concrete specific tasks. | 2 / 3 |
| Completeness | The 'what' is partially addressed but the 'when' is essentially absent — 'Use when appropriate context detected' and 'Trigger with relevant phrases based on skill purpose' are meaningless filler that provide no explicit trigger guidance whatsoever. | 1 / 3 |
| Trigger Term Quality | Includes some relevant keywords like 'Vertex AI', 'Gemini models', 'RAG', 'function calling', and 'Google Cloud', but the trigger guidance is completely generic ('Trigger with relevant phrases based on skill purpose') and adds no actual trigger terms a user would say. | 2 / 3 |
| Distinctiveness / Conflict Risk | The mention of Vertex AI, Gemini, and Google Cloud infrastructure provides some distinctiveness, but the broad scope ('generative AI agents') and vague trigger language could easily overlap with other AI/ML or cloud skills. | 2 / 3 |
| Total | | 7 / 12 — Passed |
Implementation — 42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill reads more like a project management checklist than an actionable technical guide. While it has good structure and progressive disclosure with references to detailed materials, it critically lacks any concrete code examples, specific CLI commands, API calls, or configuration snippets that would make it executable. The workflow steps are too abstract to guide Claude through actual implementation.
Suggestions

- Add at least one concrete, executable code example showing a minimal Vertex AI agent setup (e.g., Python code using the vertexai SDK to create and deploy a basic agent with Agent Engine).
- Replace abstract instructions like 'Choose model + region' with specific guidance, e.g., a table of recommended model/region combinations or a concrete configuration snippet.
- Add explicit validation commands at each deployment step, such as `gcloud ai endpoints describe ...` or a Python health-check script, to create proper feedback loops.
- Include a concrete RAG setup example with actual code for chunking, embedding, and index creation rather than just describing the outcome.
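To make the last suggestion concrete, here is a minimal, framework-free sketch of the chunking step of a RAG pipeline — fixed-size character windows with overlap. The chunk size and overlap values are illustrative assumptions, not tuned recommendations; the embedding and index-creation steps are Vertex-specific and deliberately left as comments:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows for embedding.

    chunk_size and overlap are illustrative defaults, not tuned values.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
        if start + chunk_size >= len(text):
            break
    # Next steps (not shown): embed each chunk with a Vertex AI embedding
    # model and load the vectors into a Vertex AI Search datastore or
    # Vector Search index.
    return chunks
```

The overlap keeps sentences that straddle a boundary visible in two adjacent chunks, which tends to improve retrieval recall at a small storage cost.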
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is reasonably efficient and avoids explaining basic concepts, but several sections (Prerequisites, Error Handling, Examples) are somewhat generic and could be tightened. The 'Overview' bullet points add little beyond what the title and first sentence already convey. | 2 / 3 |
| Actionability | The skill provides no executable code, no concrete commands, no specific API calls, and no copy-paste-ready examples. The instructions are high-level process descriptions ('Clarify the agent's job', 'Choose model + region') rather than concrete guidance. The examples section describes outcomes but doesn't show actual code or configuration. | 1 / 3 |
| Workflow Clarity | There is a numbered sequence of steps, and the error handling section addresses common failure modes. However, validation checkpoints are vague ('add evaluation', 'verify endpoints + permissions') without explicit feedback loops or concrete validation commands. For a deployment-oriented skill involving potentially destructive/costly operations, this lacks the specificity needed for a 3. | 2 / 3 |
| Progressive Disclosure | The skill appropriately keeps the main file as a concise overview and references a detailed guide (`SKILL.full.md`), repo standards, and external documentation with clear one-level-deep links. Navigation is well-signaled and content is appropriately split. | 3 / 3 |
| Total | | 8 / 12 — Passed |
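The kind of feedback loop the Workflow Clarity row asks for can be sketched in plain Python. The probe itself is stubbed out here — in practice it would be a real check such as `gcloud ai endpoints describe ...` or a Vertex AI SDK call; `check` below is any zero-argument callable and the timeout/interval values are illustrative assumptions:

```python
import time
from typing import Callable


def wait_until_ready(check: Callable[[], bool], timeout_s: float = 60.0,
                     interval_s: float = 5.0) -> bool:
    """Poll a deployment check until it passes or the timeout expires.

    `check` stands in for a real probe (e.g., describing the endpoint via
    the gcloud CLI or the Vertex AI SDK); it should return True once the
    deployment is healthy.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return False
```

Running a loop like this after each deployment step turns a vague 'verify endpoints + permissions' instruction into an explicit pass/fail checkpoint.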
Validation — 81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure — 9 / 11 Passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 Passed |