
agentic-actions-auditor

Audits GitHub Actions workflows for security vulnerabilities in AI agent integrations, including Claude Code Action, Gemini CLI, OpenAI Codex, and GitHub AI Inference. Detects attack vectors where attacker-controlled input reaches AI agents running in CI/CD pipelines.

68

Quality

83%

Does it follow best practices?

Impact

No eval scenarios have been run

Security by Snyk

Advisory

Suggest reviewing before use


Quality

Discovery

82%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong, specific description that clearly identifies a narrow and well-defined domain—security auditing of AI agent integrations in GitHub Actions workflows. It names concrete tools and attack patterns, providing excellent trigger terms. The main weakness is the absence of an explicit 'Use when...' clause, which would help Claude know exactly when to select this skill.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when the user asks to review GitHub Actions workflows for security issues, audit CI/CD pipelines with AI agents, or check for prompt injection risks in automated workflows.'
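Applied to this skill's frontmatter, the suggestion might look like the following sketch. The field layout and wording are illustrative, not the skill's actual SKILL.md:

```yaml
---
name: agentic-actions-auditor
description: >
  Audits GitHub Actions workflows for security vulnerabilities in AI agent
  integrations (Claude Code Action, Gemini CLI, OpenAI Codex, GitHub AI
  Inference). Use when the user asks to review GitHub Actions workflows for
  security issues, audit CI/CD pipelines with AI agents, or check for prompt
  injection risks in automated workflows.
---
```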

Dimension / Reasoning / Score

Specificity

Lists multiple specific concrete actions: audits GitHub Actions workflows, detects security vulnerabilities in AI agent integrations, names specific tools (Claude Code Action, Gemini CLI, OpenAI Codex, GitHub AI Inference), and identifies specific attack vectors (attacker-controlled input reaching AI agents in CI/CD pipelines).

3 / 3
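The core attack vector named here, attacker-controlled input reaching an AI agent step, can be illustrated with a minimal detection sketch. The regex and the subset of attacker-controlled GitHub expression contexts below are assumptions for illustration, not the skill's actual detection table:

```python
import re

# Illustrative subset of GitHub expression contexts an attacker can influence
# (e.g. issue titles/bodies, PR bodies, comments interpolated via ${{ ... }}).
ATTACKER_CONTROLLED = re.compile(
    r"\$\{\{\s*github\.event\.(issue|pull_request|comment|review)"
    r"\.(title|body)\s*\}\}"
)

def find_tainted_lines(workflow_yaml: str) -> list[int]:
    """Return 1-based line numbers where attacker-controlled input
    is interpolated directly into the workflow text."""
    return [
        i
        for i, line in enumerate(workflow_yaml.splitlines(), start=1)
        if ATTACKER_CONTROLLED.search(line)
    ]

workflow = """\
- uses: anthropics/claude-code-action@v1
  with:
    prompt: "Triage this issue: ${{ github.event.issue.body }}"
"""
print(find_tainted_lines(workflow))  # → [3]
```

A real auditor would parse the YAML rather than scan lines, but the principle is the same: flag any expression that carries untrusted event data into an AI agent's input.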

Completeness

Clearly answers 'what does this do' (audits workflows for security vulnerabilities, detects attack vectors), but lacks an explicit 'Use when...' clause or equivalent trigger guidance. The 'when' is only implied by the nature of the actions described.

2 / 3

Trigger Term Quality

Includes strong natural keywords users would say: 'GitHub Actions', 'security', 'vulnerabilities', 'CI/CD', 'Claude Code Action', 'Gemini CLI', 'OpenAI Codex', 'attack vectors', 'workflows'. These are terms a user concerned about pipeline security would naturally use.

3 / 3

Distinctiveness / Conflict Risk

Highly distinctive niche: the intersection of GitHub Actions security auditing specifically for AI agent integrations in CI/CD pipelines. This is unlikely to conflict with general security skills, general CI/CD skills, or general AI skills due to its very specific focus.

3 / 3

Total: 11 / 12 (Passed)

Implementation

85%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a high-quality, well-structured skill that provides comprehensive security audit guidance for AI agent integrations in GitHub Actions. Its greatest strengths are the highly actionable step-by-step methodology, excellent progressive disclosure to reference files, and clear workflow with appropriate validation checkpoints. The main area for improvement is conciseness — some sections (rationalizations, bash safety rules, URL parsing) could be tightened without losing clarity.

Dimension / Reasoning / Score

Conciseness

The skill is thorough but includes some unnecessary verbosity. The 'When to Use' / 'When NOT to Use' sections are somewhat redundant, and the 'Rationalizations to Reject' section, while valuable, is lengthy. The methodology itself is well-structured but could be tightened in places (e.g., the URL parsing table and bash safety rules explain things Claude already knows). However, most content earns its place given the complexity of the domain.

2 / 3

Actionability

The skill provides highly concrete, executable guidance: specific `gh api` commands with exact flags and jq expressions, precise action reference matching rules with a lookup table, detailed field names to capture per action type, a structured vector detection table with specific patterns to match, and a complete report format with section ordering. The audit methodology is step-by-step with clear inputs and outputs at each stage.

3 / 3
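The review mentions `gh api` commands with jq expressions for enumerating workflows. For illustration, the same extraction can be done in Python on a sample shaped like the GitHub contents-API response; the endpoint shape is real, but the sample data is invented:

```python
import json

# Sample shaped like GET /repos/{owner}/{repo}/contents/.github/workflows
sample = json.loads("""
[
  {"name": "ci.yml", "path": ".github/workflows/ci.yml", "type": "file"},
  {"name": "claude.yml", "path": ".github/workflows/claude.yml", "type": "file"}
]
""")

# Equivalent of: gh api ... --jq '.[] | select(.type == "file") | .name'
names = [entry["name"] for entry in sample if entry["type"] == "file"]
print(names)  # → ['ci.yml', 'claude.yml']
```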

Workflow Clarity

The 5-step methodology is clearly sequenced with each step building on the previous one. Validation checkpoints are present: Step 1 stops if no workflows found, Step 2 stops if no AI actions found, Step 4 explicitly references security context from Step 3. Error handling is specified for remote analysis (401, 404, empty directory). The cross-file resolution has a depth limit. The report structure includes clean-repo output for the zero-findings case.

3 / 3
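The stop-early checkpoints described above can be sketched as control flow. The helper names and canned data here are hypothetical, inferred from the review rather than taken from the skill itself (the real steps use `gh api` and workflow-file parsing):

```python
# Hypothetical stand-ins for the skill's real steps, operating on canned data.
def list_workflows(repo):
    return repo.get("workflows", [])

def find_ai_actions(workflows):
    return [w for w in workflows if "claude-code-action" in w]

def audit(repo):
    workflows = list_workflows(repo)        # Step 1: enumerate workflows
    if not workflows:
        return "No workflows found; nothing to audit."
    ai_steps = find_ai_actions(workflows)   # Step 2: locate AI agent actions
    if not ai_steps:
        return "No AI agent actions found."
    # Steps 3-5: gather security context, match vectors, render the report.
    return f"Auditing {len(ai_steps)} AI action(s)..."

print(audit({"workflows": []}))
print(audit({"workflows": ["anthropics/claude-code-action@v1"]}))
```

The early returns mirror the validation checkpoints the review credits: each step either produces input for the next or terminates with a clear, clean-repo result.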

Progressive Disclosure

The skill uses excellent progressive disclosure with a clear overview methodology in the main file and well-signaled one-level-deep references to detailed materials: action-profiles.md, foundations.md, individual vector files (vector-a through vector-i), and cross-file-resolution.md. The references section at the bottom provides a clean navigation map. The main file contains enough context to understand each vector's purpose without needing to read the reference files.

3 / 3

Total: 11 / 12 (Passed)

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning
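The warning's own suggestion, moving unknown keys under metadata, might look like this sketch. The offending key name is invented for illustration; the spec's exact allowed top-level keys should be checked:

```yaml
# Before: an unrecognized top-level key triggers the warning
---
name: agentic-actions-auditor
author: example-maintainer
---

# After: non-standard keys nested under metadata
---
name: agentic-actions-auditor
metadata:
  author: example-maintainer
---
```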

Total: 10 / 11 (Passed)

Repository
sickn33/antigravity-awesome-skills
Reviewed


Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.