
codex-review

OpenAI Codex CLI code review with GPT-5.2-Codex, CI/CD integration


Quality: 37% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Advisory (suggest reviewing before use)

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/codex-review/SKILL.md

Quality

Discovery

32%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description is a terse noun-phrase list that names the tool ecosystem (OpenAI Codex CLI, GPT-5.2-Codex) and broad capabilities (code review, CI/CD integration) but fails to describe concrete actions or provide any 'Use when...' guidance. It reads more like a feature tag line than a skill description, making it difficult for Claude to reliably select this skill at the right time.

Suggestions

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks for code review via OpenAI Codex CLI, wants to integrate GPT-5.2-Codex into their CI/CD pipeline, or mentions automated code review.'

List specific concrete actions the skill performs, such as 'Runs code review using Codex CLI, generates review comments on pull requests, configures CI/CD pipelines for automated code analysis, and summarizes code quality findings.'

Include common user-facing synonyms and variations like 'PR review', 'pull request feedback', 'automated code review', 'code quality checks' to improve trigger term coverage.
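Putting the three suggestions together, a sharpened description could look like the frontmatter below. This is an illustrative sketch only: the description text is assembled from the suggestions above, not taken from the skill itself.

```shell
# Write a hypothetical SKILL.md with a 'Use when...' clause and
# natural trigger terms. The wording is an example, not the real skill.
cat > SKILL.md <<'EOF'
---
name: codex-review
description: >
  Run automated code review with OpenAI Codex CLI (GPT-5.2-Codex).
  Use when the user asks for a code review, PR review, or pull request
  feedback, wants automated code quality checks, or mentions integrating
  Codex-based review into a CI/CD pipeline.
---
EOF

# Show the start of the generated file.
head -n 4 SKILL.md
```

A description in this shape covers the 'what' (run automated review with Codex CLI), the 'when' (explicit trigger clause), and the user-facing synonyms ('PR review', 'pull request feedback', 'code quality checks') that the rubric looks for.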

Specificity (2 / 3): Names the domain (code review) and mentions specific tools (OpenAI Codex CLI, GPT-5.2-Codex, CI/CD integration), but doesn't list concrete actions like 'analyze pull requests', 'suggest fixes', or 'flag security issues'.

Completeness (1 / 3): Provides a partial 'what' (code review with specific tools) but completely lacks a 'when' clause or any explicit trigger guidance for when Claude should select this skill. Per rubric guidelines, missing 'Use when...' caps completeness at 2, and the 'what' is also weak, so this scores 1.

Trigger Term Quality (2 / 3): Includes some relevant keywords like 'code review', 'CI/CD', and 'Codex CLI', but misses common natural user terms like 'review my code', 'pull request', 'PR review', 'code quality', or 'linting'.

Distinctiveness / Conflict Risk (2 / 3): The mention of 'OpenAI Codex CLI' and 'GPT-5.2-Codex' provides some distinctiveness from generic code review skills, but 'code review' and 'CI/CD integration' are broad enough to overlap with other code quality or CI/CD skills.

Total: 7 / 12 (Passed)

Implementation

42%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill is highly actionable with excellent, executable code examples across multiple platforms, but it is far too verbose — it reads like comprehensive product documentation rather than a focused skill file. It lacks progressive disclosure (everything is inlined in one massive file despite mentioning a base.md), and workflow clarity suffers from missing validation checkpoints and error recovery guidance for CI/CD pipelines.

Suggestions

Split CI/CD platform configs (GitHub Actions, GitLab CI, Jenkins) into separate reference files and link to them from the main skill, reducing the main file to a concise overview with one primary example.

Move installation and authentication instructions into the referenced base.md file, since the skill header already says 'Load with: base.md + code-review.md'.

Add explicit validation checkpoints to CI/CD workflows — e.g., check if review.md was generated, validate JSON output against schema before posting, and define what happens when critical findings are detected (fail the pipeline?).

Remove the comparison table with Claude, shell completions section, and troubleshooting table — these are padding that Claude doesn't need in a skill file.
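The validation-checkpoint suggestion can be sketched as a small gate script that a pipeline would run between the review step and the posting step. The file names (review.md, review.json) and the severity field are assumptions for illustration; the sketch seeds its own sample files so it can run standalone, whereas in CI those files would come from the Codex run.

```shell
# Hypothetical CI checkpoint between "run review" and "post results".
set -e

# Simulated review artifacts (in a real pipeline, produced by the review step).
echo '## Review' > review.md
echo '{"findings":[{"severity":"high"}]}' > review.json

# 1. Checkpoint: the review artifact must exist and be non-empty.
[ -s review.md ] || { echo "review.md was not generated"; exit 1; }

# 2. Checkpoint: the JSON output must parse before anything is posted.
python3 -m json.tool review.json > /dev/null \
  || { echo "review.json is not valid JSON"; exit 1; }

# 3. Gate: decide what happens on critical findings (here: fail the build).
if grep -q '"severity":"critical"' review.json; then
  echo "critical findings detected: failing the pipeline"
  exit 1
fi

echo "checkpoints passed"
```

The point of the sketch is the ordering: existence check, then format validation, then an explicit pass/fail policy, so a missing or malformed review never gets silently posted and the merge-gating behavior is defined rather than implied.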

Conciseness (1 / 3): Extremely verbose at ~350+ lines. Includes extensive installation instructions (Node.js, brew, nvm, shell completions), authentication setup, full CI/CD configs for GitHub, GitLab, AND Jenkins, a comparison table with Claude, and troubleshooting, much of which Claude already knows or could infer. The skill tries to be a comprehensive reference manual rather than a lean skill file.

Actionability (3 / 3): Highly actionable with fully executable commands, complete YAML workflow configs, JSON schemas, TOML configs, and specific CLI flags. Every section provides copy-paste-ready code that can be used directly.

Workflow Clarity (2 / 3): Individual steps are clear (install, authenticate, run review, post results), but there are no explicit validation checkpoints or feedback loops. For CI/CD automation involving code review decisions, there's no guidance on what to do when the review fails, how to gate merges on findings, or how to verify the review output is valid before posting.

Progressive Disclosure (1 / 3): Monolithic wall of content with no references to external files. Everything from installation to CI/CD configs for three platforms to troubleshooting is inlined. The CI/CD configs for GitHub, GitLab, and Jenkins should be in separate reference files, and the installation/auth sections could live in a base.md (mentioned in the load instruction but not leveraged).

Total: 7 / 12 (Passed)
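The progressive-disclosure fix suggested above amounts to a file split. A minimal sketch of the resulting layout, with illustrative path and file names (the actual repository may organize its references differently):

```shell
# Hypothetical restructure: a lean SKILL.md plus per-topic reference files.
mkdir -p skills/codex-review/references

# Lean main file: overview plus one primary example, linking out for the rest.
touch skills/codex-review/SKILL.md

# Platform configs and setup move into references/ (names are illustrative).
for f in base.md github-actions.md gitlab-ci.md jenkins.md; do
  touch "skills/codex-review/references/$f"
done

ls skills/codex-review/references
```

With this shape, the main file stays under the line-count warning threshold and an agent only loads the platform-specific config it actually needs.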

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

skill_md_line_count: Warning. SKILL.md is long (512 lines); consider splitting into references/ and linking.

frontmatter_unknown_keys: Warning. Unknown frontmatter key(s) found; consider removing them or moving them to metadata.

Total: 9 / 11 (Passed)

Repository: alinaqi/claude-bootstrap (Reviewed)
