
senior-computer-vision

Computer vision engineering skill for object detection, image segmentation, and visual AI systems. Covers CNN and Vision Transformer architectures, YOLO/Faster R-CNN/DETR detection, Mask R-CNN/SAM segmentation, and production deployment with ONNX/TensorRT. Includes PyTorch, torchvision, Ultralytics, Detectron2, and MMDetection frameworks. Use when building detection pipelines, training custom models, optimizing inference, or deploying vision systems.

Overall score: 87

Quality: 78% (Does it follow best practices?)

Impact: 92%, 1.37x (average score across 6 eval scenarios)

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./engineering-team/senior-computer-vision/SKILL.md

Quality

Discovery

100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is an excellent skill description that comprehensively covers the computer vision domain with specific architectures, frameworks, and use cases. It uses proper third-person voice, includes a clear 'Use when' clause with actionable triggers, and is highly distinctive from other potential AI/ML skills. The description is information-dense without being verbose, making it easy for Claude to correctly select this skill when relevant.

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Lists multiple specific, concrete actions and technologies: object detection, image segmentation, CNN/Vision Transformer architectures, YOLO/Faster R-CNN/DETR detection, Mask R-CNN/SAM segmentation, deployment with ONNX/TensorRT, and specific frameworks such as PyTorch, torchvision, Ultralytics, Detectron2, and MMDetection. | 3 / 3 |
| Completeness | Clearly answers both 'what' (object detection, segmentation, specific architectures and frameworks) and 'when', with an explicit 'Use when building detection pipelines, training custom models, optimizing inference, or deploying vision systems' clause. | 3 / 3 |
| Trigger Term Quality | Excellent coverage of natural terms users would say: 'object detection', 'image segmentation', 'YOLO', 'Faster R-CNN', 'DETR', 'SAM', 'computer vision', 'PyTorch', 'ONNX', 'TensorRT', 'detection pipelines', 'vision systems'. These are all terms practitioners naturally use when seeking help with CV tasks. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a clear niche in computer vision engineering. The specific mention of detection/segmentation models (YOLO, Faster R-CNN, DETR, Mask R-CNN, SAM) and CV-specific frameworks (Detectron2, MMDetection, Ultralytics) makes it very unlikely to conflict with general ML, NLP, or other AI skills. | 3 / 3 |
| Total | | 12 / 12 |

Passed

Implementation

57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a comprehensive computer vision skill with good structure and progressive disclosure, but it suffers from verbosity: many tables and lists catalog information Claude already knows (architecture names, framework lists, metric definitions). The workflows are well-sequenced but rely heavily on custom scripts that aren't provided, reducing true actionability. Validation and error-recovery steps are present but incomplete, particularly around what to do when metrics don't meet targets.

Suggestions

- Remove or drastically reduce the 'Core Expertise', 'Tech Stack', and architecture comparison tables; Claude already knows these frameworks and their trade-offs. Move detailed architecture tables to a reference file.

- Add explicit feedback loops to workflows. For example, in Workflow 1 Step 6, specify: 'If mAP@50 < 0.7, increase epochs, add augmentations, or switch to a larger model variant, then re-train and re-evaluate.' Similarly, in Workflow 2, specify what to do if the accuracy drop exceeds the threshold after quantization.

- Clarify whether the custom scripts (vision_model_trainer.py, inference_optimizer.py, dataset_pipeline_builder.py) exist in the project or are aspirational; if they don't exist, replace them with real executable commands using actual framework CLIs.

- Trim the 'Expected output' blocks, which are illustrative mock outputs that don't add actionable guidance; a one-line description of what to look for would be more token-efficient.
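The feedback-loop suggestion above can be sketched as a small decision helper. This is a minimal illustration, not part of the skill: the `next_step` name, the 0.7 mAP@50 target, the 300-epoch cap, and the model-variant names are all assumed values chosen for the example.

```python
def next_step(map50: float, epochs: int, model: str,
              target: float = 0.7, max_epochs: int = 300) -> str:
    """Decide what to do after an evaluation run.

    Thresholds (0.7 mAP@50 target, 300-epoch cap) are illustrative
    defaults, not values taken from the skill itself.
    """
    if map50 >= target:
        return "accept"  # metric meets target: stop iterating
    if epochs < max_epochs:
        return "retrain: more epochs + augmentations"
    if model != "yolov8x":
        return "retrain: switch to larger model variant"
    return "escalate: revisit dataset quality"

# Example: first eval misses the target, with training budget left
print(next_step(map50=0.62, epochs=100, model="yolov8n"))
# -> retrain: more epochs + augmentations
```

Encoding the decision as an explicit if/else ladder is exactly the kind of "what to do when metrics fail" guidance the review says Workflow 1 is missing.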

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is quite long (~350 lines), with several tables and sections that are informational but not strictly necessary for Claude to act on. The 'Core Expertise' and 'Tech Stack' sections list things Claude already knows. The architecture selection tables, while useful, are verbose and could live in a reference file. The workflow sections, however, are reasonably efficient, with concrete commands. | 2 / 3 |
| Actionability | The skill provides many concrete bash commands and configuration examples, but they reference custom scripts (scripts/vision_model_trainer.py, scripts/inference_optimizer.py, scripts/dataset_pipeline_builder.py) that may not exist, making them not truly executable. The YOLO and Detectron2 commands are real and actionable, but the custom-script commands are essentially pseudocode dressed as bash. The 'Expected output' blocks are illustrative rather than verifiable. | 2 / 3 |
| Workflow Clarity | The three workflows have clear step sequences and are well-structured. However, validation checkpoints are weak: Step 6 in Workflow 1 lists metrics to analyze but doesn't specify what to do if they fail. The ONNX verification in Workflow 2 Step 3 is a good validation step, but there's no explicit feedback loop for when benchmarks don't meet targets or when accuracy drops exceed thresholds after quantization. | 2 / 3 |
| Progressive Disclosure | The skill has a clear table of contents, well-organized sections progressing from quick start to detailed workflows to reference material, and appropriately points to external reference files (references/reference-docs-and-commands.md, references/computer_vision_architectures.md, etc.) for deeper content. Navigation is straightforward and one level deep. | 3 / 3 |
| Total | | 9 / 12 |

Passed
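The missing post-quantization checkpoint flagged under Workflow Clarity could be as simple as a relative-drop guard. A minimal sketch follows; the `quantization_acceptable` name and the 1% default budget are illustrative assumptions, not values from the skill.

```python
def quantization_acceptable(fp32_map50: float, int8_map50: float,
                            max_rel_drop: float = 0.01) -> bool:
    """Return True if the quantized model keeps accuracy within a
    relative-drop budget (1% by default -- an illustrative value)."""
    if fp32_map50 <= 0:
        raise ValueError("baseline mAP must be positive")
    rel_drop = (fp32_map50 - int8_map50) / fp32_map50
    return rel_drop <= max_rel_drop

print(quantization_acceptable(0.70, 0.695))  # ~0.7% drop -> True
print(quantization_acceptable(0.70, 0.68))   # ~2.9% drop -> False
```

A skill that names this threshold explicitly gives the agent a concrete pass/fail gate, plus a defined fallback (e.g. re-quantize with calibration data, or ship the FP16 model) when the check fails.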

Validation

100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 Passed

Validation for skill structure

No warnings or errors.

Repository: alirezarezvani/claude-skills (reviewed)
