
inference-latency-profiler

Inference Latency Profiler - Auto-activating skill for ML Deployment. Triggers on: inference latency profiler, inference latency profiler. Part of the ML Deployment skill category.

Install with Tessl CLI:

```
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill inference-latency-profiler
```

Overall score: 19%

Activation: 7%

This description is essentially a placeholder that provides almost no useful information for skill selection. It only states the skill name and category without describing capabilities, actions, or meaningful trigger conditions. The duplicate trigger term suggests this was auto-generated without human refinement.

Suggestions:

- Add specific actions the skill performs, e.g., 'Measures model inference time, identifies bottlenecks in prediction pipelines, generates latency reports, and suggests optimization strategies.'
- Include a 'Use when...' clause with natural trigger terms like 'slow predictions', 'model latency', 'inference speed', 'deployment performance', 'prediction time optimization'.
- Remove the duplicate trigger term and expand with variations users might actually say when needing this skill; the sketch after this list shows one possible rewrite.
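
Pulling these suggestions together, a revised description might look something like the sketch below. This is illustrative only: the field names are assumed from the criteria referenced elsewhere in this review (description, allowed-tools, metadata frontmatter keys), not from an authoritative SKILL.md schema, and the wording simply combines the examples above.

```yaml
# Illustrative sketch only; field names are assumed from the criteria
# mentioned in this review, not from a published SKILL.md schema.
name: inference-latency-profiler
description: >
  Measures model inference time, identifies bottlenecks in prediction
  pipelines, generates latency reports (P50/P95/P99, throughput), and
  suggests optimization strategies. Use when the user mentions slow
  predictions, model latency, inference speed, deployment performance,
  or prediction time optimization.
```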

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description only names the skill ('Inference Latency Profiler') without describing any concrete actions. There are no verbs indicating what the skill actually does: no mention of measuring, analyzing, optimizing, or any specific capabilities. | 1 / 3 |
| Completeness | The description fails to answer 'what does this do' beyond the name, and provides no 'when should Claude use it' guidance. The 'Triggers on' section just repeats the skill name rather than providing meaningful trigger scenarios. | 1 / 3 |
| Trigger Term Quality | The trigger terms listed are just the skill name repeated twice ('inference latency profiler, inference latency profiler'). Missing natural user terms like 'slow inference', 'model speed', 'prediction time', 'latency issues', 'performance profiling', etc. | 1 / 3 |
| Distinctiveness / Conflict Risk | The term 'Inference Latency Profiler' is fairly specific to ML deployment contexts, which provides some distinctiveness. However, without concrete actions described, it could overlap with general ML performance or monitoring skills. | 2 / 3 |
| Total | | 5 / 12 (Passed) |

Implementation: 0%

This skill is essentially a placeholder with no substantive content. It describes what an inference latency profiler skill would do without providing any actual guidance, code, tools, or methodology. The entire content could be replaced with 'Help with inference latency profiling' and convey the same information.

Suggestions:

- Add concrete code examples showing how to profile inference latency (e.g., using Python's time module, PyTorch profiler, or TensorFlow profiler); the framework-agnostic sketch after this list is one starting point.
- Include specific metrics to measure (P50/P95/P99 latency, throughput, batch size impact) and how to collect them.
- Provide a clear workflow: 1) instrument the model, 2) run profiling, 3) analyze bottlenecks, 4) optimize, with validation steps.
- Add tool-specific guidance for common serving frameworks (TensorRT, ONNX Runtime, TorchServe) with executable examples; the ONNX Runtime sketch below shows one possible shape.
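
As a concrete starting point for the first three suggestions, a minimal, framework-agnostic sketch using only Python's standard library and NumPy is shown below; the `predict` callable, sample input, and iteration counts are placeholders, not anything defined by this skill.

```python
import time
import numpy as np

def profile_latency(predict, sample_input, warmup=10, iterations=100):
    """Time a prediction callable and report P50/P95/P99 latency and throughput.

    `predict` and `sample_input` are placeholders for whatever model and
    input is actually being profiled.
    """
    # Warm up so one-time costs (JIT compilation, cache fills) don't skew results.
    for _ in range(warmup):
        predict(sample_input)

    # Collect per-call wall-clock latencies.
    latencies = []
    for _ in range(iterations):
        start = time.perf_counter()
        predict(sample_input)
        latencies.append(time.perf_counter() - start)

    # Summarize with the percentile metrics called out in the suggestions above.
    latencies_ms = np.array(latencies) * 1000.0
    return {
        "p50_ms": float(np.percentile(latencies_ms, 50)),
        "p95_ms": float(np.percentile(latencies_ms, 95)),
        "p99_ms": float(np.percentile(latencies_ms, 99)),
        "throughput_per_s": iterations / float(np.sum(latencies)),
    }
```

Calling the same helper with inputs of different batch sizes covers the batch-size-impact point, and the returned dictionary gives the P50/P95/P99 and throughput numbers a latency report would summarize.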
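
For the framework-specific suggestion, a comparable measurement against ONNX Runtime might look like the sketch below; the model path and input shape are assumptions for illustration, and TensorRT or TorchServe would need their own variants.

```python
import time
import numpy as np
import onnxruntime as ort

# "model.onnx" and the (1, 3, 224, 224) float32 input are assumptions for
# illustration; substitute the actual exported model and its real input spec.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)

latencies_ms = []
for _ in range(100):
    start = time.perf_counter()
    session.run(None, {input_name: dummy_input})
    latencies_ms.append((time.perf_counter() - start) * 1000.0)

print(f"P50 {np.percentile(latencies_ms, 50):.2f} ms | "
      f"P95 {np.percentile(latencies_ms, 95):.2f} ms | "
      f"P99 {np.percentile(latencies_ms, 99):.2f} ms")
```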

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content is padded with generic boilerplate that explains nothing Claude doesn't already know. Phrases like 'provides automated assistance' and 'follows industry best practices' are meaningless filler with zero actionable information. | 1 / 3 |
| Actionability | No concrete code, commands, or specific guidance is provided. The skill describes what it does in abstract terms but never shows how to actually profile inference latency: no tools, no metrics, no implementation details. | 1 / 3 |
| Workflow Clarity | No workflow is defined. The skill claims to provide 'step-by-step guidance' but contains zero actual steps. There is no sequence, no validation checkpoints, and no process to follow. | 1 / 3 |
| Progressive Disclosure | The content is a monolithic block of vague descriptions with no structure for discovery. No references to detailed documentation, no links to examples or advanced topics, and no meaningful organization of content. | 1 / 3 |
| Total | | 4 / 12 (Passed) |

Validation: 69%

11 / 16 Passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| description_trigger_hint | Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...') | Warning |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| metadata_version | 'metadata' field is not a dictionary | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| body_steps | No step-by-step structure detected (no ordered list); consider adding a simple workflow | Warning |
| Total | | 11 / 16 (Passed) |

