# fastapi-ml-endpoint

An auto-activating skill for ML deployment. Triggers on: "fastapi ml endpoint". Part of the ML Deployment skill category.

Quality: 11% (Does it follow best practices?)
Impact: 88% (1.02x average score across 3 eval scenarios)
Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl:

    npx tessl skill review --optimize ./planned-skills/generated/08-ml-deployment/fastapi-ml-endpoint/SKILL.md

## Evaluation results

### Deploy a Text Classification Model as a REST API

Production-ready ML serving endpoint. Scenario score: 93% (context uplift: +11%).

| Criteria | Without context | With context |
| --- | --- | --- |
| FastAPI framework used | 100% | 100% |
| Pydantic request model | 100% | 100% |
| Pydantic response model | 100% | 100% |
| Lifespan model loading | 100% | 100% |
| Async endpoint | 0% | 100% |
| Health check endpoint | 100% | 100% |
| HTTP error handling | 100% | 100% |
| Production config present | 60% | 100% |
| No hardcoded model path | 80% | 80% |
| Inference endpoint defined | 100% | 100% |
| Requirements or dependency list | 60% | 50% |

Without context: $0.7026 · 2m 26s · 33 turns · 36 in / 9,053 out tokens

With context: $0.4754 · 1m 49s · 28 turns · 101 in / 6,106 out tokens

### Add Operational Observability to an ML Inference Service

MLOps monitoring and observability. Scenario score: 95% (context uplift: +6%).

| Criteria | Without context | With context |
| --- | --- | --- |
| Health check endpoint | 100% | 100% |
| Readiness vs liveness distinction | 100% | 100% |
| Request logging | 100% | 100% |
| Structured log format | 70% | 100% |
| Inference latency tracking | 100% | 100% |
| Request counter or metrics endpoint | 100% | 100% |
| Error rate tracking | 100% | 100% |
| Model metadata exposed | 100% | 100% |
| Production startup logging | 70% | 80% |
| FastAPI + middleware approach | 50% | 70% |
| Dependency manifest present | 100% | 100% |

Without context: $0.5063 · 2m 4s · 28 turns · 28 in / 7,005 out tokens

With context: $0.5599 · 2m 2s · 32 turns · 113 in / 6,984 out tokens
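The observability criteria above (structured logs, latency tracking, request counters, error rates) can be illustrated framework-agnostically with the standard library. The JSON log shape, metric names, and `tracked` decorator here are illustrative assumptions, not the evaluated skill's output; in a FastAPI service the same bookkeeping would typically live in a middleware.

```python
import json
import logging
import time
from collections import Counter

logger = logging.getLogger("inference")
logging.basicConfig(level=logging.INFO, format="%(message)s")

# Simple in-process metrics: per-endpoint request/error counters plus latencies.
metrics = {"requests": Counter(), "latencies_ms": []}


def log_json(event: str, **fields) -> None:
    # Structured (JSON-lines) format so logs are machine-parseable.
    logger.info(json.dumps({"event": event, **fields}))


def tracked(endpoint: str):
    """Decorator that records latency, request count, and error rate."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            metrics["requests"][endpoint] += 1
            try:
                return fn(*args, **kwargs)
            except Exception:
                metrics["requests"][f"{endpoint}.errors"] += 1
                raise
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                metrics["latencies_ms"].append(elapsed_ms)
                log_json("request", endpoint=endpoint,
                         latency_ms=round(elapsed_ms, 2))
        return inner
    return wrap


@tracked("/predict")
def predict(text: str) -> str:
    # Placeholder inference standing in for a real model call.
    if not text:
        raise ValueError("empty input")
    return "positive"
```

Error rate then falls out of the two counters: `metrics["requests"]["/predict.errors"] / metrics["requests"]["/predict"]`.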

### Build an End-to-End ML Inference Pipeline Service

MLOps pipeline integration. Scenario score: 76% (context uplift: -11%).

| Criteria | Without context | With context |
| --- | --- | --- |
| FastAPI used | 100% | 100% |
| Pipeline stages separated | 100% | 100% |
| Preprocessing step present | 100% | 100% |
| Postprocessing step present | 100% | 100% |
| Model loaded at startup | 100% | 50% |
| Config from environment | 100% | 100% |
| Async or background processing | 37% | 25% |
| Pipeline documented | 100% | 100% |
| Input validation | 60% | 0% |
| Error propagation | 55% | 66% |
| Dependency manifest | 100% | 100% |

Without context: $0.9536 · 3m 21s · 42 turns · 45 in / 12,619 out tokens

With context: $0.9219 · 3m 19s · 45 turns · 77 in / 12,354 out tokens
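The pipeline criteria above (separated preprocessing, inference, and postprocessing stages, environment-driven config, error propagation) can be sketched in plain Python. The stage functions, the `PIPELINE_MAX_INPUT_LEN` variable, and the keyword "model" are hypothetical placeholders, not the evaluated skill's implementation.

```python
import os


class PipelineError(Exception):
    """Raised when a stage fails; carries the stage name so a caller
    (e.g. a FastAPI handler) can map it to an appropriate HTTP status."""

    def __init__(self, stage: str, message: str):
        super().__init__(f"{stage}: {message}")
        self.stage = stage


# Config from environment with a safe default (hypothetical variable name).
MAX_LEN = int(os.environ.get("PIPELINE_MAX_INPUT_LEN", "512"))


def preprocess(text: str) -> str:
    # Input validation happens here, at the pipeline boundary.
    if not text.strip():
        raise PipelineError("preprocess", "input must be non-empty")
    return text.strip().lower()[:MAX_LEN]


def infer(text: str) -> float:
    # Placeholder scoring standing in for a real model loaded at startup.
    return 0.9 if "good" in text else 0.1


def postprocess(score: float) -> dict:
    return {"label": "positive" if score >= 0.5 else "negative",
            "score": score}


def run_pipeline(text: str) -> dict:
    # Stages stay separate and failures propagate as PipelineError,
    # rather than being swallowed mid-pipeline.
    return postprocess(infer(preprocess(text)))
```

Keeping each stage as its own function is what the "Pipeline stages separated" criterion rewards; the shared `PipelineError` type gives callers a single place to translate stage failures into responses.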

Repository: jeremylongshore/claude-code-plugins-plus-skills
Agent: Claude Code
Model: Claude Sonnet 4.6


## Is this your skill?

If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.