Set up comprehensive observability for Clay integrations with metrics, traces, and alerts. Use when implementing monitoring for Clay operations, setting up dashboards, or configuring alerting for Clay integration health. Trigger with phrases like "clay monitoring", "clay metrics", "clay observability", "monitor clay", "clay alerts", "clay tracing".
Install with the Tessl CLI:

```bash
npx tessl i github:jeremylongshore/claude-code-plugins-plus-skills --skill clay-observability78
```
Set up comprehensive observability for Clay integrations: Prometheus metrics, OpenTelemetry traces, structured logs, and alerting rules.
| Metric | Type | Description |
|---|---|---|
| clay_requests_total | Counter | Total API requests |
| clay_request_duration_seconds | Histogram | Request latency |
| clay_errors_total | Counter | Error count by type |
| clay_rate_limit_remaining | Gauge | Rate limit headroom |
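The prom-client setup below registers the counters and the histogram; the rate-limit gauge is sketched separately after the block.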
```typescript
import { Registry, Counter, Histogram, Gauge } from 'prom-client';
const registry = new Registry();
const requestCounter = new Counter({
name: 'clay_requests_total',
help: 'Total Clay API requests',
labelNames: ['method', 'status'],
registers: [registry],
});
const requestDuration = new Histogram({
name: 'clay_request_duration_seconds',
help: 'Clay request duration',
labelNames: ['method'],
buckets: [0.05, 0.1, 0.25, 0.5, 1, 2.5, 5],
registers: [registry],
});
const errorCounter = new Counter({
name: 'clay_errors_total',
help: 'Clay errors by type',
labelNames: ['error_type'],
registers: [registry],
});

async function instrumentedRequest<T>(
method: string,
operation: () => Promise<T>
): Promise<T> {
const timer = requestDuration.startTimer({ method });
try {
const result = await operation();
requestCounter.inc({ method, status: 'success' });
return result;
} catch (error: any) {
requestCounter.inc({ method, status: 'error' });
errorCounter.inc({ error_type: error.code || 'unknown' });
throw error;
} finally {
timer();
}
}
```
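The `Gauge` import above is otherwise unused; here is a minimal sketch of the rate-limit gauge from the metrics table, assuming Clay returns an `x-ratelimit-remaining` header (the header name is an assumption, not confirmed against Clay's API):

```typescript
const rateLimitGauge = new Gauge({
  name: 'clay_rate_limit_remaining',
  help: 'Remaining Clay API rate limit headroom',
  registers: [registry],
});

// Call after each response. 'x-ratelimit-remaining' is an assumed
// header name -- adjust to whatever the API actually sends.
function recordRateLimit(headers: Record<string, string | undefined>) {
  const remaining = Number(headers['x-ratelimit-remaining']);
  if (!Number.isNaN(remaining)) {
    rateLimitGauge.set(remaining);
  }
}
```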
```typescript
import { trace, SpanStatusCode } from '@opentelemetry/api';

const tracer = trace.getTracer('clay-client');
async function tracedClayCall<T>(
operationName: string,
operation: () => Promise<T>
): Promise<T> {
return tracer.startActiveSpan(`clay.${operationName}`, async (span) => {
try {
const result = await operation();
span.setStatus({ code: SpanStatusCode.OK });
return result;
} catch (error: any) {
span.setStatus({ code: SpanStatusCode.ERROR, message: error.message });
span.recordException(error);
throw error;
} finally {
span.end();
}
});
}
```
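Trace gaps usually mean the context was not propagated to downstream calls (see the troubleshooting table below). A minimal sketch using the OpenTelemetry propagation API, assuming a W3C trace-context propagator is registered (the Node SDK default):

```typescript
import { context, propagation } from '@opentelemetry/api';

// Inject traceparent/tracestate headers into an outbound request so
// downstream services can join the active trace.
function withTraceHeaders(headers: Record<string, string> = {}): Record<string, string> {
  propagation.inject(context.active(), headers);
  return headers;
}
```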
```typescript
import pino from 'pino';

const logger = pino({
name: 'clay',
level: process.env.LOG_LEVEL || 'info',
});
function logClayOperation(
operation: string,
data: Record<string, any>,
duration: number
) {
logger.info({
service: 'clay',
operation,
duration_ms: duration,
...data,
});
  });
}
```
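A usage sketch; `enrichPerson` is a hypothetical Clay call, and logging only the email domain rather than the full address is an illustrative choice to keep PII out of logs:

```typescript
declare function enrichPerson(email: string): Promise<unknown>; // hypothetical

async function enrichWithLogging(email: string) {
  const start = Date.now();
  const result = await enrichPerson(email);
  logClayOperation('enrich_person', { email_domain: email.split('@')[1] }, Date.now() - start);
  return result;
}
```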
```yaml
# clay_alerts.yaml
groups:
- name: clay_alerts
rules:
- alert: ClayHighErrorRate
expr: |
rate(clay_errors_total[5m]) /
rate(clay_requests_total[5m]) > 0.05
for: 5m
labels:
severity: warning
annotations:
summary: "Clay error rate > 5%"
- alert: ClayHighLatency
expr: |
histogram_quantile(0.95,
rate(clay_request_duration_seconds_bucket[5m])
) > 2
for: 5m
labels:
severity: warning
annotations:
summary: "Clay P95 latency > 2s"
- alert: ClayDown
expr: up{job="clay"} == 0
for: 1m
labels:
severity: critical
annotations:
summary: "Clay integration is down"{
"panels": [
{
"title": "Clay Request Rate",
"targets": [{
"expr": "rate(clay_requests_total[5m])"
}]
},
{
"title": "Clay Latency P50/P95/P99",
"targets": [{
"expr": "histogram_quantile(0.5, rate(clay_request_duration_seconds_bucket[5m]))"
}]
}
]
}
```

- Implement Prometheus counters, histograms, and gauges for key operations; a combined wrapper sketch follows this list.
- Integrate OpenTelemetry for end-to-end request tracing.
- Set up structured JSON logging with consistent field names.
- Define Prometheus alerting rules for error rates and latency.
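A sketch tying these together, applying the metrics, tracing, and logging helpers defined above to a single Clay call:

```typescript
// Composes instrumentedRequest, tracedClayCall, and logClayOperation
// from the earlier snippets around one operation.
async function observedClayCall<T>(
  name: string,
  operation: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    return await tracedClayCall(name, () => instrumentedRequest(name, operation));
  } finally {
    logClayOperation(name, {}, Date.now() - start);
  }
}
```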
| Issue | Cause | Solution |
|---|---|---|
| Missing metrics | No instrumentation | Wrap client calls |
| Trace gaps | Missing propagation | Check context headers |
| Alert storms | Wrong thresholds | Tune alert rules |
| High cardinality | Too many labels | Reduce label values |
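Finally, expose the registry so Prometheus can scrape it (an Express-style handler matching the prom-client setup above):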
```typescript
app.get('/metrics', async (req, res) => {
res.set('Content-Type', registry.contentType);
res.send(await registry.metrics());
});
```

For incident response, see the `clay-incident-runbook` skill.