Discover and install skills to enhance your AI agent's capabilities.
| Name | Description | Score | Impact | Security | Version |
|---|---|---|---|---|---|
| regression-scout | Regression hunting after a change: identifies nearby flows, shared code paths, error states, and configuration edges that may have broken even if the main fix works, then runs focused checks such as related tests, adjacent commands, or neighboring API paths. Triggers: "check for regressions", "what else might this have broken", "test the surrounding area". | 96 | 2.72x success vs baseline; 98% avg across 8 eval scenarios | Passed (no known issues) | 0.1.2 |
| drizzle-best-practices | Drizzle ORM patterns: schema definition, type-safe queries, indexes, relations, migrations, transactions, upserts, prepared statements, and connection setup. Use when building or reviewing apps with Drizzle ORM, setting up a new database, writing queries or migrations, or configuring Drizzle for production. | 96 | 1.60x success vs baseline; 98% avg across 5 eval scenarios | Passed (no known issues) | 0.2.2 |
| review-model-performance | Runs task evals across multiple Claude models and compares results side-by-side. Use to understand how a skill performs across models, identify model-specific versus universal gaps, or validate a skill before publishing it to the registry. | 96 | 1.65x success vs baseline; 96% avg across 3 eval scenarios | Passed (no known issues) | 0.1.2 |
| pubnub-telemedicine | Build HIPAA-compliant telemedicine apps with PubNub real-time messaging. | 96 | 1.88x success vs baseline; 100% avg across 15 eval scenarios | Passed (no known issues) | 0.1.4 |
| pubnub-live-auctions | Build real-time auction platforms with PubNub bidding and countdowns. | 96 | 1.35x success vs baseline; 100% avg across 15 eval scenarios | Advisory (review before use) | 0.1.4 |
| pubnub-presence (pubnub/pubnub-presence) | Implement real-time presence tracking with PubNub. | 96 | 1.33x success vs baseline; 100% avg across 20 eval scenarios | Advisory (review before use) | 0.1.4 |
| golang-stretchr-testify (samber/cc-skills-golang) | Comprehensive guide to stretchr/testify for Go testing: covers the assert, require, mock, and suite packages in depth. Use when writing tests with testify, creating mocks, setting up test suites, or choosing between assert and require; covers assertions, mock expectations, argument matchers, call verification, suite lifecycle, and advanced patterns such as Eventually, JSONEq, and custom matchers. Triggers on any Go test file importing testify. | 96 | 1.85x success vs baseline; 91% avg across 3 eval scenarios | Passed (no known issues) | 0.0.1 |
| moltycash (openclaw/skills) | Send USDC to molty users via the A2A protocol. Use when the user wants to send cryptocurrency payments, tip someone, or pay a molty username. | 96 | 1.73x success vs baseline; 99% avg across 3 eval scenarios | Advisory (review before use) | 0.0.1 |
| atlassian (razbakov/skills) | Interact with Atlassian Jira and Confluence from the terminal: search, create, and update Jira tickets; read, search, create, and update Confluence pages. Use when the user mentions Jira tickets, issues, work items, sprints, Confluence pages, documentation, or Atlassian in general. | 96 | 1.93x success vs baseline; 95% avg across 3 eval scenarios | Passed (no known issues) | 0.0.1 |
| commit (getsentry/skills) | Always use when committing code changes; never commit directly without it. Creates commits following Sentry conventions, with proper conventional-commit format and issue references. Triggers on any commit, git commit, save-changes, or commit-message task. | 96 | 1.24x success vs baseline; 97% avg across 6 eval scenarios | Passed (no known issues) | 0.0.2 |
| groq-prod-checklist (jeremylongshore/claude-code-plugins-plus-skills) | Executes the Groq production deployment checklist and go-live procedures. Use when deploying Groq integrations to production, preparing for launch, or implementing go-live procedures. Triggers: "groq production", "deploy groq", "groq go-live", "groq launch checklist". | 96 | 1.25x success vs baseline; 99% avg across 6 eval scenarios | Advisory (review before use) | 0.0.2 |
| spring-boot-engineer (jeffallan/claude-skills) | Generates Spring Boot 3.x configurations, creates REST controllers, implements Spring Security 6 authentication flows, sets up Spring Data JPA repositories, and configures reactive WebFlux endpoints. Use when building Spring Boot 3.x applications, microservices, or reactive Java applications; invoke for Spring Data JPA, Spring Security 6, WebFlux, Spring Cloud integration, Java REST API design, or microservices architecture in Java. | 96 | 1.33x success vs baseline; 99% avg across 6 eval scenarios | Passed (no known issues) | 0.0.2 |
| firecrawl-incident-runbook (jeremylongshore/claude-code-plugins-plus-skills) | Executes Firecrawl incident-response procedures with triage, mitigation, and postmortem. Use when responding to Firecrawl-related outages, investigating scrape/crawl failures, or running post-incident reviews for Firecrawl integration issues. Triggers: "firecrawl incident", "firecrawl outage", "firecrawl down", "firecrawl on-call", "firecrawl emergency", "firecrawl broken". | 96 | 2.14x success vs baseline; 90% avg across 3 eval scenarios | Risky (do not use without reviewing) | 0.0.1 |
| detectability-contract | Creates boundary-point validation contracts, defines invariant-based success criteria, and sets up automated verification probes so reliability workflows trigger on objective evidence rather than intuition. Use when designing robust handoff, memory-persistence, or tool-call reliability workflows; when converting vague reliability goals into concrete, testable checks at each boundary point with explicit failure-class mapping (operational vs. critical); or when verifying a workflow end-to-end using read-back probes and escalation triggers rather than agent confidence. Includes explicit untrusted-content/prompt-injection guardrails for third-party inputs. | 96 | 1.25x success vs baseline; 98% avg across 9 eval scenarios | Passed (no known issues) | 0.1.2 |
| presentation-builder (supercent-io/skills-template) | Build editable presentations with slides-grab. Use when creating slide decks as HTML slides, iterating visually in a browser, and exporting approved decks to PPTX or PDF. | 96 | 1.71x success vs baseline; 98% avg across 3 eval scenarios | Advisory (review before use) | 0.0.2 |
| github-webhooks (hookdeck/webhook-skills) | Receive and verify GitHub webhooks. Use when setting up GitHub webhook handlers, debugging signature verification, or handling repository events such as push, pull_request, issues, or release. | 96 | 1.53x success vs baseline; 98% avg across 3 eval scenarios | Advisory (review before use) | 0.0.1 |
| swiftui-expert-skill (AvdLee/SwiftUI-Agent-Skill) | Write, review, or improve SwiftUI code following best practices for state management, view composition, performance, macOS-specific APIs, and iOS 26+ Liquid Glass adoption. Also triggers whenever an Xcode Instruments `.trace` file is referenced, or when the user asks to record a new trace (attach to a running app, launch one fresh, or capture a manually stopped session with the bundled `record_trace.py`). A target SwiftUI source file is optional; if provided, it grounds recommendations in specific lines, but a trace alone is enough to diagnose hangs, hitches, CPU hotspots, and high-severity SwiftUI updates. | 96 | 1.18x success vs baseline; 96% avg across 12 eval scenarios | Passed (no known issues) | 0.0.4 |
| caveman (juliusbrussee/caveman) | Ultra-compressed communication mode: cuts output token usage by roughly 65-75% by speaking like a caveman while keeping full technical accuracy. Supports intensity levels lite, full (default), ultra, wenyan-lite, wenyan-full, and wenyan-ultra. Use when the user says "caveman mode", "talk like caveman", "use caveman", "less tokens", "be brief", or invokes /caveman; also auto-triggers when token efficiency is requested. | 96 | 1.00x (no change vs baseline); 96% avg across 38 eval scenarios | Passed (no known issues) | 1.0.7 |
| verify (uinaf/verify) | Self-check your own completed change before handing off to `review`; the pre-review sanity pass. Use to run repo guardrails (lint, typecheck, tests, build), exercise the real surface with evidence, and catch obvious self-correctable issues, including challenging the changed code for clarity, deduplication, and maintainability. Produces a `ready for review` / `needs more work` / `blocked` verdict, never a ship decision. If the repo cannot be booted or exercised reliably, hand off to `agent-readiness`; if auditing someone else's diff, branch, or PR, use `review` instead. | 96 | 1.02x success vs baseline; 94% avg across 3 eval scenarios | Passed (no known issues) | 0.1.11 |
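As background for the webhook-oriented entries above, the signature verification that `github-webhooks` handles follows GitHub's documented scheme: each delivery carries an `X-Hub-Signature-256` header containing `sha256=` plus an HMAC-SHA256 of the raw request body, keyed with your webhook secret. A minimal sketch (the secret and payload below are placeholders, not values from any listed skill):

```python
import hashlib
import hmac

def verify_github_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Check a GitHub X-Hub-Signature-256 header against the raw request body."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest is constant-time, avoiding timing side channels.
    return hmac.compare_digest(expected, signature_header)

# Example: a valid header is the HMAC of the exact bytes received.
secret = b"my-webhook-secret"
body = b'{"action": "opened"}'
header = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
print(verify_github_signature(secret, body, header))         # True
print(verify_github_signature(secret, b"tampered", header))  # False
```

Note that verification must run on the raw bytes of the body, before any JSON parsing or re-serialization, or the digest will not match.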
Can't find what you're looking for? Evaluate a missing skill.