Discover and install skills to enhance your AI agent's capabilities.
| Name & Description | Type | Score |
|---|---|---|
| **r-benchmarking** (jjjermiah/dotagents): R benchmarking, profiling, and performance analysis with reproducibility and measurement rigor. Use when timing R code execution, profiling with Rprof or profvis, measuring memory allocations, comparing function performance, or optimizing bottlenecks, e.g. "benchmark R function", "profvis profiling", "microbenchmark comparison", "performance analysis", "memory profiling". | Skills | 92 · Impact: 1.54x agent success vs baseline, 88% average score across 3 eval scenarios · Security: Passed (no known issues) · v0.0.1 |
| **markdown-tools** (fernandezbaptiste/claude-code-skills): Converts documents to markdown with multi-tool orchestration for best quality. Supports Quick Mode (fast, single tool) and Heavy Mode (best quality, multi-tool merge). Use when converting PDF/DOCX/PPTX files to markdown, extracting images from documents, validating conversion quality, or needing LLM-optimized document output. | Skills | 92 · Impact: 7.63x agent success vs baseline, 84% average score across 3 eval scenarios · Security: Passed (no known issues) · v0.0.1 |
| **ads-plan** (AgriciDaniel/claude-ads): Strategic paid-advertising planning with industry-specific templates. Covers platform selection, campaign architecture, budget planning, creative strategy, and a phased implementation roadmap. Use when the user says ad plan, ad strategy, campaign planning, media plan, PPC strategy, or advertising plan. | Skills | 92 · Impact: 1.95x agent success vs baseline, 88% average score across 3 eval scenarios · Security: Advisory (review before use) · v0.0.1 |
| **proof-failure-explainer** (ArabelaTso/Skills-4-SE): Analyzes and explains why Isabelle or Coq proofs fail, identifying root causes such as type mismatches, missing assumptions, incorrect goals, unification failures, or inapplicable tactics. Use when the user encounters proof failures, error messages in formal verification, or stuck proof states, or asks why their Isabelle/Coq proof doesn't work. | Skills | 92 · Impact: 1.01x agent success vs baseline, 88% average score across 3 eval scenarios · Security: Passed (no known issues) · v0.0.1 |
| **markdown-tools** (daymade/claude-code-skills): Converts documents to markdown with multi-tool orchestration for best quality. Supports Quick Mode (fast, single tool) and Heavy Mode (best quality, multi-tool merge). Use when converting PDF/DOCX/PPTX files to markdown, extracting images from documents, validating conversion quality, or needing LLM-optimized document output. | Skills | 92 · Impact: 7.63x agent success vs baseline, 84% average score across 3 eval scenarios · Security: Passed (no known issues) · v0.0.1 |
| **sqlite-python-best-practices**: SQLite best practices for Python: connection setup, WAL mode, the foreign keys PRAGMA (must be set per connection!), context managers for transactions, parameterized queries, row_factory, executemany, connection management, indexes on FKs, and money as INTEGER cents. Use when building or reviewing any Python app with SQLite, when you see sqlite3 imports, or when setting up a new local database in a FastAPI/Flask project. | Skills | 92 · Impact: 1.73x agent success vs baseline, 97% average score across 5 eval scenarios · Security: Passed (no known issues) · v0.2.0 |
| **memory-roundtrip-guard**: Tests memory writes, confirms read-back accuracy, and validates retrieval success to ensure saved information can actually be recovered. Use when you need to verify memory was saved correctly, check whether stored data can be retrieved, confirm a memory entry is discoverable, or escalate when saved information appears lost or corrupted. Covers write confirmation, read-back comparison, retrieval smoke testing, and failure escalation. Includes explicit untrusted-content/prompt-injection guardrails for third-party inputs. | Skills | 92 · Impact: 1.19x agent success vs baseline, 97% average score across 5 eval scenarios · Security: Passed (no known issues) · v0.1.2 |
| **presenterm** (gamussa/presenterm): Create terminal-based presentation slides using presenterm's markdown format. presenterm renders markdown files as slides in the terminal with themes, code highlighting, images, column layouts, speaker notes, and more. Use whenever the user wants a presenterm presentation, terminal slides, or markdown slides for presenterm, or mentions "presenterm" in any context; also trigger on "terminal presentation", "markdown presentation", "slide deck in markdown", or converting content into presenterm format. Even if the user just says "create a presentation" and has used presenterm before, or the context suggests terminal-based slides, use this skill. | Skills | 92 · Impact: 2.15x agent success vs baseline, 97% average score across 5 eval scenarios · Security: Passed (no known issues) · v0.3.1 |
| **pr-comment-resolver**: Resolve pull request feedback by critically assessing each item, fixing source files (not generated files), regenerating derived artifacts, running lint/format, committing, pushing, and replying on the appropriate GitHub thread to confirm resolution. Use when the user asks to address PR feedback, respond to reviewer suggestions, fix issues from code review, or resolve GitHub review comments, PR issue comments, or review summaries. | Skills | 92 · Impact: 1.21x agent success vs baseline, 97% average score across 5 eval scenarios · Security: Advisory (review before use) · v0.4.0 |
| **Intent Integrity Kit (IIKit)**: Closing the intent-to-code chasm: specification-driven development with a BDD verification chain. Contains: **iikit-00-constitution** creates or updates a CONSTITUTION.md defining project governance: coding standards, quality gates, TDD policy, review requirements, and non-negotiable development principles with versioned amendment tracking. **iikit-01-specify** creates a feature specification from a natural-language description: user stories with Given/When/Then scenarios, functional requirements (FR-XXX), success criteria, and a quality checklist; use when starting a new feature, writing a PRD, or capturing acceptance criteria. **iikit-02-plan** generates a technical design document from a feature spec: selects frameworks, defines data models, produces API contracts, and creates a dependency-ordered implementation strategy; use when planning a feature, choosing libraries, defining database schemas, or setting up Tessl tiles for runtime library knowledge. **iikit-03-checklist** generates quality checklists that validate requirements completeness, clarity, and consistency, with scored items linked to specific spec sections (FR-XXX, SC-XXX); use when reviewing a spec for gaps, auditing user stories, or gating before implementation. **iikit-04-testify** generates Gherkin .feature files from requirements before implementation: executable BDD scenarios with traceability tags, assertion-integrity hashes, and locked acceptance criteria; use when writing tests first, doing TDD, or setting up red-green-refactor with hash-verified assertions. **iikit-05-tasks** generates a dependency-ordered task breakdown from plan and specification; use when breaking features into implementable tasks, planning sprints, or creating work items with parallel markers. **iikit-06-analyze** validates cross-artifact consistency: every spec requirement traces to tasks, the plan's tech stack matches task file paths, and constitution principles hold across all artifacts; use for consistency checks, traceability verification, or conflict detection before implementation begins. **iikit-07-implement** executes the implementation plan by coding each task from tasks.md: writes source files, runs tests, verifies assertion integrity, and validates output against constitutional principles; use when building a feature from a tasks.md plan or resuming a partially completed implementation. **iikit-08-taskstoissues** converts tasks from tasks.md into GitHub Issues with labels and dependencies; use when exporting work items to GitHub, setting up project boards, or assigning tasks to team members. **iikit-bugfix** reports a bug against an existing feature: creates a structured bugs.md record, generates fix tasks in tasks.md, and optionally imports from or creates GitHub issues; use when fixing or triaging a defect without running the full specification process. **iikit-clarify** resolves ambiguities in any project artifact: auto-detects the most recent artifact (spec, plan, checklist, testify, tasks, or constitution), asks targeted questions with option tables, and writes answers back into the artifact's Clarifications section; use when requirements, trade-offs, thresholds, test scenarios, task dependencies, or principles are unclear. **iikit-core** initializes an IIKit project, checks feature progress, selects the active feature, and displays the workflow command reference; use for IIKit init, setup, status, feature switching, or help. | Skills, Rules | 92 · Impact: 1.84x agent success vs baseline, 94% average score across 14 eval scenarios · Security: Advisory (review before use) · v2.10.1 |
| **analyzing-dependencies** (jeremylongshore/claude-code-plugins-plus-skills): Analyzes project dependencies for security vulnerabilities, outdated packages, and license-compliance issues using the dependency-checker plugin. Use when you need to check dependencies for vulnerabilities, identify outdated packages that need updates, or ensure license compatibility. Trigger phrases include "check dependencies", "dependency check", "find vulnerabilities", "scan for outdated packages", "/depcheck", and "license compliance". Supports npm, pip, composer, gem, and Go modules projects. | Skills | 92 · Impact: 1.09x agent success vs baseline, 96% average score across 12 eval scenarios · Security: Passed (no known issues) · v0.0.4 |
| **Evidence-first PR review**: Evidence-first pull request review with independent critique, selective challenger review, and human handoff. Contains: **challenger-review** stress-tests the primary review with an additional independent reviewer that generates its own findings, compares reviewer conclusions, and identifies issues the primary reviewer may have missed; use for second-opinion reviews, medium- or high-risk PRs, heavily AI-assisted authoring, low primary-reviewer confidence, or conflicting findings; supports same-model and cross-model configurations for fair comparison. **finding-synthesizer** turns many candidate findings from reviewers and verifiers into a small, decision-useful set: deduplicates, ranks, and suppresses weak findings to produce a prioritized, actionable list with severity ratings and merged confidence scores; use after review passes complete and before human handoff; triggers: "consolidate review results", "merge findings", "deduplicate feedback", "prioritize issues from review", "summarize reviewer output" (the evidence threshold is the filter, not an arbitrary cap). **fresh-eyes-review** provides an independent critique of a PR using a clean reviewer context, identifying bugs, security issues, code-quality problems, API misuse, and missing test coverage; runs after an evidence pack has been built, for green or yellow risk-lane PRs, or as part of a full pipeline for red-lane PRs; produces candidate findings (correctness, security, architecture) for downstream synthesis, not final verdicts; operates as a critic, not a co-author; common triggers: "review this PR", "code review feedback", "fresh review", "independent review". **human-review-handoff** generates a structured, human-readable reviewer packet summarizing what changed in a PR, why it matters, what was verified, and where human attention is most needed; quick approvals (low-risk PRs) can be assessed in under 30 seconds and detailed reviews (high-risk PRs) in under 2 minutes; outputs a markdown packet with risk rating, verification status, ranked findings, unresolved questions, and a recommended review focus, making human review faster without replacing human judgment. **pr-evidence-builder** builds a compact, trustworthy evidence pack before deeper review starts and is always the first step: collects PR context, runs deterministic verifiers, classifies risk, maps hotspots, and checks for missing artifacts; all downstream review skills consume its output. **review-retrospective** evaluates which review comments actually produced changes after a PR is merged or closed by passively collecting outcome data from the GitHub API and git history (zero developer friction); answers questions like "how did PR #6 go?" or "which review comments were accepted?"; produces a per-finding outcome record (accepted / rejected / ignored / superseded), merge time delta, escaped-defect count, and AI-authorship correlation for each PR. | Skills, Rules | 92 · Impact: 1.43x agent success vs baseline, 93% average score across 43 eval scenarios · Security: Passed (no known issues) · v0.1.8 |
| **joelclaw-system-check** (joelhooks/joelclaw): Runs a comprehensive health check of the joelclaw system: k8s cluster, worker, Inngest, Redis, Typesense/OTEL, tests, TypeScript, repo sync, memory pipeline, pi-tools, git config, active loops, disk, and stale tests. Outputs a 1-10 score with a per-component breakdown. Use on "system health", "health check", "is everything working", "system status", "how's the system", "check everything", or at session start to orient. | Skills | 92 · Impact: 7.07x agent success vs baseline, 99% average score across 3 eval scenarios · Security: Passed (no known issues) · v0.0.1 |
| **groq-performance-tuning** (jeremylongshore/claude-code-plugins-plus-skills): Optimizes Groq API performance with model selection, caching, streaming, and parallel requests. Use when experiencing slow responses, implementing caching strategies, or optimizing request throughput for Groq integrations. Trigger with phrases like "groq performance", "optimize groq", "groq latency", "groq caching", "groq slow", "groq speed". | Skills | 92 · Impact: 1.56x agent success vs baseline, 91% average score across 3 eval scenarios · Security: Passed (no known issues) · v0.0.1 |
| **api-rate-limiting** (secondsky/claude-skills): Implements API rate limiting using token-bucket, sliding-window, and Redis-based algorithms to protect against abuse. Use when securing public APIs, implementing tiered access, or preventing denial-of-service attacks. | Skills | 92 · Impact: 1.15x agent success vs baseline, 95% average score across 3 eval scenarios · Security: Passed (no known issues) · v0.0.1 |
| **performance-testing** (jeremylongshore/claude-code-plugins-plus-skills): Designs, executes, and analyzes performance tests using the performance-test-suite plugin. Activated for load, stress, spike, or endurance testing, and when discussing performance metrics such as response time, throughput, and error rates. Identifies bottlenecks related to CPU, memory, database, or network issues; the plugin provides comprehensive reporting, including percentiles, graphs, and recommendations. | Skills | 92 · Impact: 1.00x (no change in agent success vs baseline), 100% average score across 9 eval scenarios · Security: Passed (no known issues) · v0.0.3 |
| **e2e** (NeverSight/skills_feed): Runs e2e tests, fixes flaky and outdated tests, and identifies bugs against the spec. Use when running e2e tests, debugging test failures, or fixing flaky tests. Never changes source-code logic or APIs without spec backing. | Skills | 92 · Impact: 1.22x agent success vs baseline, 75% average score across 3 eval scenarios · Security: Passed (no known issues) · v0.0.1 |
| **flink-sql** (gamussa/flink-sql): Apache Flink SQL, Table API, and UDF development for both OSS Flink and Confluent Cloud. Use when: (1) writing Flink SQL queries (windows, joins, aggregations, MATCH_RECOGNIZE), (2) building Table API pipelines in Java or Python, (3) creating UDFs (scalar, table functions) for Flink, (4) deploying Flink jobs to Confluent Cloud, (5) converting between DataStream and Table API, (6) troubleshooting Flink SQL errors. Covers windowing, event-time processing, watermarks, state management, and Confluent-specific patterns. | Skills | 92 · Impact: 1.22x agent success vs baseline, 98% average score across 5 eval scenarios · Security: Advisory (review before use) · v1.0.3 |
| **api-designer** (Jeffallan/claude-skills): Use when designing REST or GraphQL APIs, creating OpenAPI specifications, or planning API architecture. Invoke for resource modeling, versioning strategies, pagination patterns, and error-handling standards. | Skills | 92 · Impact: 1.00x (no change in agent success vs baseline), 98% average score across 6 eval scenarios · Security: Passed (no known issues) · v0.0.2 |
| **gamma-ci-integration** (jeremylongshore/claude-code-plugins-plus-skills): Configures Gamma CI/CD integration with GitHub Actions and testing. Use when setting up automated testing, configuring CI pipelines, or integrating Gamma tests into your build process. Trigger with phrases like "gamma CI", "gamma GitHub Actions", "gamma automated tests", "CI gamma", "gamma pipeline". | Skills | 92 · Impact: 1.58x agent success vs baseline, 100% average score across 6 eval scenarios · Security: Passed (no known issues) · v0.0.2 |
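To make the listings above concrete, the sqlite-python-best-practices entry enumerates specific habits: per-connection PRAGMAs, context-manager transactions, parameterized queries, `row_factory`, `executemany`, and money stored as integer cents. The following is a minimal sketch of those habits using only Python's standard `sqlite3` module; it is not taken from the skill itself, and the `connect` helper and `orders` schema are hypothetical examples.

```python
import sqlite3

def connect(path=":memory:"):
    """Open a connection with per-connection PRAGMAs applied.

    PRAGMAs such as foreign_keys are connection-scoped in SQLite,
    so they must be re-applied every time a connection is opened.
    """
    conn = sqlite3.connect(path)
    conn.row_factory = sqlite3.Row             # access columns by name
    conn.execute("PRAGMA foreign_keys = ON")   # enforcement is OFF by default
    conn.execute("PRAGMA journal_mode = WAL")  # better concurrency for file DBs
    return conn

conn = connect()
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    item TEXT NOT NULL,
    price_cents INTEGER NOT NULL  -- money as INTEGER cents, never floats
)""")

# Using the connection as a context manager wraps a transaction:
# it commits on success and rolls back if the block raises.
with conn:
    conn.executemany(
        "INSERT INTO orders (item, price_cents) VALUES (?, ?)",  # parameterized
        [("widget", 1999), ("gadget", 2450)],
    )

row = conn.execute(
    "SELECT item, price_cents FROM orders WHERE item = ?", ("widget",)
).fetchone()
print(row["item"], row["price_cents"])  # → widget 1999
```

Parameterized `?` placeholders keep user input out of the SQL text, and `sqlite3.Row` gives dict-style access without changing query results.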
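The api-rate-limiting entry names the token-bucket algorithm. As an illustration of the idea only (this class is a hypothetical in-process sketch, not the skill's code; a production deployment would typically keep the bucket state in Redis, as the entry suggests):

```python
import time

class TokenBucket:
    """Token bucket: holds at most `capacity` tokens, refilled at `rate`/sec.

    Each request consumes one token. Requests are rejected when the bucket
    is empty, so short bursts up to `capacity` pass but sustained traffic
    is throttled to `rate` requests per second.
    """
    def __init__(self, capacity, rate, clock=time.monotonic):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)  # start full
        self.clock = clock
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, rate=1)  # burst of 3, then 1 request/sec
results = [bucket.allow() for _ in range(5)]
print(results)  # → [True, True, True, False, False]
```

`capacity` bounds the burst size while `rate` sets the steady-state throughput; the injectable `clock` makes the refill logic testable with a fake timer.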
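Several entries (r-benchmarking, performance-testing) center on measuring before optimizing. The r-benchmarking skill targets R, but the pattern of comparing implementations under repeated timing carries over; here is a hedged Python sketch using only the standard-library `timeit` module, with two arbitrary string-building functions as the example workload:

```python
import timeit

def concat_join(n=1000):
    # Build the string in one pass with join.
    return "".join(str(i) for i in range(n))

def concat_plus(n=1000):
    # Build the string by repeated concatenation.
    s = ""
    for i in range(n):
        s += str(i)
    return s

# repeat() produces several independent measurements; the minimum is the
# conventional summary, since it best approximates the noise-free cost.
for fn in (concat_join, concat_plus):
    best = min(timeit.repeat(fn, number=200, repeat=3))
    print(f"{fn.__name__}: {best:.4f}s for 200 runs")
```

Timing both candidates with the same `number` and `repeat` keeps the comparison fair; absolute numbers vary by machine, so only the ratio between the two is meaningful.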
Can't find what you're looking for? Evaluate a missing skill.