Discover rules to enhance your AI agent's capabilities.
| Name | Contains | Score |
|---|---|---|
| jbaruch/pidge v0.1.1: Context tile for the pidge notification library v3 (async API with NotificationClient, Message, and the dispatch pattern). Contains: pidge-integration. Configures NotificationClient handlers, implements async dispatch workflows, and handles DispatchError failures for the pidge v3 notification library in async Python services. Use when working with pidge, pidge v3, pidge notifications, NotificationClient, async notifications, or pidge integration, including setting up API keys, dispatching messages, and handling delivery errors. | Skills, Docs, Rules | 99; impact 100% (2.85x agent success vs. baseline, 3 eval scenarios); security: passed, no known issues; v0.1.1 |
| Rego is the declarative policy language used by Open Policy Agent (OPA). This tile covers writing and testing Rego policies for Kubernetes admission control, Terraform and infrastructure-as-code plan validation, Docker container authorization, HTTP API authorization, RBAC (role-based access control), data filtering, metadata annotations with opa inspect, and OPA policy testing with opa test. | Docs, Rules | 99; impact 99% (1.19x agent success vs. baseline, 31 eval scenarios); v1.17.0 |
| jbaruch/speaker-toolkit v0.10.1: Two-skill presentation system: analyze your speaking style into a rhetoric knowledge vault, then create new presentations that match your documented patterns. Includes an 88-entry Presentation Patterns taxonomy for scoring, brainstorming, and go-live preparation. Contains: presentation-creator. Creates new presentations from scratch using the speaker's documented rhetoric patterns as a constitutional style guide. Follows an interactive, spec-driven process: distill intent from the user's prompt, jointly select rhetorical instruments, architect the talk structure, develop content with speaker notes, and iterate with the author. Use whenever the user wants to create a new presentation, build a talk, write a conference submission, design a slide deck, prepare for a speaking engagement, or mentions "presentation" or "talk" in the context of content creation; also trigger when the user describes a topic they want to present on, asks to adapt an existing talk for a new audience, or wants to develop a CFP abstract. rhetoric-knowledge-vault. Parses presentation talks to catalog specific rhetoric patterns: opening hooks, humor style, pacing, transitions, audience interaction, slide design, and verbal signatures. Downloads YouTube transcripts and analyzes slides (from PPTX files or Google Drive PDFs), examining how the speaker presents. After enough talks are analyzed, it generates a structured speaker profile and can create a personalized presentation-creator skill tailored to the speaker's style. Triggers: "parse my talks", "run the rhetoric analyzer", "analyze my presentation style", "how many talks have been processed", "update the rhetoric knowledge base", "check rhetoric vault status", "process remaining talks for style patterns", "generate my speaker profile", "update speaker profile". | Skills, Rules | 99; impact pending (0 eval scenarios); security advisory: review before use; v0.10.1 |
| Comprehensive documentation and best practices for building Terraform providers with terraform-plugin-framework (v1.17.0). Covers providers, resources, schemas, types, validators, testing, and common pitfalls. | Docs, Rules | 97; impact 97% (1.04x agent success vs. baseline, 5 eval scenarios); v0.1.6 |
| Spec-driven workflow covering requirement gathering, spec authoring, implementation review, and verification, with skills, rules, and evaluation scenarios. Contains: requirement-gathering. Interview stakeholders to clarify ambiguous or underspecified requirements before writing code. Use when receiving a new task, feature request, or bug report that lacks clear acceptance criteria; produces clarified requirements ready for spec authoring. Common triggers: "new feature", "build me", "implement", "add support for", or any task where requirements are vague or incomplete. spec-verification. Verify that implementation and tests remain synchronized with specs after code changes. Use when code has been generated or modified from specs, after implementation is complete, or when reviewing a PR that touches spec-covered code; reports mismatched targets, broken test links, and undocumented behavioral changes. Common triggers: "verify the spec", "check spec alignment", "are specs up to date", or after completing implementation work. spec-writer. Create or update .spec.md files from clarified requirements. Use when requirements have been gathered and confirmed and specs need to be written or updated before implementation begins; produces well-structured spec files with frontmatter, requirements, and test links. Common triggers: "write the spec", "update the spec", "create a spec for", or after requirement-gathering completes. work-review. Review completed implementation against approved specs to ensure all requirements are satisfied. Use after finishing implementation work, before marking a task as done, or when a stakeholder asks to verify deliverables against requirements; produces a review summary with pass/fail per requirement. Common triggers: "review my work", "check against spec", "did I miss anything", "is implementation complete". | Skills, Docs, Rules | 96; impact 98% (1.19x agent success vs. baseline, 9 eval scenarios); security: passed, no known issues; v2.0.1 |
| Closing the intent-to-code chasm: specification-driven development with a BDD verification chain. Contains: iikit-00-constitution. Create or update a CONSTITUTION.md that defines project governance; establishes coding standards, quality gates, TDD policy, review requirements, and non-negotiable development principles with versioned amendment tracking. Use when defining project rules, setting up coding standards, establishing quality gates, configuring TDD requirements, or creating non-negotiable development principles. iikit-01-specify. Create a feature specification from a natural-language description; generates user stories with Given/When/Then scenarios, functional requirements (FR-XXX), success criteria, and a quality checklist. Use when starting a new feature, writing a PRD, defining user stories, capturing acceptance criteria, or documenting requirements for a product idea. iikit-02-plan. Generate a technical design document from a feature spec; selects frameworks, defines data models, produces API contracts, and creates a dependency-ordered implementation strategy. Use when planning how to build a feature, writing a technical design doc, choosing libraries, defining database schemas, or setting up Tessl tiles for runtime library knowledge. iikit-03-checklist. Generate quality checklists that validate requirements completeness, clarity, and consistency; produces scored checklist items linked to specific spec sections (FR-XXX, SC-XXX). Use when reviewing a spec for gaps, doing a requirements review, verifying PRD quality, auditing user stories and acceptance criteria, or gating before implementation. iikit-04-testify. Generate Gherkin .feature files from requirements before implementation; produces executable BDD scenarios with traceability tags, computes assertion-integrity hashes, and locks acceptance criteria for test-driven development. Use when writing tests first, doing TDD, creating test cases from a spec, locking acceptance criteria, or setting up red-green-refactor with hash-verified assertions. iikit-05-tasks. Generate a dependency-ordered task breakdown from the plan and specification. Use when breaking features into implementable tasks, planning sprints, or creating work items with parallel markers. iikit-06-analyze. Validate cross-artifact consistency; checks that every spec requirement traces to tasks, the plan's tech stack matches task file paths, and constitution principles are satisfied across all artifacts. Use when running a consistency check, verifying requirements traceability, detecting conflicts between design docs, or auditing alignment before implementation begins. iikit-07-implement. Execute the implementation plan by coding each task from tasks.md; writes source files, runs tests, verifies assertion integrity, and validates output against constitutional principles. Use when ready to build the feature, start coding, develop from the task list, or resume a partially completed implementation. iikit-08-taskstoissues. Convert tasks from tasks.md into GitHub Issues with labels and dependencies. Use when exporting work items to GitHub, setting up project boards, or assigning tasks to team members. iikit-bugfix. Report a bug against an existing feature; creates a structured bugs.md record, generates fix tasks in tasks.md, and optionally imports from or creates GitHub issues. Use when fixing a bug, reporting a defect, importing a GitHub issue into the workflow, or triaging an error without running the full specification process. iikit-clarify. Resolve ambiguities in any project artifact; auto-detects the most recent artifact (spec, plan, checklist, testify, tasks, or constitution), asks targeted questions with option tables, and writes answers back into the artifact's Clarifications section. Use when requirements are unclear, a plan has trade-off gaps, checklist thresholds feel wrong, test scenarios are imprecise, task dependencies seem off, or constitution principles are vague. iikit-core. Initialize an IIKit project, check feature progress, select the active feature, and display the workflow command reference. Use when starting a new project, running init, checking status, switching between features, or looking up available commands and phases. | Skills, Rules | 94; impact pending (0 eval scenarios); security advisory: review before use; v2.7.16 |
| maria/fastapi v0.1.0: FastAPI framework with Pydantic v2 patterns, PII sanitisation, and practical workflows. Contains: run-check-server. Start a FastAPI dev server, verify docs and the OpenAPI schema, test endpoints, and run pytest; use when running, checking, or debugging a FastAPI application. scaffold-project. Scaffold a new FastAPI project with an opinionated directory layout, pydantic-settings config, and starter files; use when creating a new FastAPI application from scratch. | Skills, Docs, Rules | 94; impact pending (0 eval scenarios); security: passed, no known issues; v0.1.0 |
| Write professional, persuasive complaint letters to US airlines emphasizing loyalty status, DOT regulations, and airline commitments. Contains: frequent-flyer-advocate. Writes professional, persuasive complaint letters to US airlines on behalf of passengers, emphasizing loyalty status, DOT regulations, and the airline's own published commitments. Use when the user wants to complain to an airline, request compensation, write a complaint letter, dispute an airline's response, escalate an airline issue, file a DOT complaint, or mentions a bad flight experience they want to act on; also trigger when the user describes a flight delay, cancellation, lost or damaged baggage, denied boarding, a downgrade, poor service, broken amenities, a tarmac delay, a missed connection, or any airline service failure they want addressed. | Skills, Rules | 93; impact 93% (1.38x agent success vs. baseline, 10 eval scenarios); security advisory: review before use; v0.9.1 |
| Automatically monitor GitHub Actions workflows after git push operations; tracks workflow progress and reports pass/fail results. Contains: github-action-monitor. Monitors GitHub Actions workflow runs and reports pass/fail results. Use when git push has been executed, code has been pushed to a remote, or the user asks about CI status. | Skills, Rules | 93; impact pending (0 eval scenarios); security advisory: review before use; v0.5.0 |
| Rules and skills that teach AI agents how to contribute to open source projects without being the villain. Contains: preflight. Runs a structured nine-check pre-submission checklist against an open-source contribution before the contributor opens a pull request. Use when the user has written code for an open-source project and needs to prepare a PR, submit a contribution, or verify readiness; triggers on "submit a PR", "open a pull request", "prepare the contribution", "ready to merge", "check my pull request". Important: run this AFTER code is written but BEFORE submission. Checks: AI policy compliance and disclosure (including voluntary disclosure when no policy exists), diff size and focus, PR template, code style, commit conventions, tests, legal requirements (DCO/CLA), agent artifacts, changelog, and human ownership verification. propose. Analyzes project contribution guidelines, identifies the right venue (pull request, issue, discussion, RFC/KEP/DEP), checks issue metadata (claims, assignments, labels), searches for prior rejected attempts, and drafts proposals formatted to project templates. Use when the user wants to contribute to an open-source project, fix a bug, submit a PR, improve or refactor code, asks where to submit a change, or needs help choosing between a PR, an issue, a discussion, and an RFC; triggers on "fix this issue", "submit a PR", "refactor this", "improve this code", "open a pull request". Important: run this AFTER recon and BEFORE writing code, to verify the right venue and check for prior attempts. recon. Analyzes an open-source project's contribution norms, AI policy, conventions, and recent PR history before writing any code. Use when the user wants to contribute to an open-source or GitHub project, fix a bug, submit a pull request, open a PR, make a contribution, or asks about contribution guidelines; triggers on phrases like "fix this bug", "submit a PR", "contribute a fix", "open a pull request", "help me contribute", "how do I contribute", "what are the rules for this OSS project". Important: run this BEFORE writing any code for an open source project. | Skills, Rules | 93; impact 95% (4.13x agent success vs. baseline, 7 eval scenarios); security advisory: review before use; v1.0.1 |
| Syncs TripIt travel itineraries to Reclaim.ai timezone segments and Google Calendar OOO blocks. Contains: onboard-tripit-reclaim. Guided setup for TripIt→Reclaim travel-timezone sync credentials; validates environment variables and runs a dry run to confirm everything works. Use when the user wants to set up, configure, or troubleshoot the TripIt-to-Reclaim calendar sync, API keys, or integration connection. sync-tripit. Run the TripIt→Reclaim travel-timezone sync and interpret its JSON output; handles no-change silence, change summaries, overlap warnings, and errors. Use when the user asks to sync time zones, update Reclaim from TripIt, check travel-schedule sync, or run the calendar timezone update. | Skills, Docs, Rules | 91; impact 80% (1.31x agent success vs. baseline, 4 eval scenarios); security advisory: review before use; v0.1.2 |
| popey/java-quality-gate v0.1.0: Run quality checks on Java code before committing; validates against best practices, enterprise standards, and common issues. Contains: java-quality-gate. Runs quality checks on Java code before committing, validating against best practices, enterprise standards, and common issues. MUST be run before any git commit operation that includes Java files. | Skills, Rules | 81; impact pending (0 eval scenarios); security: passed, no known issues; v0.1.0 |
| tessl-labs/skill-discovery v0.27.0: Discover and apply best-practice skills automatically. Gap analysis scans the codebase, skill-search fills gaps from the registry, skill-classifier separates proactive from reactive skills, quality-standards generates CLAUDE.md guidance, self-review compares code against checklists, and verification-strategy sets up test/lint/typecheck feedback loops. Contains: gap-analysis. Scan a project for practice gaps: missing domains, weak implementations, and new technology areas. Use when starting a new project, joining an existing codebase, beginning a major feature (3+ new files), or when the PreToolUse gate hook blocks a write; produces a structured gap report that drives skill-search and downstream skills. quality-standards. Generate project-level quality standards in CLAUDE.md from installed proactive skills; the quality block is the single biggest lever for code quality (4x improvement in experiments). Use after skill-classifier runs, when the skill set changes, or when the user asks "update my quality standards" or "what standards should I follow". self-review. Compare your code against installed proactive skill checklists and fix gaps. Use after committing, after completing a feature, before submitting a PR, or whenever you want to verify your code meets quality standards; can be triggered by a post-commit hook, a periodic check, or a direct request like "review your code", "check quality", "did you follow the skills". skill-classifier. Classify installed skills as proactive (apply to all code, reviewed at every commit) or reactive (domain-specific, used only when working in that domain). Use after installing new skills via skill-search; the classification drives which skills quality-standards includes in CLAUDE.md and which skills self-review checks against. skill-discovery. Orchestrates practice-gap discovery and quality improvement, coordinating the gap-analysis, skill-search, skill-classifier, quality-standards, and self-review skills. Use when starting a new project, joining an existing codebase, beginning a major feature, or when the user asks "what skills do I need", "find best practices", "audit this project". skill-search. Search the Tessl registry for skills that fill practice gaps identified by gap-analysis, using a two-pass strategy: first find language-agnostic best-practice skills, then find technology-specific skills for the project's stack. Use after gap-analysis identifies gaps, when entering a new technology domain, or when the user asks "find skills for X", "what best practices exist for Y". verification-strategy. Set up self-verification before building features: a test runner, type checking, linting, and feedback loops that let the agent confirm its own work. Use when starting a new project, setting up a codebase for the first time, or when the user asks "how will you test this", "set up testing", "make sure this works", or "verify your work"; run this BEFORE writing feature code. | Skills, Rules | 80; impact pending (0 eval scenarios); security advisory: review before use; v0.27.0 |
| dld-kit/dld v0.6.0: Decision-Linked Development (DLD), a workflow for recording, linking, and maintaining development decisions alongside code. Skills for planning, recording, implementing, auditing, and documenting decisions via `@decision` annotations. Contains: dld-audit-auto. Autonomous audit: detects drift, fixes issues, and opens a PR; designed for scheduled/CI execution without human interaction. dld-audit. Scan for drift between decisions and code; finds orphaned annotations, stale references, and undocumented changes. dld-common. Shared utility scripts for DLD skills; not intended for direct invocation, used internally by other DLD skills. dld-decide. Record a single development decision as a markdown file with YAML frontmatter; collects context, rationale, and code references interactively. dld-implement. Implement one or more proposed decisions: makes code changes, adds `@decision` annotations, and updates decision status. dld-init. Bootstrap DLD in a repository: creates dld.config.yaml, the decisions/ directory, and INDEX.md; run once per project. dld-lookup. Look up decisions by ID, tag, code path, or keyword; use proactively whenever you encounter `@decision` annotations in code you are about to read or modify. dld-plan. Break down a feature into multiple decisions interactively, creating a set of decision records grouped by a shared tag. dld-retrofit. Bootstrap DLD decisions from an existing codebase: analyzes code to infer rationale, generates decision records, and adds `@decision` annotations. dld-snapshot. Generate SNAPSHOT.md (detailed decision reference) and OVERVIEW.md (narrative synthesis with diagrams) from the decision log. dld-status. Quick overview of the decision-log state: counts by status, recent decisions, and run-tracking info. | Skills, Rules | 75; impact pending (0 eval scenarios); security advisory: review before use; v0.6.0 |
| hbarve1/tessl-llvm v0.1.0: Designing a new programming language. Contains: tessl-llvm. Designing a new programming language. | Skills, Docs, Rules | 23; impact pending (0 eval scenarios); security: passed, no known issues; v0.1.0 |
| Quickstart example: Express.js API coding standards (rules) | Rules | — |
| tessl/cli-setup-beta v0.71.0: Tessl CLI MCP tool usage guidelines | Rules | — |
| tessl/cli-setup v0.71.0: Tessl CLI MCP tool usage guidelines | Rules | — |
Can't find what you're looking for? Evaluate a missing skill, or if you're looking for agent context for an open source dependency, request a tile.
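The pidge entry above describes an async notification API built around NotificationClient, Message, a dispatch call, and DispatchError handling. As a rough illustration of that pattern, here is a minimal, self-contained Python sketch. The class and method names are taken from the listing, but every implementation detail below (constructor arguments, return values, error conditions) is a hypothetical stand-in, not the real pidge v3 API:

```python
import asyncio


class DispatchError(Exception):
    """Hypothetical error raised when a notification cannot be delivered."""


class Message:
    """Stand-in for the Message type named in the pidge listing."""

    def __init__(self, recipient: str, body: str):
        self.recipient = recipient
        self.body = body


class NotificationClient:
    """Stand-in for the NotificationClient named in the pidge listing."""

    def __init__(self, api_key: str):
        self.api_key = api_key

    async def dispatch(self, message: Message) -> str:
        # A real client would perform network I/O here; this stub only
        # validates input and simulates an async round trip.
        if not message.recipient:
            raise DispatchError("missing recipient")
        await asyncio.sleep(0)
        return "delivered"


async def main() -> str:
    client = NotificationClient(api_key="demo-key")
    try:
        return await client.dispatch(Message("ops@example.com", "deploy finished"))
    except DispatchError:
        return "failed"


print(asyncio.run(main()))  # → delivered
```

The error-handling shape (catch DispatchError around each dispatch call) mirrors the workflow the pidge-integration skill says it configures; consult the tile itself for the actual API surface.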