github.com/NeoLabHQ/context-engineering-kit
All skills were reviewed at version `dedca19`. Impact evaluation is pending for every skill: no eval scenarios have been run yet.

Security statuses: **Passed** (no known issues), **Advisory** (suggest reviewing before use), **Risky** (do not use without reviewing), **Critical** (do not install without reviewing).

| Skill | Description | Added | Security |
|---|---|---|---|
| create-command | Interactive assistant for creating new Claude commands with proper structure, patterns, and MCP tool integration | 42 | Passed |
| critique | Comprehensive multi-perspective review using specialized judges with debate and consensus building | 30 | Passed |
| actualize | Reconcile the project's FPF state with recent repository changes | 43 | Passed |
| plan-do-check-act | Iterative PDCA cycle for systematic experimentation and continuous improvement | 53 | Passed |
| review-pr | Comprehensive pull request review using specialized agents | 50 | Advisory |
| tree-of-thoughts | Execute tasks through systematic exploration, pruning, and expansion using the Tree of Thoughts methodology with meta-judge evaluation specifications and multi-agent evaluation | 34 | Passed |
| create-skill | Guide for creating effective skills. Use when creating a new skill (or updating an existing one) that extends Claude's capabilities with specialized knowledge, workflows, or tool integrations, and when verifying skills work before deployment. Applies TDD to process documentation by testing with subagents before writing, iterating until bulletproof against rationalization | 69 | Passed |
| launch-sub-agent | Launch an intelligent sub-agent with automatic model selection based on task complexity, specialized agent matching, Zero-shot CoT reasoning, and mandatory self-critique verification | 42 | Passed |
| do-and-judge | Execute a task with sub-agent implementation and LLM-as-a-judge verification with an automatic retry loop | 47 | Critical |
| create-hook | Create and configure git hooks with intelligent project analysis, suggestions, and automated testing | 47 | Advisory |
| judge | Launch a meta-judge and then a judge sub-agent to evaluate results produced in the current conversation | 45 | Risky |
| create-rule | Use when you find a gap or repetitive issue produced by you or the implementation agent. Essentially, use it each time you say "You're absolutely right, I should have done it differently": create a rule for the issue so it does not appear again | 48 | Passed |
| kaizen | Use for code implementation and refactoring, architecting or designing systems, process and workflow improvements, and error handling and validation. Provides techniques to avoid over-engineering and apply iterative improvements | 47 | Passed |
| do-in-parallel | Launch multiple sub-agents in parallel to execute tasks across files or targets with intelligent model selection, quality-focused prompting, and meta-judge → LLM-as-a-judge verification | 47 | Passed |
| load-issues | Load all open issues from GitHub and save them as markdown files | 67 | Passed |
| create-ideas | Generate ideas in one shot using creative sampling | 42 | Passed |
| reset | Reset the FPF reasoning cycle to start fresh | 49 | Passed |
| thought-based-reasoning | Use when tackling complex reasoning tasks requiring step-by-step logic, multi-step arithmetic, commonsense reasoning, symbolic manipulation, or problems where simple prompting fails. Provides a comprehensive guide to Chain-of-Thought and related prompting techniques (Zero-shot CoT, Self-Consistency, Tree of Thoughts, Least-to-Most, ReAct, PAL, Reflexion) with templates, decision matrices, and research-backed patterns | 73 | Advisory |
| create-workflow-command | Create a workflow command that orchestrates multi-step execution through sub-agents with file-based task prompts | 47 | Passed |
| setup-arxiv-mcp | Guide for setting up an arXiv paper-search MCP server using Docker MCP | 52 | Advisory |
| subagent-driven-development | Use when executing implementation plans with independent tasks in the current session, or when facing 3+ independent issues that can be investigated without shared state or dependencies. Dispatches a fresh subagent for each task with code review between tasks, enabling fast iteration with quality gates | 75 | Passed |
| analyze-issue | Analyze a GitHub issue and create a detailed technical specification | 51 | Advisory |
| judge-with-debate | Evaluate solutions through multi-round debate between independent judges until consensus | 48 | Risky |
| do-in-steps | Execute complex tasks through sequential sub-agent orchestration with intelligent model selection and meta-judge → LLM-as-a-judge verification | 37 | Passed |
| setup-context7-mcp | Guide for setting up the Context7 MCP server to load documentation for specific technologies | 45 | Advisory |
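Several skills above (do-and-judge, do-in-parallel, do-in-steps, judge) share the same verify-and-retry pattern: a worker produces a result, an LLM-as-a-judge scores it, and the task is retried with feedback until the score passes. A minimal sketch of that loop, assuming stand-in functions rather than the kit's actual agents (`run_with_judge`, `toy_agent`, and `toy_judge` are illustrative names, not part of the repository):

```python
# Sketch of an LLM-as-a-judge retry loop. In the real skills the agent and
# judge would be sub-agent invocations; here they are toy stand-ins so the
# control flow is runnable on its own.

def run_with_judge(task, agent, judge, threshold=0.8, max_retries=3):
    """Run `agent` on `task`, retrying with judge feedback until the
    judge's score meets `threshold` or retries are exhausted."""
    feedback = None
    for attempt in range(1, max_retries + 1):
        result = agent(task, feedback)          # produce (or revise) a result
        score, feedback = judge(task, result)   # judge scores and gives feedback
        if score >= threshold:
            break                               # accepted: stop retrying
    return result, score, attempt

def toy_agent(task, feedback):
    # A stand-in agent that improves once it receives feedback.
    return f"{task} (revised)" if feedback else f"{task} (draft)"

def toy_judge(task, result):
    # A stand-in judge: drafts fail with feedback, revisions pass.
    return (0.9, None) if "revised" in result else (0.5, "needs revision")

result, score, attempts = run_with_judge("summarize README", toy_agent, toy_judge)
print(result, score, attempts)  # → summarize README (revised) 0.9 2
```

The key design point the "Critical"-rated do-and-judge skill adds on top of this loop is that the retry prompt carries the judge's feedback forward, so each attempt is a revision rather than a blind re-roll.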