Add a "code mode" tool to an existing MCP server so LLMs can write small processing scripts that run against large API responses in a sandboxed runtime; only the script's compact output enters the LLM context window.

Use this skill whenever someone wants to add code mode, context reduction, script execution, sandbox execution, or LLM-generated-code processing to an MCP server. Also trigger when users mention reducing token usage, shrinking API responses, running user-provided code safely, or adding a code execution tool to their MCP server, in any language (TypeScript, Python, Go, Rust, etc.).
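The pattern described above can be sketched in a few lines. This is a minimal Python illustration, not the skill's actual implementation: the function name `run_code_mode` and the `result` convention are assumptions, while `DATA` is the variable name the skill's workflow uses for the raw API response. A real server would run the script in a proper sandbox (subprocess, container, or WASM), not a restricted `exec`.

```python
import json

def run_code_mode(script: str, api_response: dict, output_limit: int = 4096) -> str:
    # Expose the raw API response as DATA inside a restricted namespace;
    # only the script's compact `result` value is returned to the model.
    allowed_builtins = {
        "len": len, "sum": sum, "min": min, "max": max, "sorted": sorted,
        "range": range, "str": str, "int": int, "float": float,
        "list": list, "dict": dict, "set": set,
    }
    sandbox = {"__builtins__": allowed_builtins, "DATA": api_response, "result": None}
    # NOTE: exec with restricted builtins is NOT a real security boundary;
    # production code should isolate execution in a separate runtime.
    exec(script, sandbox)
    return json.dumps(sandbox["result"])[:output_limit]  # cap what enters the context

# A large fake API response, and a small LLM-written script that summarizes it.
big_response = {"issues": [{"id": i, "state": "open" if i % 3 else "closed"}
                           for i in range(10_000)]}
script = "result = {'open': sum(1 for i in DATA['issues'] if i['state'] == 'open')}"
print(run_code_mode(script, big_response))  # → {"open": 6666}
```

The 10,000-item response never reaches the model; only the one-line summary does, which is where the token-usage reduction comes from.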
Overall score: 89
Best practices (does it follow best practices?): 85%
Impact: 95% (2.20x average score across 3 eval scenarios)
Advisory: suggest reviewing before use
Security: 1 medium-severity finding. This skill can be installed, but you should review these findings before use.
The skill exposes the agent to untrusted, user-generated content from public third-party sources, creating a risk of indirect prompt injection. This includes browsing arbitrary URLs, reading social media posts or forum comments, and analyzing content from unknown websites.
Third-party content exposure detected (risk score: 0.90, high). The skill's required workflow explicitly states that the MCP tool "executes the underlying API call" and passes the raw API response into the sandbox as a DATA variable (SKILL.md Step 3a / Step 4, and references/benchmark-pattern.md, which even suggests "Hit real API"). This means the agent will ingest untrusted, user-generated, or public third-party responses (e.g., GitHub, Slack, SCIM, Kubernetes) and use the processed output to drive subsequent reasoning and actions.
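One common mitigation for this class of finding is to mark everything the sandbox returns as untrusted third-party data before it re-enters the model's context, so the agent's prompt can instruct the model to treat it as data rather than instructions. A minimal sketch; the `wrap_untrusted` helper and tag format are illustrative conventions, not part of the skill:

```python
def wrap_untrusted(output: str, source: str) -> str:
    # Label sandbox output as third-party content so downstream prompting
    # can tell the model to treat it as data, never as instructions.
    # The tag format here is an illustrative convention, not a standard.
    return (f"<untrusted-content source={source!r}>\n"
            f"{output}\n"
            f"</untrusted-content>")

print(wrap_untrusted('{"open": 6666}', "github"))
```

Wrapping does not remove the injection risk on its own; it only gives the agent a reliable signal about which spans of context came from third-party sources.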