This skill should be used when the user asks to "optimize prompts", "design prompt templates", "evaluate LLM outputs", "build agentic systems", "implement RAG", "create few-shot examples", "analyze token usage", or "design AI workflows". Use for prompt engineering patterns, LLM evaluation frameworks, agent architectures, and structured output design.
76
53%
Does it follow best practices?
Impact
91%
1.18xAverage score across 6 eval scenarios
Advisory
Suggest reviewing before use
Optimize this skill with Tessl
npx tessl skill review --optimize ./engineering-team/senior-prompt-engineer/SKILL.mdSecurity
1 medium severity finding. This skill can be installed but you should review these findings before use.
The skill exposes the agent to untrusted, user-generated content from public third-party sources, creating a risk of indirect prompt injection. This includes browsing arbitrary URLs, reading social media posts or forum comments, and analyzing content from unknown websites.
Third-party content exposure detected (high risk: 0.90). The skill's documentation and code explicitly describe and require web retrieval and use of retrieved contexts — e.g., the Tool Use / Function Calling pattern and "search_web" tool in references/agentic_system_design.md, the SKILL.md / Tools Overview and agent config examples (scripts/agent_orchestrator.py) listing a web_search tool, and the RAG workflows (rag_evaluator.py and SKILL.md) that ingest retrieved contexts — meaning untrusted public web content would be read and used to drive agent actions and decisions.
967fe01
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.