
langchain-security-basics

Apply LangChain security best practices for production LLM apps. Use when securing API keys, preventing prompt injection, sandboxing tool execution, or validating LLM outputs. Trigger: "langchain security", "prompt injection", "langchain secrets", "secure langchain", "LLM security", "safe tool execution".

Overall score: 64

Quality: 77% (Does it follow best practices?)

Impact: No eval scenarios have been run

Security (by Snyk): Passed, no known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/saas-packs/langchain-pack/skills/langchain-security-basics/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description that clearly communicates its purpose, provides explicit trigger guidance, and occupies a distinct niche. The main area for improvement is in specificity—the actions listed are somewhat categorical rather than deeply concrete. The explicit trigger term list is a strong feature that aids skill selection.

Suggestions

Enhance specificity by listing more concrete actions, e.g., 'configure SecretStr for API key management, implement input sanitization to prevent prompt injection, set up sandboxed code execution environments, add output validation schemas'.
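As a note on the suggestion above: `SecretStr` is part of LangChain's Python API, while this skill's examples are in TypeScript. In TypeScript, the equivalent concrete action is loading keys from the environment and failing fast at startup. A minimal sketch — the `requireApiKey` helper is hypothetical, not a LangChain API:

```typescript
// Illustrative sketch: load API keys from the environment rather than
// hardcoding them. `requireApiKey` is a hypothetical helper name.
function requireApiKey(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Fail fast at startup instead of running half-configured.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const openAIApiKey = requireApiKey("OPENAI_API_KEY");
```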

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | Names the domain (LangChain security) and some actions (securing API keys, preventing prompt injection, sandboxing tool execution, validating LLM outputs), but these are somewhat high-level categories rather than deeply specific concrete actions like 'configure SecretStr for API keys' or 'implement input sanitization filters'. | 2 / 3 |
| Completeness | Clearly answers both 'what' (apply LangChain security best practices for production LLM apps) and 'when' (securing API keys, preventing prompt injection, sandboxing tool execution, validating LLM outputs) with explicit 'Use when' and 'Trigger' clauses. | 3 / 3 |
| Trigger Term Quality | Includes a well-curated set of natural trigger terms that users would actually say: 'langchain security', 'prompt injection', 'langchain secrets', 'secure langchain', 'LLM security', 'safe tool execution'. These cover multiple natural variations and phrasings. | 3 / 3 |
| Distinctiveness / Conflict Risk | Occupies a clear niche at the intersection of LangChain and security. The specific trigger terms like 'langchain security' and 'langchain secrets' are highly distinctive and unlikely to conflict with general security skills or general LangChain skills. | 3 / 3 |
| Total | | 11 / 12 |

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable security skill with excellent executable TypeScript examples covering the key LangChain security concerns. Its main weakness is length — the file tries to be comprehensive in a single document, and some sections (particularly audit logging) could be trimmed or split out. The security checklist and error handling table are effective summaries, but the skill would benefit from clearer guidance on what to do when security violations are detected.

Suggestions

Add explicit feedback loops for security violations: what should happen when prompt injection is detected (reject request, log and continue, escalate) — this would improve workflow clarity.

Consider splitting the audit logging and safe tool execution sections into separate referenced files to improve progressive disclosure and reduce the main file's token footprint.
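To make the first suggestion concrete, a detection routine can return an explicit decision (allow, flag, or reject) rather than silently rewriting input. The sketch below is illustrative only; the pattern list, thresholds, and function name are assumptions, not taken from the skill:

```typescript
// Illustrative sketch: classify suspected prompt injection and return
// an explicit decision instead of silently modifying the input.
type InjectionDecision = "allow" | "flag" | "reject";

const SUSPICIOUS_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /reveal (your )?system prompt/i,
  /you are now (in )?developer mode/i,
];

function screenUserInput(input: string): InjectionDecision {
  const hits = SUSPICIOUS_PATTERNS.filter((p) => p.test(input));
  if (hits.length === 0) return "allow";
  // Log every hit so detections are auditable rather than silent.
  console.warn(`Possible prompt injection: ${hits.length} pattern(s) matched`);
  // Multiple matches: reject outright; a single match: flag for review.
  return hits.length > 1 ? "reject" : "flag";
}
```

The key design point is that the caller receives a decision it must act on, which is exactly the feedback loop the review finds missing.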

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The skill is mostly efficient with good code examples, but some sections are slightly verbose. The comments like '// NEVER hardcode API keys' and '// BAD:' / '// GOOD:' patterns, while useful, add some bulk. The audit logging section is quite long and could be trimmed. However, it largely avoids explaining concepts Claude already knows. | 2 / 3 |
| Actionability | Every section provides fully executable TypeScript code with proper imports, concrete patterns, and copy-paste ready examples. The tool execution example includes Zod schemas, timeout configuration, and path traversal prevention. The input sanitization function is complete with regex patterns. | 3 / 3 |
| Workflow Clarity | The skill presents security practices as independent modules rather than a sequenced workflow, which is appropriate for the topic. However, the checklist at the end serves as a validation step. The prompt injection section mentions 'log, don't silently modify' but lacks explicit feedback loops for what to do when injection is detected (reject? escalate? retry with sanitized input?). | 2 / 3 |
| Progressive Disclosure | The content is well-structured with numbered sections and a clear hierarchy, but it's a long monolithic file (~180 lines of content) with no bundle files to offload detail into. The audit logging and safe tool execution sections could be split into separate reference files. The external resource links and 'Next Steps' reference to langchain-prod-checklist are good but the main file carries too much inline detail. | 2 / 3 |
| Total | | 9 / 12 |

Passed
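The path traversal prevention praised under Actionability can be sketched as follows. This is an illustrative reconstruction, not the skill's actual code; the skill reportedly pairs the check with a Zod schema, which is omitted here to keep the example dependency-free:

```typescript
import * as path from "node:path";

// Illustrative sketch: resolve a tool-supplied path and reject anything
// that escapes the sandbox root, e.g. "../../etc/passwd".
function resolveWithinRoot(root: string, requested: string): string {
  const resolved = path.resolve(root, requested);
  const normalizedRoot = path.resolve(root);
  if (
    resolved !== normalizedRoot &&
    !resolved.startsWith(normalizedRoot + path.sep)
  ) {
    throw new Error(`Path escapes sandbox root: ${requested}`);
  }
  return resolved;
}
```

Comparing against `normalizedRoot + path.sep` (rather than the bare root) also blocks sibling-directory escapes such as `/tmp/sandbox2` when the root is `/tmp/sandbox`.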

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 checks passed

Validation for skill structure

| Criteria | Description | Result |
| --- | --- | --- |
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 |

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

