
langchain-security-basics

Apply LangChain security best practices for production LLM apps. Use when securing API keys, preventing prompt injection, sandboxing tool execution, or validating LLM outputs. Trigger: "langchain security", "prompt injection", "langchain secrets", "secure langchain", "LLM security", "safe tool execution".

80

Quality

77%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues

Optimize this skill with Tessl

npx tessl skill review --optimize ./plugins/saas-packs/langchain-pack/skills/langchain-security-basics/SKILL.md

Quality

Discovery

89%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a well-structured skill description that clearly defines its niche at the intersection of LangChain and security. It provides explicit 'Use when' guidance and a comprehensive list of trigger terms. The main area for improvement is in the specificity of the concrete actions, which could be more granular.

Suggestions

Expand the capability list with more specific concrete actions, e.g., 'configure secret managers for API key storage, implement input sanitization against prompt injection, set up sandboxed code execution environments, add output validation schemas'.
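As a sketch of what "implement input sanitization against prompt injection" might mean in practice, here is a minimal, dependency-free TypeScript screen. The pattern list and function names are hypothetical illustrations, not taken from the reviewed skill:

```typescript
// Hypothetical prompt-injection screen: flags user input containing
// common instruction-override phrases before it reaches the LLM prompt.
const INJECTION_PATTERNS: RegExp[] = [
  /ignore (all )?(previous|prior) instructions/i,
  /disregard (the )?system prompt/i,
  /you are now\b/i,
];

interface ScreenResult {
  safe: boolean;
  matched: string[]; // source strings of the patterns that fired
}

function screenInput(input: string): ScreenResult {
  const matched = INJECTION_PATTERNS
    .filter((p) => p.test(input))
    .map((p) => p.source);
  return { safe: matched.length === 0, matched };
}
```

A pattern list like this is a first line of defense only; it reduces obvious attacks but cannot catch every phrasing, which is why the skill pairs it with sandboxing and output validation.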

Dimension / Reasoning / Score

Specificity

Names the domain (LangChain security) and some actions (securing API keys, preventing prompt injection, sandboxing tool execution, validating LLM outputs), but these are somewhat high-level categories rather than deeply specific concrete actions like 'configure secret managers' or 'implement input sanitization filters'.

2 / 3

Completeness

Clearly answers both 'what' (apply LangChain security best practices for production LLM apps) and 'when' (explicit 'Use when' clause covering securing API keys, preventing prompt injection, sandboxing tool execution, validating LLM outputs), plus explicit trigger terms.

3 / 3

Trigger Term Quality

Includes a strong set of natural trigger terms that users would actually say: 'langchain security', 'prompt injection', 'langchain secrets', 'secure langchain', 'LLM security', 'safe tool execution'. These cover multiple natural variations and phrasings.

3 / 3

Distinctiveness / Conflict Risk

The combination of LangChain + security creates a clear niche. The trigger terms are specific enough ('langchain security', 'langchain secrets', 'prompt injection') to distinguish this from general security skills or general LangChain development skills.

3 / 3

Total: 11 / 12

Passed

Implementation

64%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a solid, actionable security reference with excellent executable code examples covering the key LangChain security concerns. Its main weaknesses are the monolithic structure (could benefit from splitting into focused sub-files) and the lack of a clear implementation workflow with validation checkpoints. The content is slightly verbose with explanatory comments that Claude wouldn't need.

Suggestions

Add a brief implementation workflow at the top: 'When securing a LangChain app, apply these in order: 1. Secrets → 2. Input sanitization → 3. Tool sandboxing → 4. Output validation → 5. Audit logging, validating each step before proceeding.'

Split detailed code examples (audit logger, safe tool execution) into separate reference files and keep SKILL.md as a concise overview with minimal inline code.

Remove redundant commentary like 'NEVER hardcode API keys' and 'DANGEROUS: unrestricted code execution' — the BAD/GOOD code pattern already communicates this clearly.
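The BAD/GOOD pattern the suggestion refers to can be sketched as follows. The key name and helper are illustrative assumptions, not the reviewed skill's actual code:

```typescript
// BAD: secret committed to source
// const apiKey = "sk-...";

// GOOD: secret resolved from an injected environment map at startup,
// failing fast if it is absent so misconfiguration surfaces immediately.
function requireEnv(
  env: Record<string, string | undefined>,
  name: string
): string {
  const value = env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Typical usage (assumed key name):
// const apiKey = requireEnv(process.env, "OPENAI_API_KEY");
```

Passing the environment map as a parameter keeps the helper testable without mutating real process state.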

Dimension / Reasoning / Score

Conciseness

The skill is mostly efficient with good code examples, but includes some unnecessary commentary (e.g., 'NEVER hardcode API keys' comments that Claude already knows, the 'DANGEROUS' and 'VULNERABLE' labels are somewhat redundant given the BAD/GOOD pattern). The overall length is reasonable for the breadth of topics covered, but could be tightened.

2 / 3

Actionability

Every section provides fully executable TypeScript code with concrete implementations — the sanitization function, safe shell tool, output validation schema, and audit logger are all copy-paste ready. The code uses real LangChain APIs with proper imports and Zod schemas.

3 / 3
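The review says the skill's output validation uses Zod; the same idea can be shown dependency-free. The allowed actions and field names below are hypothetical:

```typescript
// Dependency-free sketch of LLM output validation: parse the model's
// JSON reply and reject anything outside an explicit allowlist.
interface ToolCallOutput {
  action: string;
  argument: string;
}

const ALLOWED_ACTIONS = new Set(["search", "summarize", "translate"]);

function validateLlmOutput(raw: string): ToolCallOutput {
  let parsed: unknown;
  try {
    parsed = JSON.parse(raw);
  } catch {
    throw new Error("LLM output is not valid JSON");
  }
  const obj = parsed as Record<string, unknown>;
  if (typeof obj.action !== "string" || !ALLOWED_ACTIONS.has(obj.action)) {
    throw new Error(`Disallowed or missing action: ${String(obj.action)}`);
  }
  if (typeof obj.argument !== "string") {
    throw new Error("argument must be a string");
  }
  return { action: obj.action, argument: obj.argument };
}
```

A schema library like Zod expresses the same checks declaratively; the point either way is that model output is treated as untrusted input.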

Workflow Clarity

The skill presents five distinct security domains clearly, and the checklist at the end is useful. However, there's no sequenced workflow for implementing these practices (e.g., 'start here, validate this, then proceed'), and for risky operations like tool execution sandboxing, there are no explicit validation/verification steps to confirm the sandbox is properly configured.

2 / 3
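One shape the missing sandbox verification step could take is a pre-execution authorization gate. The command allowlist and metacharacter filter below are assumed for illustration:

```typescript
// Hypothetical allowlist gate for a shell tool: only pre-approved
// binaries run, and inputs with shell metacharacters are rejected
// outright so injection via arguments (e.g. `$(...)`) cannot slip through.
const ALLOWED_COMMANDS = new Set(["ls", "cat", "grep"]);
const SHELL_METACHARS = /[;&|`$><]/;

function authorizeCommand(command: string): boolean {
  if (SHELL_METACHARS.test(command)) return false;
  const [binary] = command.trim().split(/\s+/);
  return binary !== undefined && ALLOWED_COMMANDS.has(binary);
}
```

Running the gate against a few known-bad inputs at startup is one cheap way to "confirm the sandbox is properly configured" before any tool call executes.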

Progressive Disclosure

The content is well-structured with numbered sections and a summary table, but it's a long monolithic file (~180 lines of code). The output validation, audit logging, and tool execution sections could each be separate reference files. The 'Next Steps' reference to langchain-prod-checklist is good but the external resource links at the bottom are generic rather than skill-specific deep dives.

2 / 3
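The audit-logging section the review proposes splitting out could be as small as the following sketch; the entry shape and function name are illustrative assumptions, not the reviewed skill's logger:

```typescript
// Minimal in-memory audit log for tool invocations: every call is
// recorded with a timestamp and whether it was permitted, giving a
// trail to review after an incident.
interface AuditEntry {
  timestamp: string;
  tool: string;
  input: string;
  allowed: boolean;
}

const auditLog: AuditEntry[] = [];

function recordToolCall(
  tool: string,
  input: string,
  allowed: boolean
): AuditEntry {
  const entry: AuditEntry = {
    timestamp: new Date().toISOString(),
    tool,
    input,
    allowed,
  };
  auditLog.push(entry);
  return entry;
}
```

A production version would write to durable, append-only storage rather than an in-memory array.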

Total: 9 / 12

Passed

Validation

81%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 9 / 11 Passed

Validation for skill structure

Criteria / Description / Result

allowed_tools_field

'allowed-tools' contains unusual tool name(s)

Warning

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 9 / 11

Passed

Repository: jeremylongshore/claude-code-plugins-plus-skills (Reviewed)

