Deploy Langfuse with your application across different platforms. Use when deploying Langfuse to Vercel, AWS, GCP, or Docker, or integrating Langfuse into your deployment pipeline. Trigger with phrases like "deploy langfuse", "langfuse Vercel", "langfuse AWS", "langfuse Docker", "langfuse production deploy".
Score: 80
Quality: 77% — Does it follow best practices?
Impact: Pending — no eval scenarios have been run
Passed — no known issues
Optimize this skill with Tessl:

`npx tessl skill review --optimize ./plugins/saas-packs/langfuse-pack/skills/langfuse-deploy-integration/SKILL.md`

Quality
Discovery
89% — Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is a solid skill description with strong trigger terms and complete what/when coverage. It clearly identifies when to use the skill and provides explicit trigger phrases. The main weakness is that the 'what' portion could be more specific about the concrete actions performed beyond just 'deploy' and 'integrate'.
Suggestions
Add more specific concrete actions such as 'configure environment variables, set up database connections, manage Docker containers, configure Vercel project settings' to improve specificity.
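For illustration, the suggested rewrite might look like the following frontmatter sketch — the wording is hypothetical, only the trigger phrases and platforms come from the skill's actual description:

```yaml
---
name: langfuse-deploy-integration
description: >
  Deploy Langfuse with your application: configure environment variables,
  set up database connections, manage Docker containers, and configure
  Vercel, AWS, and GCP project settings. Use when deploying Langfuse to
  Vercel, AWS, GCP, or Docker, or integrating Langfuse into your deployment
  pipeline. Trigger with phrases like "deploy langfuse", "langfuse Vercel",
  "langfuse AWS", "langfuse Docker", "langfuse production deploy".
---
```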
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain (deploying Langfuse) and mentions platforms (Vercel, AWS, GCP, Docker), but doesn't list specific concrete actions beyond 'deploy' and 'integrating into deployment pipeline'. It lacks details like configuring environment variables, setting up databases, managing secrets, etc. | 2 / 3 |
| Completeness | Clearly answers both 'what' (deploy Langfuse across different platforms) and 'when' (explicit 'Use when' clause with specific scenarios, plus a 'Trigger with phrases' section listing concrete trigger terms). | 3 / 3 |
| Trigger Term Quality | Includes explicit natural trigger phrases like 'deploy langfuse', 'langfuse Vercel', 'langfuse AWS', 'langfuse Docker', 'langfuse production deploy', which are terms users would naturally say. Good coverage of platform-specific variations. | 3 / 3 |
| Distinctiveness / Conflict Risk | Very specific niche — deploying Langfuse specifically. The combination of 'Langfuse' with deployment platforms creates distinct triggers that are unlikely to conflict with generic deployment skills or other observability tool skills. | 3 / 3 |
| Total | | 11 / 12 — Passed |
Implementation
64% — Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a solid, actionable skill with real executable code for multiple deployment platforms and useful troubleshooting tables. Its main weaknesses are structural: the content is too long for a single SKILL.md (platform-specific details should be in separate files), and the step numbering misleadingly implies a sequential workflow rather than independent deployment options. Validation and error recovery steps could be more explicit, especially for the self-hosted Docker deployment.
Suggestions
Restructure as an overview SKILL.md that briefly describes each platform option and links to separate files (e.g., VERCEL.md, LAMBDA.md, DOCKER.md) for detailed code and instructions.
Rename 'Step 1-5' to clearly indicate these are independent deployment options, not sequential steps (e.g., '## Option: Vercel / Next.js').
Add explicit validation/retry guidance for the Docker self-hosted deployment — e.g., what to do if the health check fails, how to check logs, and a retry loop in the bash script.
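The last suggestion can be sketched as a small bash helper — a hedged example, not code from the skill under review. The `/api/public/health` path is Langfuse's documented health endpoint; the `langfuse-web` service name is an assumption to match against your own `docker-compose.yml`:

```shell
#!/usr/bin/env bash
# wait_for_health MAX_ATTEMPTS DELAY CMD...
# Runs CMD until it succeeds, sleeping DELAY seconds between tries;
# fails after MAX_ATTEMPTS attempts.
wait_for_health() {
  local max_attempts=$1 delay=$2
  shift 2
  local attempt
  for attempt in $(seq 1 "$max_attempts"); do
    if "$@" >/dev/null 2>&1; then
      echo "healthy after $attempt attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "health check failed after $max_attempts attempts" >&2
  return 1
}

# Example against a local self-hosted Langfuse (service name assumed):
#   docker compose up -d
#   wait_for_health 30 5 curl -fsS http://localhost:3000/api/public/health \
#     || docker compose logs --tail=50 langfuse-web
```

On failure the caller can dump recent container logs before retrying or aborting, which addresses the "what to do if the health check fails" gap the review identifies.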
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is mostly efficient with executable code examples, but includes some unnecessary verbosity — the health check endpoint and some inline comments could be trimmed. The platform considerations and error handling tables add value, but the overall length is substantial for what could be more tightly organized. | 2 / 3 |
| Actionability | Fully executable code examples across all platforms — bash commands, TypeScript handlers, docker-compose YAML, and Cloud Run deploy commands are all copy-paste ready with specific configurations and real library imports. | 3 / 3 |
| Workflow Clarity | Steps are clearly sequenced per platform, but the numbered 'Step 1-5' framing implies a sequential workflow when these are actually independent deployment options. The Docker section includes a health check wait but lacks explicit validation/retry loops — e.g., no guidance on what to do if the health check fails or if docker compose fails to start. | 2 / 3 |
| Progressive Disclosure | External resources are linked at the bottom, but the skill is quite long with all platform-specific code inline. The different deployment targets (Vercel, Lambda, Docker, Cloud Run) could be split into separate reference files, with the SKILL.md serving as an overview with pointers rather than containing all implementation details. | 2 / 3 |
| Total | | 9 / 12 — Passed |
Validation
81% — Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 9 / 11 Passed
Validation for skill structure
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| frontmatter_unknown_keys | Unknown frontmatter key(s) found; consider removing or moving to metadata | Warning |
| Total | | 9 / 11 — Passed |
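As a sketch, the two warnings above might be resolved by keeping only spec-defined frontmatter keys and nesting anything custom under `metadata` — the key names and values below are assumptions, so check them against the skill spec and the actual SKILL.md:

```yaml
---
name: langfuse-deploy-integration
description: Deploy Langfuse with your application across different platforms.
allowed-tools: Bash, Read, Write  # keep only tool names the spec recognizes
metadata:
  pack: saas-packs/langfuse-pack  # custom keys moved under metadata
---
```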