We asked Claude Code to write a PubNub Function. It failed. Not partially: the generated code wouldn't deploy.
`module.exports` instead of `export default async`. Plain objects instead of `request.ok()`. Hardcoded API keys instead of the vault. `.then()` chains instead of `async/await`. Zero awareness of the platform's optimized modules. It looked like JavaScript. It wasn't PubNub Functions JavaScript.
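The mismatch in miniature. This is a reconstruction of the failure mode, not the verbatim generated code:

```javascript
// Shape Claude reached for (won't deploy on PubNub Functions):
module.exports = async (request) => {
  return { status: 200 }; // plain object instead of request.ok()
};

// Shape PubNub Functions 2.0 expects:
export default async (request) => {
  return request.ok(); // platform-provided flow control
};
```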
This isn't a PubNub problem. Ask Claude to write a Cloudflare Worker and it reaches for Node.js APIs that don't exist in Workers. Ask for a Lambda handler and it misuses the context object. Every serverless platform has its own runtime, its own module system, its own constraints. The model knows JavaScript. It doesn't know your JavaScript runtime.
So we built a Tessl skill: structured context covering function signatures, modules, constraints, and production patterns. Claude succeeded. First deploy.
Then we used `tessl skill review` to sharpen it. Three rounds: 60% → 93% → 100%.

What We Built
A focused implementation guide for PubNub Functions 2.0, packaged as a Tessl tile:
```
skills/pubnub-functions/
├── SKILL.md                  # Skill definition + workflow
├── tile.json                 # Tile manifest
└── references/
    ├── functions-basics.md   # Function types, async patterns, limits
    ├── functions-modules.md  # KVStore, XHR, Vault, Crypto, JWT, etc.
    └── functions-patterns.md # 8 production patterns with full code
```

SKILL.md is the entry point: it defines when to invoke the skill and links to three reference docs. The references are where the value lives; functions-patterns.md alone has eight production-ready patterns an agent can adapt directly.
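For orientation, the entry point looks roughly like this. A trimmed sketch: the description and triggers fields are the ones this post revises below, and everything else is illustrative structure, not the published file:

```markdown
---
name: pubnub-functions
description: ...   # the field the first review tears apart below
triggers: ...      # ditto
---

## Workflow

...steps, including the "Deploy and Test" step revised in round one...

## References

- references/functions-basics.md
- references/functions-modules.md
- references/functions-patterns.md
```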
First Review: 60%
```
$ tessl skill review skills/pubnub-functions/SKILL.md
```

The review runs two layers: structural validation (16 checks against the Agent Skills spec) and a judge evaluation that scores the description and content against rubric criteria, each rated 1-3.
Validation passed with one warning:
```
⚠ description_trigger_hint - Description may be missing an explicit
  'when to use' trigger hint (e.g., 'Use when...')
```
Our description was:
```yaml
description: Develop serverless edge functions with PubNub Functions 2.0
```
Vague. One verb. No guidance for when an agent should pick this skill.
Description: 33%. Low specificity ("Develop" is too generic), missing trigger variations ("event handlers", "real-time functions"), no "Use when..." clause, and "serverless edge functions" overlaps with Lambda@Edge and Cloudflare Workers.
The judge was direct: "The critical missing element is any guidance on when Claude should use this skill."
Content: 88%. Strong, but workflow_clarity hit 2/3; the review flagged that "Deploy and Test" as a single step isn't enough for functions that touch live message traffic.
Three Changes
1. Rewrote the description
Before:
```yaml
description: Develop serverless edge functions with PubNub Functions 2.0
```

After:

```yaml
description: Create, configure, and deploy PubNub Functions 2.0 event handlers,
  triggers, and serverless endpoints. Use when building real-time message
  transformations, PubNub modules, webhook integrations, or edge data processing.
```

Multiple actions, specific artifacts, an explicit "Use when" clause with four trigger scenarios.
2. Expanded trigger terms
Before:
```yaml
triggers: pubnub, functions, serverless, edge, kvstore, webhook, transform
```

After:

```yaml
triggers: pubnub, pubnub functions, functions, serverless, edge, kvstore, webhook,
  transform, event handler, real-time functions, message processing
```

3. Expanded the workflow with validation steps
The single "Deploy and Test" step became three steps:
Before:
**Deploy and Test**: Configure channel patterns and test in portalAfter:
**Validate Implementation**: Verify no hardcoded secrets (use vault), confirm async/await usage (no .then() chains), check operation count stays within the 3-operation limit, and ensure proper try/catch wrapping
**Handle Response**: Return ok()/abort() or send() appropriately
**Configure Channel Patterns**: Set wildcard patterns ending with `.*`, max two literal segments before wildcard
**Test in Staging**: Test the function in PubNub Admin Portal with sample messages before enabling on production channels
**Deploy to Production**: Enable the function on live channel patterns and monitor logsValidation checkpoint with specific checks before anything touches live traffic.
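The channel-pattern constraint in the "Configure Channel Patterns" step is easiest to see with examples. Channel names here are made up; the rule is the one the step states:

```
chat.*            valid:   one literal segment before the wildcard
chat.room.*       valid:   two literal segments
chat.room.vip.*   invalid: three literal segments before the wildcard
chat.*.vip        invalid: the wildcard must come last
```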
Second Review: 93%
All 16 checks passed. Zero warnings.
| Dimension | Before | After | What happened |
|---|---|---|---|
| Description | 33% | 100% | Every criterion hit 3/3 |
| Content | 88% | 85% | workflow_clarity went 2→3, but conciseness dipped; the judge flagged "You are a PubNub Functions 2.0 development specialist" as unnecessary |
| Overall | 60% | 93% | One cycle. Three changes. |
The workflow_clarity improvement was the one that mattered. The conciseness dip pointed us to what to fix next.
Third Review: 100%
Two suggestions from the second review, both about removing things:
- Drop the persona statement ("You are a..."); Claude doesn't need role framing
- Drop the "When to Use This Skill" section; it's redundant with the description's "Use when..." clause
We cut 14 lines. The skill opens straight into the workflow now.
```
$ tessl skill review skills/pubnub-functions/SKILL.md

Description: 100%
Content: 100%
Average Score: 100%
```
Every criterion at 3/3. First two rounds: adding missing content. Third round: removing unnecessary content. The review knows the difference.
The Progression
| Round | Score | What Changed | Lesson |
|---|---|---|---|
| 1st | 60% | Baseline, description missing triggers, workflow missing validation | Write descriptions for machines, not humans |
| 2nd | 93% | Added "Use when...", expanded triggers, added validation steps | Be explicit about when and how |
| 3rd | 100% | Removed persona statement and redundant section | Less is more |
Using the Skill with Claude Code
Once installed, the skill activates when your prompt matches its triggers. Here's what changes.
"Build me a rate limiter for my PubNub channel"
Without the skill, you get generic rate-limiting code using Redis or in-memory stores. Correct concept. Wrong platform.
With the skill:
- `export default async (request) =>` (correct Functions 2.0 syntax)
- `require('kvstore')` (correct module system)
- `db.incrCounter()` for atomic increments (not manual get-increment-set)
- Stays within the 3-operation limit
- Returns `request.ok()` or `request.abort()` (correct flow control)
- Time-window bucketing for sliding windows
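Put together, that shape looks something like the sketch below. It assumes the event-handler request exposes a channels array and that kvstore's incrCounter resolves to the updated count; the 60-second window and limit of 10 are illustrative, not from the skill:

```javascript
export default async (request) => {
  const db = require('kvstore');

  try {
    // Time-window bucketing: one counter per channel per 60-second window.
    const windowId = Math.floor(Date.now() / 60000);
    const key = `rate:${request.channels[0]}:${windowId}`;

    // One atomic increment -- a single KVStore operation, well inside
    // the 3-operation limit.
    const count = await db.incrCounter(key);

    if (count > 10) {
      return request.abort(); // over the per-window limit: drop the message
    }
    return request.ok();
  } catch (e) {
    console.log('rate limiter error', e);
    return request.abort();
  }
};
```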
Code that looks right vs. code that deploys. Big difference.
"Create an HTTP endpoint that manages user profiles"
The skill's REST API pattern kicks in: a complete On Request function with CORS handling, method routing, KVStore with TTLs, error responses, and input validation. Every reference pattern is a template the agent adapts directly.
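A trimmed sketch of that shape, not the skill's actual template. The request/response fields used here (method, params, body, headers, status, send) reflect our understanding of the On Request surface, and the keys and TTL are illustrative:

```javascript
export default async (request, response) => {
  const db = require('kvstore');
  response.headers['Access-Control-Allow-Origin'] = '*'; // CORS

  try {
    const key = `profile:${request.params.id}`;

    if (request.method === 'GET') {
      const profile = await db.get(key); // one KVStore operation
      response.status = profile ? 200 : 404;
      return response.send(JSON.stringify(profile || { error: 'not found' }));
    }

    if (request.method === 'PUT') {
      const body = JSON.parse(request.body); // minimal input validation
      await db.set(key, body, 1440);         // TTL in minutes (illustrative)
      response.status = 200;
      return response.send(JSON.stringify({ ok: true }));
    }

    response.status = 405;
    return response.send(JSON.stringify({ error: 'method not allowed' }));
  } catch (e) {
    response.status = 400;
    return response.send(JSON.stringify({ error: 'invalid request' }));
  }
};
```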
What Changed in Practice
- Correct API usage. No more mixing up publish vs fire vs signal. No `.then()` chains. No missing try/catch.
- Constraint awareness. The agent stays within the 3-op limit. When a request would exceed it, Claude explains why and suggests alternatives, like chaining to a second function with `fire` (sketched below).
- Deployment readiness. Output includes channel pattern config and vault setup: not just code, but a deployable artifact.
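The fire-chaining alternative looks roughly like this; a sketch assuming the pubnub module's fire() call, with an illustrative internal channel name:

```javascript
export default async (request) => {
  const pubnub = require('pubnub');

  // Offload work that would blow this function's operation budget to a
  // second function bound to an internal channel ('internal.enrichment'
  // is illustrative). fire() delivers to event handlers without fanning
  // out to channel subscribers.
  await pubnub.fire({
    channel: 'internal.enrichment',
    message: request.message,
  });

  return request.ok();
};
```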
Publishing
Review at 100%. Ship it.
```
$ tessl skill publish --workspace pubnub --public ./skills/pubnub-functions
✓ Published pubnub/pubnub-functions to https://tessl.io/registry/skills/pubnub/pubnub-functions
```

Tessl auto-lints before uploading; invalid structure fails before anything hits the registry. The tile also passes through moderation before going public.
We bumped 0.1.2 → 0.2.0. The registry enforces immutable versions: once published, a version is permanent. You want changes? Bump the version.
Takeaways
You get a specific, actionable scoreboard. Each criterion maps to a real quality dimension, scored 1-3 with written justification. You know exactly what to fix.
Description is where you're probably underinvesting. Our content was at 88% on the first pass. The description, the thing that determines if the skill gets selected at all, was at 33%. One "Use when..." clause fixed it.
Validation checkpoints are a safety feature. For skills generating code that touches production systems, "deploy and test" isn't enough. The explicit validation step means the agent checks its own output against platform constraints first.
One focused iteration makes a dramatic difference. We didn't rewrite anything from scratch. Three rounds of targeted changes: 60% → 93% → 100%. The review tells you where to spend effort, adding or removing.
Treat context like software. Version it. Package it. Review it. Publish it. When PubNub Functions ships a new version, we update the skill, bump the version, re-review, and publish. Every agent gets the update.