Claude Can't Write PubNub Functions. We Fixed That.

17 Feb 2026 · 8 minute read

Stephen Blum

Stephen Blum is the CTO of PubNub, powering 1B+ devices, holding 68 patents, and investing in AI and API startups.

We asked Claude Code to write a PubNub Function. It failed. Not partially: the generated code wouldn't deploy.

`module.exports` instead of `export default async`. Plain objects instead of `request.ok()`. Hardcoded API keys instead of the vault. `.then()` chains instead of async/await. Zero awareness of the optimized libraries. It looked like JavaScript. It wasn't PubNub Functions JavaScript.
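To make the contrast concrete, here's a minimal sketch of the handler shape the runtime expects. The mock request object is an assumption for illustration only, so the snippet runs outside the PubNub runtime; real Functions are declared with `export default async` and get the real `request` from the platform.

```javascript
// Sketch of the Functions 2.0 handler shape (simplified for illustration).
// Wrong:  module.exports = function (request) { return { msg: request.message }; }
// Right:  export default async (request) => { ... return request.ok(); }
const handler = async (request) => {
  try {
    request.message.processedAt = Date.now(); // mutate the in-flight message
    return request.ok();                      // pass the (modified) message on
  } catch (e) {
    console.error(e);
    return request.abort();                   // drop the message on error
  }
};

// Mock request object -- an assumption for local illustration only.
const mockRequest = {
  message: { text: 'hello' },
  ok() { return { status: 'ok', message: this.message }; },
  abort() { return { status: 'aborted' }; },
};

handler(mockRequest).then((result) => console.log(result.status)); // prints "ok"
```

The try/catch wrapping and the explicit `ok()`/`abort()` return are exactly the points the generated code missed.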

This isn't a PubNub problem. Ask Claude to write a Cloudflare Worker and it reaches for Node.js APIs that don't exist in Workers. Ask for a Lambda handler and it misuses the context object. Every serverless platform has its own runtime, its own module system, its own constraints. The model knows JavaScript. It doesn't know your JavaScript runtime.

So we built a Tessl skill: structured context covering function signatures, modules, constraints, and production patterns. Claude succeeded. First deploy.

Then we used `tessl skill review` to sharpen it. Three rounds: 60% → 93% → 100%.


What We Built

A focused implementation guide for PubNub Functions 2.0, packaged as a Tessl tile:

skills/pubnub-functions/
├── SKILL.md                  # Skill definition + workflow
├── tile.json                 # Tile manifest
└── references/
    ├── functions-basics.md   # Function types, async patterns, limits
    ├── functions-modules.md  # KVStore, XHR, Vault, Crypto, JWT, etc.
    └── functions-patterns.md # 8 production patterns with full code

SKILL.md is the entry point: it defines when to invoke the skill and links to the three reference docs. The references are where the value lives; functions-patterns.md alone has eight production-ready patterns an agent can adapt directly.

First Review: 60%

tessl skill review skills/pubnub-functions/SKILL.md

The review runs two layers: structural validation (16 checks against the Agent Skills spec) and a judge evaluation (which scores description + content on rubric criteria, each rated 1-3).

Validation passed with one warning:

⚠ description_trigger_hint - Description may be missing an explicit 'when to use' trigger hint (e.g., 'Use when...')

Our description was:

description: Develop serverless edge functions with PubNub Functions 2.0

Vague. One verb. No guidance for when an agent should pick this skill.

Description: 33%. Low specificity ("Develop" is too generic), missing trigger variations ("event handlers", "real-time functions"), no "Use when..." clause, and "serverless edge functions" overlaps with Lambda@Edge and Cloudflare Workers.

The judge was direct: "The critical missing element is any guidance on when Claude should use this skill."

Content: 88%. Strong, but workflow_clarity hit 2/3; the review flagged that "Deploy and Test" as a single step isn't enough for functions that touch live message traffic.

Three Changes

1. Rewrote the description

Before:

description: Develop serverless edge functions with PubNub Functions 2.0

After:

description: Create, configure, and deploy PubNub Functions 2.0 event handlers,
  triggers, and serverless endpoints. Use when building real-time message
  transformations, PubNub modules, webhook integrations, or edge data processing.

Multiple actions, specific artifacts, explicit "Use when" with four trigger scenarios.

2. Expanded trigger terms

Before:

triggers: pubnub, functions, serverless, edge, kvstore, webhook, transform

After:

triggers: pubnub, pubnub functions, functions, serverless, edge, kvstore, webhook,
  transform, event handler, real-time functions, message processing

3. Expanded the workflow with validation steps

The single "Deploy and Test" step became five steps:

Before:

**Deploy and Test**: Configure channel patterns and test in portal

After:

**Validate Implementation**: Verify no hardcoded secrets (use vault), confirm async/await usage (no .then() chains), check operation count stays within the 3-operation limit, and ensure proper try/catch wrapping
**Handle Response**: Return ok()/abort() or send() appropriately
**Configure Channel Patterns**: Set wildcard patterns ending with `.*`, max two literal segments before wildcard
**Test in Staging**: Test the function in PubNub Admin Portal with sample messages before enabling on production channels
**Deploy to Production**: Enable the function on live channel patterns and monitor logs

Validation checkpoint with specific checks before anything touches live traffic.

Second Review: 93%

All 16 checks passed. Zero warnings.

| Dimension   | Before | After | What happened |
|-------------|--------|-------|---------------|
| Description | 33%    | 100%  | Every criterion hit 3/3 |
| Content     | 88%    | 85%   | workflow_clarity went 2→3, but conciseness dipped; the judge flagged "You are a PubNub Functions 2.0 development specialist" as unnecessary |
| Overall     | 60%    | 93%   | One cycle. Three changes. |

The workflow_clarity improvement was the one that mattered. The conciseness dip pointed us to what to fix next.

Third Review: 100%

Two suggestions from the second review, both about removing things:

  1. Drop the persona statement ("You are a..."); Claude doesn't need role framing
  2. Drop the "When to Use This Skill" section; it's redundant with the description's "Use when..." clause

We cut 14 lines. The skill opens straight into the workflow now.

$ tessl skill review skills/pubnub-functions/SKILL.md

Description: 100% 🎉

Content: 100% 🎉

Average Score: 100% 🎉

Every criterion at 3/3. First two rounds: adding missing content. Third round: removing unnecessary content. The review knows the difference.

The Progression

| Round | Score | What Changed | Lesson |
|-------|-------|--------------|--------|
| 1st | 60%  | Baseline: description missing triggers, workflow missing validation | Write descriptions for machines, not humans |
| 2nd | 93%  | Added "Use when...", expanded triggers, added validation steps | Be explicit about when and how |
| 3rd | 100% | Removed persona statement and redundant section | Less is more |

Using the Skill with Claude Code

Once installed, the skill activates when your prompt matches its triggers. Here's what changes.

"Build me a rate limiter for my PubNub channel"

Without the skill, you get generic rate-limiting code using Redis or in-memory stores. Correct concept. Wrong platform.

With the skill:

  • `export default async (request) =>` (correct Functions 2.0 syntax)
  • `require('kvstore')` (correct module system)
  • `db.incrCounter()` for atomic increments (not manual get-increment-set)
  • Stays within the 3-operation limit
  • Returns `request.ok()` or `request.abort()` (correct flow control)
  • Time-window bucketing for sliding windows
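The bullets above translate to roughly this shape. This is a hedged sketch, not the skill's actual output: the `db` object mocks the platform's kvstore module (a real Function would use `const db = require('kvstore')`), the limit value and the request's timestamp field are illustrative assumptions, and the mock request exists only so the snippet runs outside the PubNub runtime.

```javascript
// Mock of the kvstore module's atomic counter -- assumption for illustration.
const counters = new Map();
const db = {
  async incrCounter(key) {
    const next = (counters.get(key) || 0) + 1;
    counters.set(key, next);
    return next;
  },
};

const LIMIT = 5; // max messages per time bucket (illustrative value)

const rateLimiter = async (request) => {
  try {
    const user = request.message.userId || 'anonymous';
    const ts = request.timetoken || Date.now();   // timestamp source is an assumption
    const bucket = Math.floor(ts / 60000);        // one-minute time-window bucket
    // One atomic kvstore operation -- stays well within the 3-operation limit.
    const count = await db.incrCounter(`rate:${user}:${bucket}`);
    return count > LIMIT ? request.abort() : request.ok();
  } catch (e) {
    console.error(e);
    return request.abort();
  }
};

// Local illustration: a sixth message in the same window gets rejected.
const req = { timetoken: 120000, message: { userId: 'u1' }, ok: () => 'ok', abort: () => 'abort' };
(async () => {
  for (let i = 0; i < 6; i++) console.log(await rateLimiter(req));
})(); // prints "ok" five times, then "abort"
```

The atomic `incrCounter` is the key move: a manual get-increment-set both races under concurrency and burns three operations instead of one.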

Code that looks right vs. code that deploys. Big difference.

"Create an HTTP endpoint that manages user profiles"

The skill's REST API pattern kicks in: a complete On Request function with CORS handling, method routing, KVStore with TTLs, error responses, and input validation. Every reference pattern is a template the agent adapts directly.
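A condensed sketch of that shape, under stated assumptions: the in-memory `Map` stands in for kvstore (the real module's `set` also takes a TTL), the mock response object is invented for illustration, and field names like `params.id` are hypothetical, not the skill's actual template.

```javascript
// "On Request" endpoint sketch: CORS, method routing, validation, error responses.
const store = new Map(); // stands in for require('kvstore'); real set() takes a TTL

const profileEndpoint = async (request, response) => {
  response.headers['Access-Control-Allow-Origin'] = '*'; // CORS for browser clients
  try {
    if (request.method === 'GET') {
      const profile = store.get(request.params.id);
      return profile
        ? response.send(profile, 200)
        : response.send({ error: 'not found' }, 404);
    }
    if (request.method === 'PUT') {
      const body = JSON.parse(request.body);
      if (!body.name) return response.send({ error: 'name required' }, 400); // input validation
      store.set(request.params.id, body);
      return response.send(body, 200);
    }
    return response.send({ error: 'method not allowed' }, 405); // method routing fallback
  } catch (e) {
    return response.send({ error: 'bad request' }, 400); // e.g. malformed JSON
  }
};

// Mock response object -- assumption for local illustration only.
const res = { headers: {}, send: (body, status) => ({ body, status }) };
(async () => {
  await profileEndpoint({ method: 'PUT', params: { id: 'u1' }, body: '{"name":"Ada"}' }, res);
  const got = await profileEndpoint({ method: 'GET', params: { id: 'u1' } }, res);
  console.log(got.status); // prints 200
})();
```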

What Changed in Practice

  • Correct API usage. No more mixing up publish vs fire vs signal. No .then() chains. No missing try/catch.
  • Constraint awareness. The agent stays within the 3-op limit. When a request would exceed it, Claude explains why and suggests alternatives (like chaining to a second function with fire).
  • Deployment readiness. Output includes channel pattern config and vault setup, not just code, but a deployable artifact.
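The fire-chaining alternative mentioned above looks roughly like this. Assumptions: the `pubnub` object mocks the platform module (a real Function would `require('pubnub')`), and the channel name and message shape are illustrative.

```javascript
// Sketch: when work would exceed the per-execution operation budget, hand the
// remainder to a second Function triggered via fire().
const fired = [];
const pubnub = {
  async fire({ channel, message }) {
    fired.push({ channel, message }); // a real fire() triggers the listening Function
    return { timetoken: Date.now() };
  },
};

const enrich = async (request) => {
  try {
    // ...kvstore/xhr operations happen here, nearly spending the 3-op budget...
    await pubnub.fire({
      channel: 'internal.enrich-stage2', // illustrative channel a second Function listens on
      message: request.message,          // remaining work continues there
    });
    return request.ok();
  } catch (e) {
    console.error(e);
    return request.abort();
  }
};
```

`fire` rather than `publish` matters here: fired messages trigger Functions without fanning out to regular subscribers.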

Publishing

Review at 100%. Ship it.

$ tessl skill publish --workspace pubnub --public ./skills/pubnub-functions

✔ Published pubnub/pubnub-functions to https://tessl.io/registry/skills/pubnub/pubnub-functions

Tessl auto-lints before uploading; invalid structure fails before anything hits the registry. The tile also passes through moderation before going public.

We bumped 0.1.2 → 0.2.0. The registry enforces immutable versions: once published, a version is permanent. Want changes? Bump the version.

Takeaways

You get a specific, actionable scoreboard. Each criterion maps to a real quality dimension, scored 1-3 with written justification. You know exactly what to fix.

Description is where you're probably underinvesting. Our content was at 88% on the first pass. The description, the thing that determines if the skill gets selected at all, was at 33%. One "Use when..." clause fixed it.

Validation checkpoints are a safety feature. For skills generating code that touches production systems, "deploy and test" isn't enough. The explicit validation step means the agent checks its own output against platform constraints first.

One focused iteration makes a dramatic difference. We didn't rewrite anything from scratch. Three rounds of targeted changes: 60% → 93% → 100%. The review tells you where to spend effort, adding or removing.

Treat context like software. Version it. Package it. Review it. Publish it. When PubNub Functions ships a new version, we update the skill, bump the version, re-review, and publish. Every agent gets the update.
