What's new in Tessl: global installs, watch mode, GitHub badges, and a unified score
12 Mar 2026 · 4 minute read

Four new features just dropped. Dru Knox, our head of Product, describes what they do and why they matter in this video. Descriptions of each, with links to more information, are below.

Install skills globally
Not every skill belongs in a single repo. You’ll often have context preferences that you carry from project to project. These don't fit neatly into a tessl.json that gets committed alongside your code.
You can now install skills globally with a single flag:
tessl install your-workspace/your-skill --global
Global skills are stored in ~/.tessl/ and are available across all your projects without touching any per-project manifest. When you want to remove one, the same flag applies:
tessl uninstall your-workspace/your-skill --global
More info: CLI command reference
Watch local changes automatically
If you've ever developed a skill that's installed across multiple agents, you'll know the loop: make a change, copy the updated files into every agent's folder, restart each session, and check whether the change did what you wanted. Repeat.
The new --watch-local flag eliminates the copy-and-restart step. When you install a local skill with it, Tessl monitors the source directory and automatically syncs any changes across all your agents as you save:
tessl install ./my-skill --watch-local
Leave it running in a terminal while you edit. Your agents pick up the changes without any intervention. The feedback loop that used to take minutes now takes seconds.
Show your eval score on GitHub
If you've put in the work to build a skill that genuinely improves agent behavior (written clear triggers, tested it against real scenarios, tuned the instructions), you now have a way to show that on your GitHub repository.
Tessl will generate a badge you can embed in your README that displays your skill's eval score. For anyone browsing your repo and deciding whether your skill is worth installing, it's a quick, credible signal: this wasn't just written, it was tested.
It's the same quality and impact scores that appear in tessl search results, now surfaced directly where your skill lives.
One score to represent skill quality
Tessl has always run two types of evaluations on skills:
- Reviews check your skill against established best practices: structure, clarity, trigger quality, and actionability of instructions.
- Task evals run agents through simulated real-world scenarios, comparing performance with and without your skill installed, to measure whether it actually changes behavior.
These two signals answer different questions. Reviews tell you about the quality of your skill: is it well-constructed? Task evals tell you about the impact the skill has on an agent: does it work in practice? Until now, you'd have to look at both scores separately and form your own view.
We've now combined them into a single aggregate score on the registry. One number that blends construction quality with real-world impact. This is visible on every skill's registry page and available to your agents when they're deciding whether a skill is the right fit for a given workflow.
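Tessl doesn't publish the blending formula here, but as a rough mental model, combining a review score with a task-eval score into one number can be sketched as a weighted average. The weights and 0-100 scale below are illustrative assumptions, not Tessl's actual values:

```python
def aggregate_score(review_score: float, task_eval_score: float,
                    review_weight: float = 0.5) -> float:
    """Blend a best-practices review score with a task-eval impact score.

    Both scores are assumed to be on the same 0-100 scale; the weight
    is a hypothetical parameter, not Tessl's real formula.
    """
    return review_weight * review_score + (1 - review_weight) * task_eval_score

# A skill that is well-constructed (80) but only moderately impactful (60)
# lands between the two signals:
print(aggregate_score(80, 60))  # → 70.0
```

The point of a single blended number is exactly what the paragraph above describes: one signal an agent (or a person scanning the registry) can compare across skills without weighing two scores by hand.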
These four features ship today. To get the latest CLI:
tessl update
And to install from scratch:
curl -fsSL https://get.tessl.io | sh
As always, questions and feedback welcome in Discord.