Evaluates SKILL.md submissions for the AI Engineer London 2026 Skills Contest across 11 dimensions (8 official Tessl rubric + 3 bonus). Use when you say 'judge my AIE26 contest skill', 'score this SKILL.md for the contest', 'review my skill submission', or 'how would this score on the leaderboard'. Accepts GitHub repo URLs, file paths, or raw pastes.
Overall score: 82
Does it follow best practices? 94%
Impact: 65%
1.80x average score across 5 eval scenarios
Risky: do not use without reviewing
A leaderboard organizer is processing several pending SKILL.md submissions for the AI Engineer London 2026 Skills Contest. They have a backlog of entries that need evaluation and want to process two of them at once. One of the entries is notably sparse — it was submitted by a first-time contestant who wasn't sure how detailed a skill needed to be. The organizer wants to see what the evaluation system does with it.
Please evaluate both skills below and produce the full scorecard for each.
Write both evaluations to evaluations.md in your working directory. Include the complete scorecard for each skill.
The following files are provided as inputs. Extract them before beginning.
You suggest emojis for text content.
Look at the text and suggest 3-5 relevant emojis. Return emojis with a one-sentence explanation for each.
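As an illustration of that output shape (the input text and the specific emoji picks here are invented), a response for the text "We just shipped v2.0 of the CLI" might look like:

```
🚀 The release going out maps naturally to a launch.
✅ Signals that the milestone is complete.
🛠️ Reflects the CLI/tooling subject matter.
```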
You write conventional commit messages following the Conventional Commits specification.
Write commit messages only. Do not stage files, run git commands, or explain git concepts.
Read the diff or change description and extract the relevant details.
Map the change to a conventional commit type:
- feat: new feature for the user
- fix: bug fix for the user
- docs: documentation only
- style: formatting, no logic change
- refactor: code restructure, no behavior change
- test: adding or fixing tests
- chore: build process, tooling

Follow the format: <type>(<scope>): <subject>
Rules:
- Use a lowercase scope where one applies (e.g. auth, api, cli)
- Reference related issues in the footer (e.g. Closes #123, Fixes #456)

Return:
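A hypothetical example of a message in that shape (the scope, subject, body, and issue number are invented for illustration):

```
feat(auth): add token refresh on expired sessions

Retry the original request once after silently refreshing the access token.

Closes #123
```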
docs
superpowers
evals
scenario-1
scenario-2
scenario-3
scenario-4
scenario-5
references