Interactive skill creation and eval-driven optimization. Triggers: create a skill, make a skill, new skill, scaffold skill, optimize skill, run evals, improve skill. Uses AskUserQuestion for interview; WebSearch for research; Bash for eval execution. Outputs: complete skill directory with SKILL.md, tile.json, evals, and repo integration.
The platform engineering team at a mid-sized SaaS company has been discussing ways to speed up code review. After several planning sessions, they've settled on building a reusable AI skill called pr-review-assistant that will help engineers conduct consistent, thorough pull request reviews across all of their repositories.
The team has already gathered all the requirements they need. They have a clear picture of what the skill should do, when it should activate, what steps it must always follow, the common mistakes it should avoid, and what outputs it produces. All of that information is captured in the decision map below.
Your job is to turn this decision map into a complete, deployable skill directory. The directory should work within an existing mono-repo that already contains other skills and has a README and CI workflow in place. Everything needed for the skill to be published and discovered must be in place when you are done.
Produce the following files:
- skills/pr-review-assistant/SKILL.md — the skill entry point
- skills/pr-review-assistant/tile.json — the tile manifest
- README.md with a row for the new skill
- .github/workflows/tessl-publish.yml with the new skill included

If you produce any companion rule files, place them in skills/pr-review-assistant/rules/.
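Taken together, those deliverables imply a repository layout along these lines (a sketch; the rule filenames come from the decision map's companion_files field):

```text
skills/
  pr-review-assistant/
    SKILL.md
    tile.json
    rules/
      security-patterns.md
      changelog-format.md
README.md
.github/
  workflows/
    tessl-publish.yml
```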
The following files are provided as inputs. Extract them before beginning.
=============== FILE: inputs/decision-map.json ===============
{
  "skill_name": "pr-review-assistant",
  "core_purpose": "Guide engineers through a structured pull request review: check for missing tests, flag security anti-patterns, verify changelog updates, and summarise findings into a review comment.",
  "trigger_signals": [
    "review this PR",
    "check this pull request",
    "give feedback on my PR",
    "code review",
    "review my changes"
  ],
  "non_negotiables": [
    "Always read the PR diff before forming any opinion",
    "Always check for test coverage gaps",
    "Always flag hardcoded secrets or credentials immediately",
    "Never approve a PR that has a TODO referencing a security fix",
    "Always end with a structured summary comment"
  ],
  "gotchas": [
    "Diffs can be very large — chunk them rather than loading all at once",
    "Changelog format varies per repo — check CHANGELOG.md conventions before suggesting edits",
    "Auto-generated files (lock files, generated code) should be skipped"
  ],
  "sub_steps": [
    "1. Fetch and parse the PR diff",
    "2. Identify changed files and categorise (src, tests, docs, config)",
    "3. Check each source file for security patterns",
    "4. Verify test files exist for new/changed source files",
    "5. Check CHANGELOG.md for an entry matching the PR",
    "6. Compose and output the structured review comment"
  ],
  "outputs_artifacts": [
    "Structured review comment (Markdown, printed to stdout or saved as review-comment.md)",
    "Optional: review-checklist.json with per-criterion pass/fail"
  ],
  "existing_patterns": "The team uses GitHub PRs; diffs are accessed via GitHub CLI (gh pr diff). Changelogs follow Keep a Changelog format.",
  "success_metrics": "Review comment covers security, tests, and changelog; no false positives on auto-generated files; engineer can act on every item in the comment without asking for clarification.",
  "anti_patterns": [
    "Approving without reading the full diff",
    "Flagging auto-generated files as issues",
    "Giving vague feedback like 'improve test coverage' without citing specific files",
    "Skipping the changelog check because it feels minor"
  ],
  "companion_files": [
    "rules/security-patterns.md — a reference list of anti-patterns to flag (to be created)",
    "rules/changelog-format.md — Keep a Changelog conventions (to be created)"
  ],
  "version": "1.0.0"
}
=============== FILE: README.md ===============
A collection of reusable AI skills for the engineering organisation.
| Skill | Description |
|---|---|
| deploy-helper | Guides safe production deployments with rollback steps |
| incident-responder | Structured incident triage and runbook lookup |
| standup-formatter | Formats daily standup notes into a consistent template |
Add new skills in skills/<skill-name>/. Update this table and the CI matrix when adding a skill.
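Concretely, the README update means appending one row to the existing table. The description wording below is illustrative, not prescribed:

```markdown
| pr-review-assistant | Structured PR reviews: security checks, test coverage gaps, changelog verification |
```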
=============== FILE: .github/workflows/tessl-publish.yml ===============
name: Publish Tessl Tiles

on:
  push:
    branches: [main]

jobs:
  publish:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        tile:
          - skills/deploy-helper
          - skills/incident-responder
          - skills/standup-latest
    steps:
      - uses: actions/checkout@v4
      - name: Publish tile
        run: tessl tile publish ${{ matrix.tile }}
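Wiring the new skill into CI then amounts to adding one entry to the workflow's matrix. A sketch of the resulting strategy block:

```yaml
    strategy:
      matrix:
        tile:
          - skills/deploy-helper
          - skills/incident-responder
          - skills/standup-formatter
          - skills/pr-review-assistant
```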