Interactive skill creation and eval-driven optimization. Triggers: create a skill, make a skill, new skill, scaffold skill, optimize skill, run evals, improve skill. Uses AskUserQuestion for interview; WebSearch for research; Bash for eval execution. Outputs: complete skill directory with SKILL.md, tile.json, evals, and repo integration.
A developer tools company wants to add a new AI skill to their internal skill library: release-notes-writer. This skill will help engineers generate polished release notes from a list of merged PRs and commit messages. The team has been doing this manually for years and keeps producing inconsistent output — some releases have detailed notes, others just say "bug fixes and improvements."
You have been asked to create this skill from scratch. Because this is a new skill with no prior documentation, you need to gather requirements before you start building anything.
The team lead — Jordan — has agreed to answer your questions. Your task has two parts:
Part 1 — Document your interview plan. Before conducting any interview, write out your full interview plan as interview-plan.md. This should describe every question you intend to ask Jordan, what options you would present for each question, and how you will decide when you have enough information to proceed. This document will be shared with other team members who may conduct similar interviews in the future, so it should be thorough and reusable.
Part 2 — Scaffold the skill. Jordan's answers are provided below. Using those answers, produce the initial skill scaffold: at minimum SKILL.md and tile.json.
Deliverables:

- interview-plan.md — your full interview plan, including all questions and their answer options
- skills/release-notes-writer/SKILL.md
- skills/release-notes-writer/tile.json

The following file contains Jordan's answers to your interview questions. Use these to produce the scaffold.
=============== FILE: inputs/jordan-answers.md ===============
What does the skill do / what problem does it solve? It takes a list of merged PRs (titles, labels, authors) and a target version number, then produces a formatted release notes document. The audience is external customers, so tone should be user-facing, not internal jargon.
What phrases or situations should trigger this skill? "write release notes", "generate changelog for this release", "draft release notes for v2.3", "help me write the release announcement". Also triggers if someone pastes a list of PRs and asks for a summary for customers.
What steps must the skill always perform, no matter what? Always group PRs by label (features, bug-fixes, breaking-changes). Always include a "Breaking Changes" section at the top if any breaking-change PRs are present — even if there's just one. Always output a version header with the release date.
What are the common mistakes or things that can go wrong? PRs without labels are easy to miss — they must go into an "Other" or "Uncategorised" section, never silently dropped. Version numbers can be ambiguous (1.2 vs 1.2.0) — always normalise to semver. Draft PRs sometimes get included accidentally — always filter them out.
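The three failure modes Jordan lists translate directly into code-level invariants. A minimal Python sketch of what the scaffolded skill might enforce; the function names and bucket keys here are hypothetical illustrations, not part of the skill spec:

```python
import re

def normalize_version(version):
    """Pad a dotted version string to full semver, e.g. "1.2" -> "1.2.0"."""
    parts = version.lstrip("v").split(".")
    if len(parts) > 3 or not all(re.fullmatch(r"\d+", p) for p in parts):
        raise ValueError(f"ambiguous version: {version!r}")
    while len(parts) < 3:
        parts.append("0")
    return ".".join(parts)

def categorize(prs):
    """Filter out draft PRs and bucket the rest by label.

    Unlabelled PRs go into "other" so nothing is silently dropped.
    """
    buckets = {"breaking-changes": [], "features": [], "bug-fixes": [], "other": []}
    for pr in prs:
        if pr.get("isDraft"):
            continue  # never include draft PRs
        labels = {label["name"] for label in pr.get("labels", [])}
        # breaking-changes is checked first so it wins over other labels
        for key in ("breaking-changes", "features", "bug-fixes"):
            if key in labels:
                buckets[key].append(pr)
                break
        else:
            buckets["other"].append(pr)  # never silently drop unlabelled PRs
    return buckets
```

The fixed check order means a PR labelled both `breaking-changes` and `bug-fixes` lands in the breaking-changes section, matching the requirement that breaking changes always surface first.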
What sub-steps does the workflow involve?
What does the skill produce? A Markdown file: release-notes-<version>.md. Optionally a JSON summary: release-summary-<version>.json with counts per category.
Are there existing patterns or tools the team already uses? The team uses GitHub labels consistently. PRs are listed in JSON from gh pr list --json title,labels,author,number,isDraft. The Markdown follows the Keep a Changelog format loosely.
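Since the PR list arrives as JSON from that gh command, the skill's first step is likely just parsing it. A sketch using an inlined sample payload; the field shapes (labels as objects with a name, author with a login) are assumed to match gh's output for the fields Jordan names, and the sample PRs are invented:

```python
import json

# Inlined stand-in for the output of:
#   gh pr list --json title,labels,author,number,isDraft
raw = """[
  {"number": 101, "title": "Add dark mode", "isDraft": false,
   "labels": [{"name": "features"}], "author": {"login": "sam"}},
  {"number": 102, "title": "WIP: new API", "isDraft": true,
   "labels": [], "author": {"login": "ana"}}
]"""

prs = json.loads(raw)
published = [pr for pr in prs if not pr["isDraft"]]  # drafts are filtered out
for pr in published:
    names = [label["name"] for label in pr["labels"]] or ["uncategorised"]
    print(f'#{pr["number"]} {pr["title"]} -> {", ".join(names)}')
```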
How will you know the skill is working well? Release notes are ready to paste into the GitHub release form without editing. No PRs are missing. Breaking changes always appear first.
What should the skill never do? Never include draft PRs. Never silently drop unlabelled PRs. Never use internal jargon or commit hashes in the customer-facing output.
Are there any companion reference files that would help? A label taxonomy reference (what each GitHub label maps to in the output) would be useful. Maybe a template for the version header section.