
gedsys/skillify

Capture the current session's repeatable process into a reusable SKILL.md skill file. Use when the user wants to create a skill, save a workflow as a skill, turn a process into a reusable skill, or mentions "skillify", "create skill", "make a skill", "save as skill", "capture workflow", "turn this into a skill", "new skill", or wants to automate a repeatable process they just performed.


SKILL.md

---
name: skillify
description: Captures the current session's repeatable process into a reusable SKILL.md skill file by interviewing the user, extracting steps from the conversation, generating YAML frontmatter, and writing structured markdown instructions. Use when the user wants to create a skill, save a workflow as a skill, turn a process into a reusable skill, or mentions "skillify", "create skill", "make a skill", "save as skill", "capture workflow", "turn this into a skill", "new skill", or wants to automate a repeatable process they just performed.
allowed-tools: Read Write Edit Glob Grep AskUserQuestion Bash(mkdir:*)
argument-hint: "[description of the process you want to capture]"
arguments:
  description
---

# Skillify

Capture this session's repeatable process into a reusable SKILL.md skill file. Call at the end of the process you want to capture, optionally with a description.

## Inputs

- `$description`: (Optional) Description of the process you want to capture as a skill.

## Goal

Create a well-structured, reusable SKILL.md file that captures a repeatable process from the current session so it can be invoked again later.

## Steps

### 1. Analyze the Session

Before asking any questions, analyze the conversation to identify the repeatable process: its inputs/parameters, ordered steps, success artifacts/criteria per step, places where the user corrected or steered you, and the tools, agents, and permissions involved.

**Success criteria**: You have a clear mental model of the process, its steps, inputs, outputs, and success criteria.

### 2. Interview the User

Use AskUserQuestion for ALL questions. Never ask questions via plain text. For each round, iterate as much as needed until the user is happy. The user always has a freeform "Other" option to type edits or feedback -- do NOT add your own "Needs tweaking" option. Just offer the substantive choices.

**Round 1: High-level confirmation**

- Suggest a name and description for the skill based on your analysis. Ask the user to confirm or rename.
- Suggest high-level goal(s) and specific success criteria for the skill.

**Round 2: More details**

- Present the high-level steps you identified as a numbered list. Tell the user you will dig into the detail in the next round.
- If you think the skill will require arguments, suggest arguments based on what you observed. Make sure you understand what someone would need to provide.
- If it's not clear, ask if this skill should run inline (in the current conversation) or forked (as a sub-agent with its own context). Forked is better for self-contained tasks that don't need mid-process user input; inline is better when the user wants to steer mid-process.
- Ask where the skill should be saved. Suggest a default based on context. Options:
  - This repo (`.claude/skills/<name>/SKILL.md`) -- for workflows specific to this project
  - Personal (`~/.claude/skills/<name>/SKILL.md`) -- follows you across all repos
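Either save location can be created up front with a plain `mkdir -p`, which is what the `Bash(mkdir:*)` entry in `allowed-tools` permits. The skill name `my-skill` below is a placeholder:

```shell
# Project-local location: the skill ships with the repo
# (replace "my-skill" with the agreed skill name)
mkdir -p .claude/skills/my-skill

# Personal location, shared across all repos:
# mkdir -p ~/.claude/skills/my-skill
```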

**Round 3: Breaking down each step**

For each major step, if it's not glaringly obvious, ask:

- What does this step produce that later steps need? (data, artifacts, IDs)
- What proves that this step succeeded and that we can move on?
- Should the user be asked to confirm before proceeding? (especially for irreversible actions like merging, sending messages, or destructive operations)
- Are any steps independent and could run in parallel? (e.g., posting to Slack and monitoring CI at the same time)
- How should the skill be executed? (e.g., always use a Task agent to conduct code review, or invoke an agent team for a set of concurrent steps)
- What are the hard constraints or preferences -- things that must or must not happen?

You may do multiple rounds of AskUserQuestion here, one round per step, especially if there are more than 3 steps or many clarification questions.

IMPORTANT: Pay special attention to places where the user corrected you during the session, to help inform your design.

**Round 4: Final questions**

- Confirm when this skill should be invoked, and suggest/confirm trigger phrases too. (e.g., "Use when the user wants to cherry-pick a PR to a release branch. Examples: 'cherry-pick to release', 'CP this PR', 'hotfix'.")
- Ask for any other gotchas or things to watch out for, if still unclear.

Stop interviewing once you have enough information. Don't over-ask for simple processes!

**Success criteria**: You have all the information needed to write the SKILL.md file and the user has confirmed the design.

### 3. Write the SKILL.md

Create the skill directory and file at the location the user chose in Round 2. Use this format:

```markdown
---
name: {{skill-name}}
description: {{one-line description}}
allowed-tools: {{space-delimited list of tool permission patterns observed during session}}
when_to_use: {{detailed description of when Claude should automatically invoke this skill, including trigger phrases and example user messages}}
argument-hint: "{{hint showing argument placeholders}}"
arguments:
  {{list of argument names}}
context: {{inline or fork -- omit for inline}}
---

# {{Skill Title}}

Description of skill

## Inputs

- `$arg_name`: Description of this input

## Goal

Clearly stated goal for this workflow. Best if you have clearly defined artifacts or criteria for completion.

## Steps

### 1. Step Name

What to do in this step. Be specific and actionable. Include commands when appropriate.

**Success criteria**: ALWAYS include this! This shows that the step is done and we can move on. Can be a list.
```

Per-step annotations (include where relevant):

- **Success criteria**: REQUIRED on every step.
- **Execution**: Direct (default), Task agent, Teammate (true parallelism + inter-agent comms), or [human]. Specify only if not Direct.
- **Artifacts**: Data this step produces that later steps depend on (e.g., PR number, commit SHA).
- **Human checkpoint**: Pause for user confirmation before irreversible actions (merging, sending messages), error judgment (merge conflicts), or output review.
- **Rules**: Hard constraints for the workflow. User corrections from the reference session are especially valuable here.
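A hypothetical annotated step using the conventions above (the step name, commands, and artifacts are illustrative, not part of this skill):

```markdown
### 2. Open the Pull Request

Run `gh pr create` with the prepared title and body.

**Success criteria**: `gh pr view` shows the new PR in "open" state.
**Artifacts**: PR number, consumed by later steps.
**Human checkpoint**: Confirm the PR description before creating it.
```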

Step structure tips:

- Concurrent steps use sub-numbers: 3a, 3b
- Steps requiring user action get [human] in the title
- Keep simple skills simple -- a 2-step skill doesn't need annotations on every step
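The concurrent sub-number convention might look like this in practice (step names are hypothetical):

```markdown
### 3a. Post the release announcement to Slack

### 3b. Monitor CI on the release branch

Steps 3a and 3b are independent and may run in parallel; step 4 waits for both to finish.
```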

Frontmatter rules:

- `allowed-tools`: Minimum permissions needed; use patterns like `Bash(gh:*)`, not `Bash`; space-delimited string
- `context`: `fork` only for self-contained skills with no mid-process user input
- `when_to_use`: Start with "Use when..." and include trigger phrases -- tells the model when to auto-invoke
- `arguments` / `argument-hint`: Omit if the skill takes no parameters
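Applying these rules, a filled-in frontmatter for a hypothetical cherry-pick skill (every value here is illustrative) could look like:

```yaml
---
name: cherry-pick-release
description: Cherry-picks a merged PR onto the active release branch and opens a backport PR.
allowed-tools: Read Grep AskUserQuestion Bash(git:*) Bash(gh:*)
when_to_use: Use when the user wants to cherry-pick a PR to a release branch. Trigger phrases include "cherry-pick to release", "CP this PR", "hotfix".
argument-hint: "[PR number]"
arguments:
  pr_number
---
```

Note the scoped `Bash(git:*)` and `Bash(gh:*)` patterns rather than a blanket `Bash`, and the `when_to_use` value that starts with "Use when...". `context` is omitted because the skill runs inline.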

**Success criteria**: The complete SKILL.md content has been drafted.

### 4. Review and Save

Before writing the file, output the complete SKILL.md content as a yaml code block in your response so the user can review it with proper syntax highlighting. Then ask for confirmation using AskUserQuestion with a simple question like "Does this SKILL.md look good to save?"

After writing, tell the user:

- Where the skill was saved
- How to invoke it: `/{{skill-name}} [arguments]`
- That they can edit the SKILL.md directly to refine it

**Success criteria**: The file is written to disk and the user has been informed how to use it.
