
user-file-ops

Simple operations on user-provided text files including summarization.


Quality: 25% (Does it follow best practices?)

Impact: Pending (no eval scenarios have been run)

Security (by Snyk): Passed (no known issues)

Optimize this skill with Tessl

npx tessl skill review --optimize ./examples/skillrun/skills/user_file_ops/SKILL.md

Quality

Discovery: 0%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This description is too vague to be useful for skill selection. It fails to enumerate specific capabilities beyond 'summarization', uses generic language like 'simple operations', and lacks any explicit trigger guidance for when Claude should select this skill. It would easily conflict with many other file-processing or text-handling skills.

Suggestions

List specific concrete operations (e.g., 'Summarizes text files, counts words, extracts key phrases, searches for patterns') instead of the vague 'simple operations'.

Add an explicit 'Use when...' clause with natural trigger terms, e.g., 'Use when the user asks to summarize, read, search, or perform basic analysis on .txt or plain text files.'

Include file type specifics and natural user terms like '.txt', 'plain text', 'text document', 'summarize a file' to improve trigger term coverage and distinctiveness.
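Taken together, these suggestions could yield frontmatter along the following lines. This is a sketch, not the skill's actual metadata: the stats-and-preview wording comes from the review's later description of the script, and the trigger phrasing is an assumption.

```yaml
---
name: user-file-ops
description: >-
  Summarizes plain text files, reporting basic statistics and a short
  preview. Use when the user asks to summarize, read, or get basic
  stats on a .txt or plain text file.
---
```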

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Specificity | The description mentions 'simple operations' and 'summarization', but 'simple operations' is extremely vague. Only one concrete action (summarization) is named, and the rest is abstract language. | 1 / 3 |
| Completeness | The description weakly addresses 'what' (simple operations and summarization) but completely lacks any 'when' clause or explicit trigger guidance. The absence of a 'Use when...' clause caps this at 2, but the 'what' is also very weak, resulting in a 1. | 1 / 3 |
| Trigger Term Quality | The terms 'simple operations', 'text files', and 'summarization' are overly generic. Missing natural user terms like 'summarize', 'read file', 'extract', 'count words', '.txt', or specific operation names users would actually say. | 1 / 3 |
| Distinctiveness / Conflict Risk | 'Simple operations on text files' is extremely generic and would conflict with virtually any skill that processes files, reads content, or manipulates text. There is no clear niche or distinct trigger. | 1 / 3 |
| Total | | 4 / 12 |

Implementation: 50% (Passed)

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

The skill provides a clear pattern for invoking the summarize script but is weakened by repetitive examples that all demonstrate the same command structure, lack of output format specification, and missing information about what the script actually produces. The content could be significantly tightened by showing one example with sample output and noting the path conventions briefly.

Suggestions

Consolidate the three nearly identical examples into one example with a note that paths can vary (e.g., work/inputs/ for user files, out/ for skill-produced files), reducing redundancy.

Add a sample of what the output file actually contains (e.g., line count, word count, byte count, preview text) so Claude knows the expected format.

Include brief error handling guidance—what to do if the input file doesn't exist or the script returns an error.
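As a sketch of the suggested guard and output format, the skill body could model the invocation like this. The stats-plus-preview format is an assumption (the real summarize_file.sh output is not documented in the skill), and a locally created sample file stands in for a staged user file:

```shell
# Stand-in for a user-provided file staged under work/inputs/.
printf 'line one\nline two\n' > sample.txt

# Suggested guard: fail with a clear message if the input is missing.
if [ ! -f sample.txt ]; then
    echo "error: input file not found: sample.txt" >&2
    exit 1
fi

# Plausible output format: basic statistics plus a short preview.
# (Assumed format; the real script's output is not shown in the skill.)
printf 'lines=%s words=%s bytes=%s\n' \
    "$(wc -l < sample.txt)" "$(wc -w < sample.txt)" "$(wc -c < sample.txt)"
echo "preview:"
head -n 1 sample.txt
```

Showing one concrete output like this alongside a single example command would let the remaining path variants be covered by a one-line note rather than duplicated examples.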

| Dimension | Reasoning | Score |
| --- | --- | --- |
| Conciseness | The content includes some unnecessary explanation (e.g., explaining where user-provided files are typically exposed, how skill_run stages files) that Claude would already know or could infer. The three examples are repetitive: they all show the same command pattern with different filenames, which could be condensed to one example with a note about varying paths. | 2 / 3 |
| Actionability | The commands are concrete and copy-paste ready (bash scripts/summarize_file.sh ...), but the actual script content is never shown or referenced, so Claude cannot verify what the script does or adapt it. There's no indication of what the output format looks like or what 'basic statistics' and 'short preview' actually contain. | 2 / 3 |
| Workflow Clarity | The skill is relatively simple (single command), but there's no validation or error handling guidance: what if the file doesn't exist, or the script fails? For a simple skill this could be acceptable, but the lack of any mention of expected output format or how to verify success keeps it at 2. | 2 / 3 |
| Progressive Disclosure | The content is organized with sections (Overview, Examples, Output Files), which is reasonable. However, it doesn't use markdown headers properly (no ## syntax), and the Output Files section is just a static list that duplicates information from the examples without adding value. For a simple skill under 50 lines, the structure is adequate but not well-polished. | 2 / 3 |
| Total | | 8 / 12 |
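The structure points above could be addressed with standard markdown headers in the skill body. A minimal skeleton, reusing the section names the review mentions (the example filename and the work/inputs/ and out/ path conventions are assumptions carried over from the suggestions):

```markdown
## Overview

Summarizes a plain text file, producing basic statistics and a short preview.

## Example

bash scripts/summarize_file.sh work/inputs/notes.txt

## Output Files

The summary is written under out/; verify the file exists after running.
```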

Validation: 100% (Passed)

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure

No warnings or errors.

Repository: trpc-group/trpc-agent-go (Reviewed)
