
quasi-coder

Expert 10x engineer skill for interpreting and implementing code from shorthand, quasi-code, and natural language descriptions. Use when collaborators provide incomplete code snippets, pseudo-code, or descriptions with potential typos or incorrect terminology. Excels at translating non-technical or semi-technical descriptions into production-quality code.

Quality: 47% (1.00x). Does it follow best practices?

Impact: 96% (1.00x). Average score across 3 eval scenarios.

Security (by Snyk): Passed. No known issues.

Optimize this skill with Tessl

npx tessl skill review --optimize ./skills/quasi-coder/SKILL.md

Quality

Discovery: 67%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description has a clear 'Use when' clause and addresses both what the skill does and when to use it, which is its strongest aspect. However, it lacks concrete specific actions (listing only high-level capabilities), uses somewhat jargon-heavy language ('quasi-code', '10x engineer'), and could overlap with general code generation skills. The '10x engineer' phrasing is unnecessary fluff that doesn't aid skill selection.

Suggestions

Add more concrete actions such as 'fixes syntax errors in rough code drafts', 'converts bullet-point specifications into working functions', 'completes partial code snippets' to improve specificity.

Include more natural trigger terms users would actually say, such as 'rough code', 'sketch', 'draft implementation', 'convert idea to code', 'fix my pseudo-code'.

Remove the 'Expert 10x engineer' fluff phrasing and replace with factual capability statements to improve clarity and reduce noise in skill selection.

Dimension scores (Reasoning / Score)

Specificity: 2 / 3
It names the domain (interpreting shorthand/quasi-code) and some actions (translating descriptions into production-quality code), but doesn't list multiple concrete actions—it stays at a high level without specifying particular operations like 'parse pseudo-code', 'fix typos', 'generate functions from natural language'.

Completeness: 3 / 3
Clearly answers both 'what' (interpreting and implementing code from shorthand, quasi-code, and natural language) and 'when' (when collaborators provide incomplete code snippets, pseudo-code, or descriptions with potential typos or incorrect terminology), with an explicit 'Use when' clause.

Trigger Term Quality: 2 / 3
Includes some relevant terms like 'pseudo-code', 'shorthand', 'natural language descriptions', 'typos', 'incorrect terminology', and 'incomplete code snippets'. However, these are somewhat niche and may not match what users naturally say (e.g., 'rough code', 'sketch out', 'draft code'). Coverage of natural variations is incomplete.

Distinctiveness / Conflict Risk: 2 / 3
The skill occupies a somewhat specific niche around interpreting imprecise code descriptions, but it could easily overlap with general coding assistance skills or code generation skills. The boundary between 'translating natural language to code' and general code writing is blurry.

Total: 9 / 12. Passed.

Implementation: 27%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is significantly over-engineered and verbose for what is essentially a simple instruction: 'interpret shorthand notation marked with ()=> and replace it with production code.' The core actionable content (marker syntax, ()=> convention, removal rule) could be conveyed in ~30 lines, but is buried in ~300+ lines of role-playing descriptions, expertise assessment frameworks, and best practices Claude already follows. The redundancy between sections (shorthand notation is explained three separate times) further inflates the token cost.
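As a hedged illustration of the core transformation the review describes (the skill's exact shorthand grammar isn't reproduced here, so the draft below and the `is_valid_email` helper are hypothetical), a `()=>` placeholder line would be interpreted and then removed, leaving only production code:

```python
# Hypothetical shorthand draft a collaborator might hand over:
#
#     def is_valid_email(addr):
#         ()=> check addr matches a basic email pattern, return bool
#
# After interpretation, the ()=> line is gone and an implementation remains:
import re

# Deliberately simple pattern: one '@', no whitespace, a dot in the domain.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def is_valid_email(addr: str) -> bool:
    """Return True when addr matches a basic email pattern."""
    return EMAIL_RE.fullmatch(addr) is not None
```

This is the ~30-line scale of content the review argues the skill actually needs: the marker, the removal rule, and one worked input-to-output example.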

Suggestions

Reduce to ~50 lines by eliminating the role description, expertise level taxonomy, best practices list, and resource management sections—these describe behaviors Claude already exhibits and waste context window tokens.

Consolidate the three overlapping shorthand specification sections (Shorthand Interpretation, Shorthand Key, Variables and Markers) into a single concise reference block.

Extract the troubleshooting table and advanced usage examples into a separate reference file, keeping only the core marker syntax and one complete input→output example in the main skill.

Add a concrete validation step after implementation (e.g., 'verify no ()=> lines remain in output' and 'confirm code is syntactically valid for the target language') to improve workflow clarity.
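The suggested validation step could be sketched as a small post-processing check. The `()=>` marker string comes from the review itself; the `validate_output` helper and the Python-as-target-language assumption are illustrative:

```python
import ast

SHORTHAND_MARKER = "()=>"

def validate_output(code: str) -> list[str]:
    """Return a list of validation problems (empty means the output passes)."""
    problems = []
    # Check 1: no shorthand marker lines should survive implementation.
    for i, line in enumerate(code.splitlines(), start=1):
        if SHORTHAND_MARKER in line:
            problems.append(f"line {i}: unresolved shorthand marker {SHORTHAND_MARKER!r}")
    # Check 2: the result must be syntactically valid for the target language
    # (Python assumed here; other targets would need their own parser).
    try:
        ast.parse(code)
    except SyntaxError as e:
        problems.append(f"syntax error: {e.msg} (line {e.lineno})")
    return problems
```

A non-empty return value signals the agent should revisit the implementation rather than emit the output as-is.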

Dimension scores (Reasoning / Score)

Conciseness: 1 / 3
Extremely verbose at ~300+ lines. Extensively explains concepts Claude already knows (what a 10x engineer is, how to assess expertise levels, what email validation looks like, what design patterns are). The architect metaphor, role descriptions, and expertise level taxonomy are unnecessary padding. The entire 'Best Practices' section lists things Claude inherently does.

Actionability: 2 / 3
The shorthand marker system and `()=>` notation are concrete and actionable, and the validation example shows a complete input→output transformation. However, much of the content is abstract guidance ('apply expert judgment', 'compensate for erroneous descriptions') rather than executable instructions, and the 'Advanced Usage' examples show inputs without complete output implementations.

Workflow Clarity: 2 / 3
The example workflow shows a clear 3-step process (assess → interpret → implement) and the interpretation process has numbered steps. However, there are no validation checkpoints or feedback loops for verifying the implemented code works correctly, and the workflow is more of a thinking framework than an operational sequence with concrete verification steps.

Progressive Disclosure: 1 / 3
Monolithic wall of text with no references to external files. All content is inline, including the troubleshooting table, advanced usage, resource management, and variable specifications—much of which could be split into separate reference documents. The document is poorly organized for quick scanning, with many redundant sections (shorthand key, variables and markers, and shorthand interpretation all cover overlapping content).

Total: 6 / 12. Passed.

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 passed

Validation for skill structure

No warnings or errors.

Repository: github/awesome-copilot (Reviewed)
