tdg-personal/regex-vs-llm-structured-text

Decision framework for choosing between regex and LLM when parsing structured text — start with regex, add LLM only for low-confidence edge cases.

73

Quality

73%

Does it follow best practices?

Impact

Pending

No eval scenarios have been run

Security by Snyk

Passed

No known issues


Quality

Discovery

57%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description communicates a clear and distinctive concept — a decision framework for regex vs LLM parsing — which gives it a strong niche identity. However, it lacks an explicit 'Use when...' clause and could benefit from more concrete action verbs and natural trigger terms that users would employ when seeking this guidance.

Suggestions

Add an explicit 'Use when...' clause, e.g., 'Use when deciding between regex and LLM for text parsing, or when building parsers for structured/semi-structured data.'

Include more natural trigger terms users might say, such as 'text extraction', 'pattern matching', 'data parsing', 'when to use regex vs AI', or 'parser design'.

Dimension / Reasoning / Score

Specificity

The description names the domain (parsing structured text) and describes the core approach (regex vs LLM decision framework), but doesn't list multiple concrete actions — it's more of a strategy statement than a list of specific capabilities.

2 / 3

Completeness

The 'what' is reasonably clear (a decision framework for regex vs LLM parsing), but there is no explicit 'Use when...' clause or equivalent trigger guidance; the 'when' is only implied by the description itself.

2 / 3

Trigger Term Quality

Includes relevant terms like 'regex', 'LLM', 'parsing', 'structured text', and 'edge cases', which are natural for developers. However, it misses common variations users might say like 'text extraction', 'pattern matching', 'data parsing', or 'when to use regex'.

2 / 3

Distinctiveness Conflict Risk

This is a very specific niche — a decision framework for choosing between regex and LLM for text parsing. It's unlikely to conflict with other skills since it targets a narrow architectural decision rather than a broad capability.

3 / 3

Total: 9 / 12

Passed

Implementation

77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a strong, actionable skill with a clear decision framework and executable code for each pipeline stage. Its main weakness is moderate verbosity — redundant sections (When to Activate vs When to Use, Best Practices vs Anti-Patterns) and inline implementation details that could be split into a reference file. The workflow is well-sequenced with confidence scoring serving as an effective validation checkpoint.
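The pipeline the review describes — a regex stage that reports how confident it is in its own extraction — can be sketched roughly as follows. This is a minimal illustration, not the skill's actual code: the field names, patterns, and the fraction-of-fields-matched confidence measure are all hypothetical.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical example fields and patterns; the skill's real parser
# targets whatever structured text the user is extracting.
INVOICE_ID = re.compile(r"Invoice\s*#?\s*(\w+)")
TOTAL = re.compile(r"Total:\s*\$?([\d,]+\.\d{2})")

@dataclass
class ParseResult:
    invoice_id: Optional[str]
    total: Optional[str]
    confidence: float  # fraction of expected fields the regexes matched

def regex_parse(text: str) -> ParseResult:
    id_m = INVOICE_ID.search(text)
    total_m = TOTAL.search(text)
    matches = [id_m, total_m]
    # Simple confidence heuristic: how many expected fields were found.
    confidence = sum(m is not None for m in matches) / len(matches)
    return ParseResult(
        invoice_id=id_m.group(1) if id_m else None,
        total=total_m.group(1) if total_m else None,
        confidence=confidence,
    )
```

A result with `confidence` below some threshold would then be escalated to the LLM stage; everything else stays on the cheap regex path.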

Suggestions

Merge 'When to Activate' and 'When to Use' into a single concise section to eliminate redundancy.

Move the full implementation code (sections 1-4) into a separate IMPLEMENTATION.md or EXAMPLES.md, keeping only the decision framework diagram and a minimal code snippet in the main skill.

Dimension / Reasoning / Score

Conciseness

The skill is mostly efficient with good code examples, but includes some unnecessary sections like 'When to Activate' and 'When to Use' which overlap significantly, and the anti-patterns section partially restates the best practices in negated form. The real-world metrics table adds value but the overall content could be tightened by ~20%.

2 / 3

Actionability

Provides fully executable Python code for each pipeline stage — regex parser, confidence scorer, LLM validator, and hybrid pipeline orchestrator. The code is copy-paste ready with concrete dataclasses, specific patterns, and a clear API client call example.

3 / 3

Workflow Clarity

The architecture diagram clearly sequences the pipeline stages, and the hybrid pipeline function (process_document) implements an explicit workflow: regex → confidence check → LLM for edge cases only. The confidence threshold acts as a validation checkpoint, and the pipeline gracefully handles the case where no LLM client is available.

3 / 3

Progressive Disclosure

The content is well-structured with clear headers and numbered implementation steps, but it's a monolithic file with no references to external resources for advanced topics. The ~180 lines of implementation code could benefit from being split into a separate reference file, with the SKILL.md focusing on the decision framework and quick-start pattern.

2 / 3

Total: 10 / 12

Passed
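The workflow the review praises — regex first, confidence check, LLM only for low-confidence edge cases, with a graceful fallback when no LLM client is available — can be sketched like this. All names here (`regex_extract`, `process_document`, the 0.8 threshold) are illustrative assumptions, not the skill's actual API.

```python
import re
from typing import Callable, Optional

CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff; the skill's value may differ

def regex_extract(text: str) -> tuple[dict, float]:
    """Toy regex stage: pull an ISO date and report match confidence."""
    m = re.search(r"\b(\d{4}-\d{2}-\d{2})\b", text)
    fields = {"date": m.group(1)} if m else {}
    return fields, 1.0 if m else 0.0

def process_document(
    text: str,
    llm_client: Optional[Callable[[str], dict]] = None,
) -> dict:
    fields, confidence = regex_extract(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return fields           # fast path: regex alone is trusted
    if llm_client is not None:
        return llm_client(text) # low confidence: escalate to the LLM
    return fields               # no LLM available: best-effort regex result
```

The key design point the review highlights is that the LLM call sits behind both the confidence check and a `None` guard, so the pipeline degrades gracefully instead of failing when only regex is available.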

Validation

90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 10 / 11 Passed

Validation for skill structure

Criteria / Description / Result

frontmatter_unknown_keys

Unknown frontmatter key(s) found; consider removing or moving to metadata

Warning

Total: 10 / 11

Passed

Reviewed
