
gstack-openclaw-office-hours

Use when asked to brainstorm, evaluate whether an idea is worth building, run office hours, or think through a new product idea or design direction before any code is written.

Overall: 77

Quality: 71% (Does it follow best practices?)

Impact: No eval scenarios have been run

Security (by Snyk): Passed, no known issues


Quality

Discovery: 64%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

The description excels at providing trigger terms and a clear 'Use when' clause, making it easy for Claude to know when to select it. However, it is almost entirely a 'when' statement with no 'what'—it never explains what the skill actually does, what outputs it produces, or what methodology it follows. Adding a concrete capability statement would significantly improve it.

Suggestions

Add a 'what does this do' clause before the 'Use when' clause, e.g., 'Guides early-stage product ideation by helping evaluate feasibility, define target users, identify risks, and shape design direction.'

Specify concrete outputs or deliverables the skill produces (e.g., 'produces a structured evaluation of viability, competitive landscape, and recommended next steps') to improve specificity.
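Taken together, the two suggestions could yield a frontmatter description along these lines (a hypothetical rewrite; the capability wording is illustrative, not the skill's actual text):

```yaml
---
name: gstack-openclaw-office-hours
# Hypothetical revision: leads with a 'what' clause, keeps the original 'Use when' triggers.
description: >-
  Guides early-stage product ideation: evaluates feasibility, defines target
  users, identifies risks, and produces a structured evaluation of viability,
  competitive landscape, and recommended next steps. Use when asked to
  brainstorm, evaluate whether an idea is worth building, run office hours,
  or think through a new product idea or design direction before any code
  is written.
---
```

The original 'Use when' clause is preserved verbatim so the strong trigger-term coverage is not lost.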

Specificity (2 / 3): The description names several actions (brainstorm, evaluate ideas, run office hours, think through product ideas/design direction), but they remain somewhat abstract: 'run office hours' and 'think through a new product idea' lack concrete deliverables or outputs.

Completeness (2 / 3): The description is essentially all 'when' (Use when...) but lacks a clear 'what does this do' statement. There is no explanation of what the skill actually produces or how it helps, only when to invoke it.

Trigger Term Quality (3 / 3): Includes natural trigger terms users would actually say: 'brainstorm', 'idea worth building', 'office hours', 'product idea', 'design direction', 'before any code is written'. These cover a good range of how users would phrase early-stage ideation requests.

Distinctiveness / Conflict Risk (2 / 3): The focus on pre-code ideation and product thinking provides some distinctiveness, but terms like 'brainstorm' and 'design direction' could overlap with general brainstorming or design skills. The 'before any code is written' qualifier helps but doesn't fully eliminate conflict risk.

Total: 9 / 12 (Passed)

Implementation: 77%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This is a highly actionable and well-structured skill with an exceptionally clear multi-phase workflow, explicit decision gates, and concrete examples throughout. Its main weakness is length—at 300+ lines in a single file, it could benefit from splitting reference material (pushback patterns, design doc templates, forcing questions) into separate files. The content is genuinely novel and instructive, though some sections (particularly the pushback patterns) are repetitive in their core lesson.

Suggestions

Split pushback patterns, design doc templates, and the Six Forcing Questions into separate referenced files (e.g., PUSHBACK_PATTERNS.md, DESIGN_TEMPLATES.md) to reduce the main skill's token footprint.

Condense the anti-sycophancy rules and pushback patterns—several examples teach the same principle (demand specificity over vagueness) and could be reduced to 2-3 representative examples with a summary rule.
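As a sketch of the suggested split (PUSHBACK_PATTERNS.md and DESIGN_TEMPLATES.md are the reviewer's example names; the section wording below is assumed, not taken from the skill):

```markdown
## Pushback patterns

When an answer is vague, pick the closest pattern from
[PUSHBACK_PATTERNS.md](PUSHBACK_PATTERNS.md); read that file only when a
pushback is actually needed.

## Design doc (Phase 5)

Draft the design doc from the template in
[DESIGN_TEMPLATES.md](DESIGN_TEMPLATES.md), loading it only once the user
has approved the approach.
```

This keeps the main SKILL.md's token footprint small while preserving the phase gates the review praises.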

Conciseness (2 / 3): The skill is quite long (~300+ lines) and contains substantial content that could be tightened. The operating principles, anti-sycophancy rules, and pushback patterns are valuable but verbose; many examples repeat the same lesson (specificity > vagueness). The 'bad/good' response pairs are useful but could be condensed. However, most content is genuinely instructive rather than explaining things Claude already knows.

Actionability (3 / 3): Extremely actionable throughout. Every phase has concrete outputs, specific question templates with exact wording, explicit good/bad response examples, structured templates for design docs, and clear decision trees (mode mapping, smart routing by product stage). The pushback patterns with BAD/GOOD examples are particularly executable guidance.

Workflow Clarity (3 / 3): The 6-phase workflow is clearly sequenced with explicit gates between phases (user must confirm premises before Phase 4, must approve approach before Phase 5, must approve design doc before Phase 6). The 'STOP after each question' instructions, escape hatches, and mode-switching rules provide clear validation checkpoints. The hard gate against implementation is a strong safety constraint.

Progressive Disclosure (2 / 3): Everything is in a single monolithic file with no references to supporting documents. The design doc templates, pushback pattern library, and forcing questions could each be separate reference files. For a skill this long, the lack of any content splitting means Claude must load the entire document even for simple sessions. However, the internal structure with clear phase headers provides reasonable navigation within the single file.

Total: 10 / 12 (Passed)

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure

No warnings or errors.

Repository: garrytan/gstack (Reviewed)
