gstack-openclaw-retro

Weekly engineering retrospective. Analyzes commit history, work patterns, and code quality metrics with persistent history and trend tracking. Team-aware with per-person contributions, praise, and growth areas. Use when asked for weekly retro, what shipped this week, or engineering retrospective.

Score: 83

Quality: 78% (Does it follow best practices?)

Impact: No eval scenarios have been run

Security by Snyk: Advisory. Suggest reviewing before use.

Optimize this skill with Tessl

npx tessl skill review --optimize ./openclaw/skills/gstack-openclaw-retro/SKILL.md

Quality

Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.

This is a strong skill description that clearly communicates its purpose, lists concrete capabilities, and provides explicit trigger terms. It effectively distinguishes itself through the combination of retrospective format, engineering metrics, and team-awareness features. The 'Use when...' clause with natural language triggers makes it easy for Claude to select appropriately.

Dimension scores:

Specificity (3 / 3): Lists multiple specific, concrete actions: analyzes commit history, work patterns, code quality metrics, persistent history, trend tracking, per-person contributions, praise, and growth areas.

Completeness (3 / 3): Clearly answers both what (analyzes commit history, work patterns, code quality metrics with persistent history and trend tracking, team-aware with per-person contributions) and when ('Use when asked for weekly retro, what shipped this week, or engineering retrospective').

Trigger Term Quality (3 / 3): Includes natural trigger terms users would say: 'weekly retro', 'what shipped this week', 'engineering retrospective'. These are phrases engineers commonly use when requesting this type of analysis.

Distinctiveness / Conflict Risk (3 / 3): Occupies a clear niche combining weekly retrospective format with commit analysis, team contributions, and trend tracking. The specific combination of retrospective + engineering metrics + team awareness makes it unlikely to conflict with generic code analysis or project management skills.

Total: 12 / 12

Passed

Implementation: 57%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.

This skill is highly actionable with concrete git commands, specific thresholds, and detailed output formats, making it very executable. However, it suffers from being a monolithic document (~300+ lines) that could benefit from progressive disclosure and better organization. The workflow lacks validation checkpoints and error recovery steps, which is notable given the multi-step complexity and file-writing operations involved.

Suggestions

Split the 14-step workflow into a concise overview in SKILL.md with detailed steps in separate reference files (e.g., DATA_GATHERING.md, METRICS.md, OUTPUT_FORMAT.md) to improve progressive disclosure.

Add validation checkpoints: verify git repo exists and has commits before Step 1, check that git commands succeeded before computing metrics, and verify memory/ directory exists before saving history.

Condense metric computation details (Steps 2-8) — Claude can infer how to calculate percentages, build histograms, and format leaderboards from brief specifications rather than exhaustive formatting examples.

Add an error recovery flow for common failures (no commits in window, missing origin/main, memory/ write failures) integrated into the workflow steps rather than just as completion statuses.
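The validation checkpoints and error-recovery flow suggested above could be sketched as a single pre-flight routine. This is a minimal illustration, not part of the skill itself: the `memory/` directory name and the BLOCKED wording follow the review's own terms, while the `preflight` and `run_git` helpers are hypothetical.

```python
import subprocess
from pathlib import Path

def run_git(args, cwd="."):
    """Run a git command; return (ok, stdout)."""
    proc = subprocess.run(["git", *args], cwd=cwd,
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout.strip()

def preflight(repo=".", runner=run_git):
    """Checks to run before Step 1 of the retro workflow; returns a status."""
    ok, _ = runner(["rev-parse", "--is-inside-work-tree"], repo)
    if not ok:
        return "BLOCKED: not a git repository"
    ok, _ = runner(["rev-parse", "HEAD"], repo)
    if not ok:
        return "BLOCKED: repository has no commits"
    ok, _ = runner(["rev-parse", "--verify", "origin/main"], repo)
    if not ok:
        # Recoverable: diff against HEAD instead of aborting the retro.
        pass
    # Ensure the history directory exists before any write in later steps.
    Path(repo, "memory").mkdir(exist_ok=True)
    return "OK"
```

Injecting the runner keeps the checks testable without a real repository, and lets the skill surface one BLOCKED status up front instead of failing mid-workflow.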

Dimension scores:

Conciseness (2 / 3): The skill is quite long (~300+ lines) with some redundancy and over-specification. Many of the metric calculations and formatting details could be condensed since Claude can infer how to compute LOC ratios, build histograms, and format leaderboards. However, most content is genuinely instructive rather than explaining basic concepts.

Actionability (3 / 3): Excellent actionability throughout: concrete, executable git commands are provided for every data-gathering step, specific metric formulas are defined, output formats are shown with examples, and the workflow is fully specified with exact thresholds (45-min gap, 50+ min deep sessions, etc.).

Workflow Clarity (2 / 3): The 14-step sequence is clearly numbered and logically ordered, but there are no validation checkpoints or error-handling feedback loops. For example, there's no guidance on what to do if git commands fail, if the repo has no commits, or if the memory directory doesn't exist. The BLOCKED status is mentioned but not integrated into the workflow steps.

Progressive Disclosure (1 / 3): This is a monolithic wall of text with all 14 steps inline in a single file. There are no references to supporting files despite the complexity warranting separation (e.g., the git commands could be in a reference file, the output format template in another). The content would benefit significantly from splitting into overview + detailed reference files.

Total: 8 / 12

Passed
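The thresholds cited in the Actionability row above (a 45-minute gap splitting work sessions, 50+ minutes counting as a deep session) can be computed from commit timestamps in a few lines. The sketch below uses those assumed definitions; the skill's exact semantics may differ.

```python
from datetime import datetime, timedelta

GAP = timedelta(minutes=45)       # gap that splits two work sessions
DEEP_MIN = timedelta(minutes=50)  # minimum span counted as a deep session

def sessions(timestamps, gap=GAP):
    """Group commit timestamps into sessions separated by more than `gap`."""
    out = []
    for ts in sorted(timestamps):
        if out and ts - out[-1][-1] <= gap:
            out[-1].append(ts)   # continues the current session
        else:
            out.append([ts])     # starts a new session
    return out

def deep_sessions(timestamps):
    """Sessions whose first-to-last commit span is at least DEEP_MIN."""
    return [s for s in sessions(timestamps) if s[-1] - s[0] >= DEEP_MIN]
```

Brief specifications like these two functions are the level of detail the Conciseness suggestion argues for, rather than exhaustive worked formatting examples.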

Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation: 11 / 11 checks passed

Validation for skill structure

No warnings or errors.

Repository: garrytan/gstack (Reviewed)
