Use when the user asks for a summary of their GitHub activity, work log, contributions, or accomplishments over a time period. Triggers include phrases like "what did I work on", "work summary", "weekly update", "standup notes", or requests for activity across an organization.
Install with Tessl CLI
npx tessl i github:shousper/claude-kit --skill github-work-summary69
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the tessl CLI to improve its score:
npx tessl skill review --optimize ./path/to/skill
Discovery — 37%
Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This description has excellent trigger term coverage but critically fails to explain what the skill actually does. It reads as a 'Use when...' clause without the preceding capability statement. The description needs to lead with concrete actions like 'Generates summaries of GitHub activity by aggregating commits, PRs, and issues' before the trigger guidance.
Suggestions
Add a capability statement at the beginning describing concrete actions (e.g., 'Generates summaries of GitHub contributions by aggregating commits, pull requests, issues, and code reviews')
Specify the outputs the skill produces (e.g., 'Creates formatted work logs, standup notes, or weekly reports')
Mention specific GitHub artifacts it analyzes (commits, PRs, issues, reviews) to improve specificity and distinctiveness
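Pulling those three suggestions together, an improved description might read as follows. The wording is illustrative only, assuming the skill does aggregate commits, PRs, issues, and reviews as the implementation review implies:

```yaml
# SKILL.md frontmatter (illustrative wording, not the actual skill file)
description: >
  Generates summaries of GitHub contributions by aggregating commits, pull
  requests, issues, and code reviews into formatted work logs, standup notes,
  or weekly reports. Use when the user asks for a summary of their GitHub
  activity, work log, contributions, or accomplishments over a time period.
  Triggers include phrases like "what did I work on", "work summary",
  "weekly update", or "standup notes".
```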
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | The description lacks concrete actions: it never states what the skill actually does (e.g., 'generates summaries', 'compiles commit history', 'aggregates PR data'). It only describes when to use it, not what capabilities it provides. | 1 / 3 |
| Completeness | The description answers 'when' extensively but completely fails to answer 'what does this do'. There is no explanation of the skill's capabilities or actions, only trigger conditions. This is the inverse of the typical problem. | 1 / 3 |
| Trigger Term Quality | Excellent coverage of natural trigger terms users would say: 'what did I work on', 'work summary', 'weekly update', 'standup notes', 'GitHub activity', 'contributions', 'accomplishments'. These are realistic phrases users would naturally use. | 3 / 3 |
| Distinctiveness / Conflict Risk | The GitHub-specific context and detailed trigger phrases help distinguish it, but 'work summary' and 'accomplishments' are generic enough to potentially conflict with other productivity or reporting skills. The lack of specific capabilities makes it harder to differentiate. | 2 / 3 |
| Total | | 7 / 12 Passed |
Implementation — 77%
Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-structured, highly actionable skill with excellent workflow clarity and concrete executable guidance. The main weakness is token efficiency: the inline Python script and some verbose table formatting could be tightened or split into referenced files. The skill excels at providing clear steps, validation checkpoints, and common-mistake prevention.
Suggestions
Move the Python data collection script to a separate referenced file (e.g., `scripts/gh_work_summary.py`) to reduce inline content and improve scannability
Condense the 'When to Use' section - the bullet points largely repeat information Claude can infer from the overview
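As a sketch of what the extracted `scripts/gh_work_summary.py` might contain: the actual script's contents are not shown in this review, so the function names and the exact `gh search prs` query below are assumptions, not the skill's real code. The `gh` flags used (`--author`, `--merged-at`, `--json`) are real CLI options, but the query shape is illustrative only.

```python
"""Illustrative shape for scripts/gh_work_summary.py (hypothetical helper
names; not the skill's actual script)."""
import json
import subprocess
from collections import defaultdict


def summarize_prs(prs):
    """Group PR records by repository and return a markdown-style outline."""
    by_repo = defaultdict(list)
    for pr in prs:
        by_repo[pr["repository"]["nameWithOwner"]].append(pr["title"])
    lines = []
    for repo in sorted(by_repo):
        lines.append(f"## {repo}")
        lines.extend(f"- {title}" for title in by_repo[repo])
    return "\n".join(lines)


def fetch_prs(author, since):
    """Query merged PRs via the gh CLI; requires an authenticated gh."""
    out = subprocess.run(
        ["gh", "search", "prs", "--author", author, "--merged",
         f"--merged-at=>{since}", "--json", "title,repository"],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)


# Example (requires an authenticated gh):
#   print(summarize_prs(fetch_prs("@me", "2024-01-01")))
```

Splitting the script out this way lets SKILL.md reference it with one line while keeping the copy-paste-ready commands in the skill body.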
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The skill is mostly efficient but includes some unnecessary verbosity, such as the full Python script inline (could be referenced) and explanatory tables that could be more compact. The 'When to Use' section and some table formatting add tokens without proportional value. | 2 / 3 |
| Actionability | Provides fully executable code with a complete Python script, specific gh CLI commands, and copy-paste ready examples. The step-by-step process includes concrete commands and clear output format templates. | 3 / 3 |
| Workflow Clarity | Excellent multi-step workflow with a trackable checklist, clear sequencing (Steps 1-5), explicit validation points ('If the script fails'), and decision tables for filtering contributions. The 'Common Mistakes' section serves as a validation reference. | 3 / 3 |
| Progressive Disclosure | Content is well-organized with clear sections, but the full Python script (~80 lines) is inline rather than referenced from a separate file. For a skill of this length (~200 lines), the script could be split out with a reference link to improve scannability. | 2 / 3 |
| Total | | 10 / 12 Passed |
Validation — 100%
Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
Validation — 11 / 11 Passed
Validation for skill structure
No warnings or errors.
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.