Analyse human-AI collaboration patterns and compute quality metrics from captured session data.
Role: Expert in collaboration analysis, agile retrospectives, human-AI interaction patterns, and cognitive skill development.
Apply this skill when the user asks about HOW the collaboration itself worked. Use `retrospect-domain` instead when the focus is on WHAT was learned, not HOW the collaboration worked. The `.retro/insights/` location is project-configurable; assume it, don't hardcode it.

Filter sessions:
```shell
bash ${CLAUDE_PLUGIN_ROOT}/scripts/retrospect-load-sessions.sh "$@"
```

The first output line is `PERIOD: YYYY-MM-DD_to_YYYY-MM-DD`; the remaining lines are session paths.
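Consuming that output is mechanical; a minimal sketch, assuming the format above (the sample paths are illustrative):

```shell
# Sketch: parse the loader output (assumed format: PERIOD line, then session paths).
output="PERIOD: 2026-03-01_to_2026-03-31
/tmp/sessions/a.jsonl
/tmp/sessions/b.jsonl"

# Take the date range from the first line, count the remaining path lines.
period=$(printf '%s\n' "$output" | head -n1 | cut -d' ' -f2)
count=$(printf '%s\n' "$output" | tail -n +2 | wc -l | tr -d ' ')
echo "period=$period sessions=$count"
```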
Note: `CLAUDE_PLUGIN_ROOT` points to the script bundle location. The learnings output path (`.retro/insights/`) is project-configurable; set it to wherever your project stores retrospective outputs.
1. Read sessions and extract from the JSONL events: `user_prompts`, `tool_calls`, `duration_seconds`, `subagent_spawns`.
2. Analyze technical effectiveness.
3. Analyze cognitive posture.
4. Generate Start/Stop/Continue with specific session examples.
5. Write insights to `.retro/insights/collab/{PERIOD}.md`.
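The extraction step can be sketched with plain shell over a JSONL session file; the event `type` values below are illustrative assumptions, not a confirmed schema:

```shell
# Sketch: count session events by type with grep over a JSONL file.
# The "type" field names here are assumed for illustration.
session=$(mktemp)
cat > "$session" <<'EOF'
{"type":"user_prompt","text":"refactor the parser"}
{"type":"tool_call","name":"bash"}
{"type":"tool_call","name":"edit"}
{"type":"subagent_spawn","name":"reviewer"}
EOF

prompts=$(grep -c '"type":"user_prompt"' "$session")
tools=$(grep -c '"type":"tool_call"' "$session")
spawns=$(grep -c '"type":"subagent_spawn"' "$session")
rm -f "$session"
echo "prompts=$prompts tool_calls=$tools subagents=$spawns"
```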
```shell
# Output path example (project-configurable)
.retro/insights/collab/2026-03-01_to_2026-03-31.md
```

Report the file path and suggest `/retrospect report` for aggregates.
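Building that path from the loader's `PERIOD` value is a one-liner; a sketch, where `OUT_DIR` is an assumed project-level setting rather than a fixed location:

```shell
# Sketch: derive the insights output path from a PERIOD value.
# OUT_DIR is an assumed, project-configurable setting.
OUT_DIR=".retro/insights/collab"
PERIOD="2026-03-01_to_2026-03-31"
out_path="$OUT_DIR/$PERIOD.md"
echo "$out_path"
```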
Run for the last 7 days:

```shell
/retrospect collab --last 7d
```

Run for the current week:

```shell
/retrospect collab --week
```

Run for a custom date range:

```shell
/retrospect collab --from 2026-03-01 --to 2026-03-31
```

Run and then view aggregates:

```shell
/retrospect collab --month
/retrospect report
```

The Metrics Summary section uses this format:
```markdown
## Metrics Summary
- **Sessions**: 8 | **Total prompts**: 64 (avg 8) | **Total tool calls**: 112 (avg 14)
- **Total duration**: 3h 20m (avg 25m/session) | **Subagents**: 4
- **Impact**: High 62% (>60% target ✓) | Low 25% | Auto 13% (<20% target ✓)
```

Note: `duration_seconds` may be absent in older session files. Optionally skip duration metrics rather than erroring.

Example loader invocation:

```shell
./scripts/retrospect-load-sessions.sh --last 7d
```
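The Impact percentages and target checks can be sketched with shell arithmetic. The counts (5 high, 2 low, 1 auto out of 8) are illustrative values chosen to match the example summary; note that integer division truncates, so Auto comes out as 12% here where the summary shows the rounded 13%:

```shell
# Sketch: Impact breakdown with integer percentages and target checks.
# Counts are illustrative assumptions (5 high / 2 low / 1 auto of 8 sessions).
high=5; low=2; auto=1
total=$((high + low + auto))
high_pct=$((100 * high / total))   # truncating integer division
auto_pct=$((100 * auto / total))
echo "high=${high_pct}% auto=${auto_pct}%"
[ "$high_pct" -gt 60 ] && echo "high-impact target met (>60%)"
[ "$auto_pct" -lt 20 ] && echo "auto target met (<20%)"
```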