try-tessl/agent-quality

Analyze agent sessions against verifier checklists, detect friction points, and create structured verifiers from skills and docs. Produces per-session verdicts and aggregated quality reports.


skills/review-friction/SKILL.md

---
name: review-friction
description: Detect friction points in agent sessions — errors, backtracking, user frustration, repeated failures. Dispatches haiku judges to review transcripts and produces an aggregated friction summary. Can run standalone or as part of the analyze-sessions pipeline.
---

# Review Friction

Detect points of friction in agent coding sessions — moments where the user or agent struggled, wasted time, or encountered obstacles. Uses LLM judges (haiku) to review session transcripts and classify friction by type and impact.

## Scripts

- `review_friction.py` — review a single session for friction via `claude -p --model haiku`
- `dispatch_friction.py` — dispatch friction reviewers in parallel, with caching
- `merge_friction.py` — aggregate friction reviews into a summary

## Reference Files

| File | Read when |
| --- | --- |
| friction-prompt.md | Understanding what friction reviewers evaluate |

## Prerequisites

- Prepared session transcripts (from the analyze-sessions pipeline's prepare step, or from local-logs collect-and-summarize)
- Python 3.9+ available as `python3` on `PATH` (`uv` not required)
- `claude` CLI installed and authenticated

## Standalone Usage

When running outside the main analyze-sessions pipeline, point at a directory containing `prepared/` transcripts:

```shell
# Locate the skill's scripts directory (project-level or user-level tile install)
SCRIPTS_PATH="$(find "$(pwd)/.tessl/tiles" "$HOME/.tessl/tiles" -path "*/agent-quality/skills/review-friction/scripts/dispatch_friction.py" -print -quit 2>/dev/null | sed 's|/dispatch_friction.py||')"

# Dispatch friction reviewers
python3 "$SCRIPTS_PATH/dispatch_friction.py" \
  --dir <path-with-prepared-dir> \
  --model haiku

# Merge results
python3 "$SCRIPTS_PATH/merge_friction.py" \
  --dir <same-path>
```

## Pipeline Integration

When run as part of analyze-sessions, friction analysis runs in parallel with verifier analysis by default. Use `--no-friction` to disable it:

```shell
# Locate the analyze-sessions pipeline script
AUDIT_SCRIPTS="$(find "$(pwd)/.tessl/tiles" "$HOME/.tessl/tiles" -path "*/agent-quality/skills/analyze-sessions/scripts/run_pipeline.py" -print -quit 2>/dev/null | sed 's|/run_pipeline.py||')"
python3 "$AUDIT_SCRIPTS/run_pipeline.py"
```

Results are written to `friction/` alongside `verdicts/` in the run directory, then correlated in the synthesis step.
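Under those assumptions, a run directory after the pipeline completes might look roughly like this (illustrative sketch; only `friction/` and `verdicts/` are named above, the other entries are assumptions):

```
<run-dir>/
  prepared/              # session transcripts from the prepare step
  verdicts/              # per-session verifier verdicts
  friction/              # per-session friction reviews
  friction-summary.json  # aggregate written by merge_friction.py
```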

## Friction Types

| Type | Description |
| --- | --- |
| wrong_approach | Agent chose the wrong strategy/tool, user had to redirect |
| buggy_code | Agent wrote non-working code |
| over_investigation | Too many turns exploring when the answer was straightforward |
| misunderstood_request | Agent misinterpreted the user's request |
| premature_action | Started implementing before understanding requirements |
| tool_misuse | Used a tool incorrectly |
| repeated_failure | Failed at the same thing multiple times |
| ignored_instruction | Didn't follow an explicit user instruction |

## Impact Levels

| Impact | Description |
| --- | --- |
| minor | Resolved in 1-2 turns |
| moderate | Took 3-5 turns to resolve |
| major | 5+ turns wasted or task derailed |
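The rubric above can be sketched as a simple mapping from turns spent to impact level. This is illustrative only — the actual classification is made by the LLM judge, and a five-turn event is treated as moderate here since the table's moderate and major rows both mention 5 turns:

```python
def classify_impact(turns: int) -> str:
    """Approximate the impact rubric from the table above.

    Illustrative sketch; the skill asks an LLM judge to classify impact,
    so this mapping is an assumption, not the skill's actual logic.
    """
    if turns <= 2:
        return "minor"      # resolved in 1-2 turns
    if turns <= 5:
        return "moderate"   # took 3-5 turns to resolve
    return "major"          # more turns wasted, or task derailed
```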

## Output

`friction-summary.json` contains:

- Per-session friction events with type, description, turns, and impact
- Aggregate counts by type, impact, and agent
- Session outcome and satisfaction distributions
- Friction rate (sessions with friction / total sessions)
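A consumer of the summary might compute these figures like this. A minimal sketch: the field names (`sessions`, `friction_events`, `type`) are assumptions based on the descriptions above, not the skill's exact schema:

```python
import json

# Hypothetical friction-summary.json content, inlined for illustration.
summary = json.loads("""
{
  "sessions": [
    {"session_id": "a", "friction_events": [
      {"type": "buggy_code", "impact": "moderate", "turns": 4,
       "description": "Fix required two retries"}]},
    {"session_id": "b", "friction_events": []}
  ]
}
""")

sessions = summary["sessions"]
with_friction = [s for s in sessions if s["friction_events"]]

# Friction rate: sessions with at least one friction event / total sessions
friction_rate = len(with_friction) / len(sessions)
print(f"friction rate: {friction_rate:.0%}")  # → friction rate: 50%

# Aggregate event counts by friction type across all sessions
by_type: dict[str, int] = {}
for session in sessions:
    for event in session["friction_events"]:
        by_type[event["type"]] = by_type.get(event["type"], 0) + 1
print(by_type)  # → {'buggy_code': 1}
```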
