tessl-labs/audit-logs

Collect and normalize agent logs, discover installed verifiers, and dispatch LLM judges to evaluate adherence. Produces per-session verdicts and aggregated reports.

Quality: 90% (Does it follow best practices?)
Impact: 96%, 3.09x (average score across 3 eval scenarios)
Security (by Snyk): Passed, no known issues
SKILL.md (skills/friction-review/)

name: friction-review
description: Detect friction points in agent sessions — errors, backtracking, user frustration, repeated failures. Dispatches haiku judges to review transcripts and produces an aggregated friction summary. Can run standalone or as part of the audit-logs pipeline.

Friction Review

Detect points of friction in agent coding sessions — moments where the user or agent struggled, wasted time, or encountered obstacles. Uses LLM judges (haiku) to review session transcripts and classify friction by type and impact.

Scripts

  • review_friction.py — review a single session for friction via claude -p --model haiku
  • dispatch_friction.py — dispatch friction reviewers in parallel with caching
  • merge_friction.py — aggregate friction reviews into a summary
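
As a rough sketch of how a single-session review could shell out to the claude CLI in print mode, per the script list above (the function names and prompt wiring here are illustrative assumptions, not the actual contents of review_friction.py):

```python
import subprocess
from pathlib import Path

def build_review_command(transcript: Path, model: str = "haiku") -> list[str]:
    # Compose a `claude -p <prompt> --model haiku` invocation: -p runs
    # non-interactively and prints the response to stdout.
    prompt = (
        "Review this agent session transcript for friction points:\n\n"
        + transcript.read_text()
    )
    return ["claude", "-p", prompt, "--model", model]

def review_session(transcript: Path) -> str:
    # Run the haiku judge and return its raw verdict text.
    result = subprocess.run(
        build_review_command(transcript),
        capture_output=True, text=True, check=True,
    )
    return result.stdout
```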

Reference Files

| File | Read when |
| --- | --- |
| friction-prompt.md | Understanding what friction reviewers evaluate |

Prerequisites

  • Prepared session transcripts (from the audit-logs pipeline's prepare step, or from local-logs collect-and-summarize)
  • claude CLI installed and authenticated

Standalone Usage

When running outside the main audit-logs pipeline, point at a directory containing prepared/ transcripts:

SCRIPTS_PATH="$(find "$(pwd)/.tessl/tiles" "$HOME/.tessl/tiles" -path "*/audit-logs/skills/friction-review/scripts/dispatch_friction.py" -print -quit 2>/dev/null | sed 's|/dispatch_friction.py||')"

# Dispatch friction reviewers
uv run python3 "$SCRIPTS_PATH/dispatch_friction.py" \
  --dir <path-with-prepared-dir> \
  --model haiku

# Merge results
uv run python3 "$SCRIPTS_PATH/merge_friction.py" \
  --dir <same-path>

Pipeline Integration

When run as part of audit-logs, friction analysis runs in parallel with verifier analysis by default. Use --no-friction to disable it:

AUDIT_SCRIPTS="$(find "$(pwd)/.tessl/tiles" "$HOME/.tessl/tiles" -path "*/audit-logs/skills/audit-logs/scripts/run_pipeline.py" -print -quit 2>/dev/null | sed 's|/run_pipeline.py||')"
uv run python3 "$AUDIT_SCRIPTS/run_pipeline.py"

Results are written to friction/ alongside verdicts/ in the run directory, then correlated in the synthesis step.
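
A minimal sketch of the correlation idea, assuming friction/ and verdicts/ each hold one JSON file per session and sessions match by filename stem (the layout and field names are assumptions, not the pipeline's actual schema):

```python
import json
from pathlib import Path

def correlate(run_dir: Path) -> dict[str, dict]:
    # Join per-session friction reviews with verifier verdicts,
    # keyed by the session's filename stem.
    sessions: dict[str, dict] = {}
    for sub in ("friction", "verdicts"):
        for f in (run_dir / sub).glob("*.json"):
            sessions.setdefault(f.stem, {})[sub] = json.loads(f.read_text())
    return sessions
```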

Friction Types

| Type | Description |
| --- | --- |
| wrong_approach | Agent chose the wrong strategy/tool; user had to redirect |
| buggy_code | Agent wrote non-working code |
| over_investigation | Too many turns exploring when the answer was straightforward |
| misunderstood_request | Agent misinterpreted the user's request |
| premature_action | Started implementing before understanding requirements |
| tool_misuse | Used a tool incorrectly |
| repeated_failure | Failed at the same thing multiple times |
| ignored_instruction | Didn't follow an explicit user instruction |

Impact Levels

| Impact | Description |
| --- | --- |
| minor | Resolved in 1-2 turns |
| moderate | Took 3-5 turns to resolve |
| major | 5+ turns wasted or task derailed |
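
The table reads as a simple bucketing by turns spent. A sketch (the function name is ours; note the table's "3-5" and "5+" overlap at 5, which we resolve here in favor of moderate as an assumption):

```python
def classify_impact(turns: int) -> str:
    # Bucket a friction event by the turns spent resolving it,
    # following the impact table above.
    if turns <= 2:
        return "minor"
    if turns <= 5:
        return "moderate"  # assumption: 5 counts as moderate, not major
    return "major"
```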

Output

friction-summary.json contains:

  • Per-session friction events with type, description, turns, and impact
  • Aggregate counts by type, impact, and agent
  • Session outcome and satisfaction distributions
  • Friction rate (sessions with friction / total)
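
As a sketch of the aggregation the merge step performs (the field names here are illustrative, not the actual friction-summary.json schema):

```python
from collections import Counter

def summarize(sessions: list[dict]) -> dict:
    # Aggregate per-session friction events into counts by type and impact,
    # plus the friction rate: sessions with >=1 event / total sessions.
    by_type: Counter = Counter()
    by_impact: Counter = Counter()
    with_friction = 0
    for s in sessions:
        events = s.get("friction_events", [])
        with_friction += bool(events)
        for e in events:
            by_type[e["type"]] += 1
            by_impact[e["impact"]] += 1
    return {
        "by_type": dict(by_type),
        "by_impact": dict(by_impact),
        "friction_rate": with_friction / len(sessions) if sessions else 0.0,
    }
```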
