Assess your AI fluency using Anthropic's 4D framework (Dakan, Feller & Anthropic, 2025). Scans Claude Code sessions, runs LLM-based behavior classification on all messages, asks a self-assessment questionnaire for 6 unobservable behaviors, and generates a visual HTML report with scores and actionable feedback. Use when "assess fluency", "AI fluency", "fluency report", "fluency assessment", "4D framework", or "how AI fluent am I".
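The description above outlines a five-part workflow: scan sessions, classify behaviors, ask a short questionnaire, score, and report. As a rough illustration only, here is a runnable toy sketch of that shape in Python — every function name and the scoring rule are hypothetical placeholders, not the skill's actual API:

```python
# Toy sketch of the assessment pipeline described above.
# All names and the scoring rule are hypothetical, not the skill's real API.

def scan_sessions(messages):
    # Stand-in for reading Claude Code session logs; drops empty entries.
    return [m for m in messages if m.strip()]

def classify_messages(messages):
    # Stand-in for LLM-based behavior classification of observable behaviors.
    return {"delegation_observed": len(messages) > 0}

def ask_questionnaire(answers):
    # The skill asks a questionnaire for 6 unobservable behaviors;
    # here each answer is just a 0/1 self-report.
    return {f"q{i + 1}": a for i, a in enumerate(answers)}

def score(labels, questionnaire):
    # Combine observed classifications with self-reported answers.
    observed = sum(1 for v in labels.values() if v)
    self_reported = sum(questionnaire.values())
    return observed + self_reported

labels = classify_messages(scan_sessions(["hello", ""]))
answers = ask_questionnaire([1, 0, 1, 1, 0, 1])
print(score(labels, answers))  # → 5
```

The real skill renders an HTML report from the scores; that step is omitted here since the report format lives in REPORT-SPEC.md.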
Overall score: 100 (100%)

| Category | Status | Notes |
|---|---|---|
| Quality (Does it follow best practices?) | Passed | No known issues |
| Impact | Pending | No eval scenarios have been run |
## Discovery: 100%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
This is an excellent skill description that hits all the marks. It provides specific concrete actions, includes a comprehensive 'Use when' clause with natural trigger terms, references a specific named framework for distinctiveness, and uses proper third-person voice throughout. The description is detailed yet concise.
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Lists multiple specific, concrete actions: 'Scans Claude Code sessions', 'runs LLM-based behavior classification on all messages', 'asks a self-assessment questionnaire for 6 unobservable behaviors', and 'generates a visual HTML report with scores and actionable feedback'. | 3 / 3 |
| Completeness | Clearly answers both what (assess AI fluency using the 4D framework, scan sessions, run classification, generate a report) and when (an explicit 'Use when' clause with multiple trigger phrases). | 3 / 3 |
| Trigger Term Quality | Includes natural trigger terms users would actually say: 'assess fluency', 'AI fluency', 'fluency report', 'fluency assessment', '4D framework', 'how AI fluent am I'. These cover both formal and conversational phrasings. | 3 / 3 |
| Distinctiveness / Conflict Risk | Highly distinctive, with a specific domain (AI fluency assessment), a named framework (Anthropic's 4D framework, with citation), and unique triggers like '4D framework' and 'fluency assessment' that are unlikely to conflict with other skills. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
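The rubric above scores four dimensions from 0 to 3 and sums them into a 12-point total. A small sketch of that arithmetic (the percentage conversion is an assumption about how the headline score is derived, not something the report documents):

```python
# Each dimension is scored 0-3; the section total is their sum out of 12.
dimensions = {
    "Specificity": 3,
    "Completeness": 3,
    "Trigger Term Quality": 3,
    "Distinctiveness / Conflict Risk": 3,
}

total = sum(dimensions.values())
max_total = 3 * len(dimensions)          # 4 dimensions x 3 points each
percent = 100 * total // max_total       # assumed headline-percentage rule

print(f"{total} / {max_total} ({percent}%)")  # → 12 / 12 (100%)
```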
## Implementation: 100%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This is a well-crafted skill that demonstrates excellent token efficiency while providing comprehensive, actionable guidance. The 5-step workflow is clearly sequenced with concrete commands and code examples. Progressive disclosure is handled well with appropriate references to supporting documentation for detailed specs.
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | The content is lean and efficient, assuming Claude's competence. No unnecessary explanations of what AI fluency is or how LLMs work; it jumps straight to actionable instructions. | 3 / 3 |
| Actionability | Provides fully executable bash commands with clear flags, concrete Python code for saving questionnaire responses, and specific scoring criteria. Copy-paste ready throughout. | 3 / 3 |
| Workflow Clarity | A clear 5-step sequence with explicit checkpoints. Each step has defined inputs and outputs, and the workflow progresses logically from data collection through classification, questionnaire, and scoring to report generation. | 3 / 3 |
| Progressive Disclosure | Excellent structure, with a concise overview in SKILL.md and clear one-level-deep references to REPORT-SPEC.md and FRAMEWORK.md for detailed specifications. Content is appropriately split. | 3 / 3 |
| Total | | 12 / 12 (Passed) |
## Validation: 100%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.
11 / 11 checks passed. Validation for skill structure reported no warnings or errors.