Socratic questioning protocol + user communication. MANDATORY for complex requests, new features, or unclear requirements. Includes progress reporting and error handling.
Install with Tessl CLI:

```shell
npx tessl i github:lchenrique/politron-ide --skill brainstorming69
```
Does it follow best practices?
If you maintain this skill, you can automatically optimize it using the Tessl CLI to improve its score:

```shell
npx tessl skill review --optimize ./path/to/skill
```
Discovery: 50%

Based on the skill's description, can an agent find and select it at the right time? Clear, specific descriptions lead to better discovery.
The description specifies when to use the skill but fails to explain what concrete actions it performs. 'Socratic questioning protocol' is jargon that helps neither users nor Claude understand the actual behavior. The triggers are reasonable, but without specific actions it is hard to know what the skill actually does.
Suggestions
Replace 'Socratic questioning protocol' with concrete actions like 'Asks clarifying questions to understand user intent, breaks down ambiguous requests into specific requirements'
Add natural trigger terms users would say, such as 'clarify', 'what do you mean', 'help me understand', 'figure out requirements'
Specify what 'user communication' and 'progress reporting' actually entail - e.g., 'Provides step-by-step status updates during long tasks, confirms understanding before proceeding'
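Taken together, the three suggestions above point toward a rewritten frontmatter description. A hypothetical sketch (illustrative wording, not taken from the skill itself):

```yaml
# SKILL.md frontmatter (hypothetical rewrite, not the skill's actual text)
name: brainstorming
description: >
  Asks clarifying questions to understand user intent and breaks ambiguous
  requests into specific requirements before work begins. Confirms
  understanding, then provides step-by-step status updates during long tasks.
  Use for complex requests, new features, or unclear requirements - e.g. when
  the user says "clarify", "what do you mean", or "help me figure out the
  requirements".
```

This keeps the original 'when' clause while replacing the jargon with concrete actions and natural trigger phrases.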
| Dimension | Reasoning | Score |
|---|---|---|
| Specificity | Names the domain ('Socratic questioning protocol', 'user communication') and some actions ('progress reporting', 'error handling'), but lacks concrete, specific actions: what does 'Socratic questioning' actually do? Which communication actions are performed? | 2 / 3 |
| Completeness | Has a 'when' clause ('MANDATORY for complex requests, new features, or unclear requirements'), but the 'what' is vague: it doesn't clearly explain what the skill does beyond abstract concepts like 'Socratic questioning protocol'. | 2 / 3 |
| Trigger Term Quality | Includes some relevant terms ('complex requests', 'new features', 'unclear requirements'), but 'Socratic questioning protocol' is technical jargon users wouldn't naturally say. Missing natural variations like 'clarify', 'ask questions', 'understand requirements'. | 2 / 3 |
| Distinctiveness / Conflict Risk | 'User communication' and 'error handling' are very broad and could overlap with many skills. However, 'Socratic questioning' and the specific triggers for 'complex requests' and 'unclear requirements' provide some distinctiveness. | 2 / 3 |
| Total | | 8 / 12 (Passed) |
Implementation: 70%

Reviews the quality of instructions and guidance provided to agents. Good implementation is clear, handles edge cases, and produces reliable results.
This skill provides a solid communication protocol framework with clear workflow structure and good progressive disclosure. However, it could be more actionable by including concrete filled-in examples rather than just templates, and could trim some explanatory content that Claude would already understand (like basic emoji meanings and obvious communication principles).
Suggestions
Add a concrete, filled-in example of the Question Format showing an actual brainstorming exchange (e.g., for a 'Build me a dashboard' request)
Remove or condense the Status Icons table - Claude understands emoji semantics without explicit definitions
Add a complete example dialogue showing the Socratic Gate in action from trigger through resolution
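To illustrate the last suggestion, a filled-in Socratic Gate exchange for a 'Build me a dashboard' request could read roughly as follows (invented dialogue, not taken from the skill's own documentation):

```markdown
> **User:** Build me a dashboard.
>
> 🛑 **Socratic Gate triggered** (unclear requirements) - stopping to ask:
> 1. What data should the dashboard show (metrics, sources, refresh rate)?
> 2. Who is the audience: internal developers or end users?
> 3. Any stack constraints (e.g. an existing React app or charting library)?
>
> *Waiting for answers before writing any code.*
>
> **User:** Internal devs; API latency and error rates; we use React.
>
> ✅ Understood. Scaffolding a React page with two charts next.
```

An example like this would show the full trigger → STOP → ASK → WAIT → resolution sequence in one place.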
| Dimension | Reasoning | Score |
|---|---|---|
| Conciseness | Reasonably efficient, with tables and structured formatting, but includes some redundant explanations (e.g., defining each status icon when Claude already knows emoji semantics) and could be tightened in places such as the 'Communication Principles' section, which restates obvious concepts. | 2 / 3 |
| Actionability | Provides structured guidance with clear patterns and formats, but lacks concrete, executable examples. The 'Question Format' and 'Error Response Pattern' are templates rather than filled-in examples showing actual usage. References the external file 'dynamic-questioning.md' for detailed content without providing inline examples. | 2 / 3 |
| Workflow Clarity | The Socratic Gate workflow is clearly sequenced with explicit steps (STOP → ASK → WAIT), the question-generation process has numbered steps, and the error-handling pattern provides a clear 4-step sequence. The mandatory enforcement is well signaled with visual markers. | 3 / 3 |
| Progressive Disclosure | Well organized with clear sections, appropriate use of tables for quick reference, and a properly signaled external reference to 'dynamic-questioning.md' for detailed content. The skill serves as an overview with one-level-deep references to detailed materials. | 3 / 3 |
| Total | | 10 / 12 (Passed) |
Validation: 90%

Checks the skill against the spec for correct structure and formatting. All validation checks must pass before discovery and implementation can be scored.

Validation for skill structure: 10 / 11 passed
| Criteria | Description | Result |
|---|---|---|
| allowed_tools_field | 'allowed-tools' contains unusual tool name(s) | Warning |
| Total | | 10 / 11 (Passed) |
If you maintain this skill, you can claim it as your own. Once claimed, you can manage eval scenarios, bundle related skills, attach documentation or rules, and ensure cross-agent compatibility.