Collects and normalizes agent logs, discovers installed verifiers, and dispatches LLM judges to evaluate adherence. Produces per-session verdicts and aggregated reports.
Your team maintains a local tile at tiles/data-pipeline/ that helps agents manage ETL workflows. Recently the team lead noticed that agents aren't consistently following the skill's instructions — they sometimes use the wrong library, skip required steps, or produce logs in the wrong format. The tile is installed locally (its tessl.json shows "source": "file:tiles/data-pipeline/").
You've been asked to create verifiers for the ingest skill so the audit pipeline can automatically check whether agents are following the skill's rules. The skill has clear preferences around which tools to use, how to log progress, and what to avoid.
Create all verifier files in the correct location for a locally-sourced tile. The verifiers should cover every instruction in the skill below: tool preferences, workflow requirements, naming rules, and prohibitions. Include an activation verifier to track whether the skill is loaded.
Produce a verifier-summary.md file listing the verifiers you created and where they live.
The following files are provided as inputs. Extract them before beginning.
=============== FILE: tiles/data-pipeline/tile.json ===============
{
  "name": "acme/data-pipeline",
  "version": "0.3.1",
  "skills": {
    "ingest": {
      "path": "skills/ingest/SKILL.md"
    }
  }
}
=============== FILE: tiles/data-pipeline/skills/ingest/SKILL.md ===============
Load data from external sources into the data warehouse pipeline.
Always use pandas for data loading and transformation. Do not use polars, dask, or raw Python file I/O for tabular data.
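For instance, a compliant loader keeps the whole path in pandas; the CSV columns here are illustrative, not part of the skill:

```python
import io

import pandas as pd

# Load tabular data with pandas -- never polars, dask, or raw file I/O.
csv_data = io.StringIO("user_id,amount\n1,9.99\n2,4.50\n")
df = pd.read_csv(csv_data)

# Transformations also stay in pandas.
df["amount_cents"] = (df["amount"] * 100).round().astype(int)
```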
Before writing any data to the warehouse, call validate_schema(df, schema) from pipeline.validation. Never skip schema validation even for small files.
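The intended call order looks like the sketch below; `validate_schema` and `write_to_warehouse` are stand-ins for the real `pipeline.validation` and warehouse APIs, and the schema dict is illustrative:

```python
import pandas as pd

def validate_schema(df, schema):
    """Stand-in for pipeline.validation.validate_schema."""
    missing = set(schema) - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")

def write_to_warehouse(df):
    """Stand-in for the real warehouse writer."""
    return len(df)

df = pd.DataFrame({"user_id": [1, 2], "amount": [9.99, 4.50]})
schema = {"user_id": "int64", "amount": "float64"}

# Validation always precedes the write, even for tiny frames.
validate_schema(df, schema)
rows_written = write_to_warehouse(df)
```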
Log every ingest operation using structlog. Include both source and row_count fields in every log call. Do not use Python's built-in logging module.
Output parquet files must follow the pattern {source}_{YYYYMMDD}.parquet. Never use underscores in the source portion — use hyphens instead (e.g. user-events_20240315.parquet).
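One way to build and sanity-check a compliant filename; the helper and regex are illustrations of the rule, not part of the skill:

```python
import re
from datetime import date

def parquet_name(source: str, day: date) -> str:
    # Hyphens only in the source portion; the date is YYYYMMDD.
    if "_" in source:
        raise ValueError("use hyphens, not underscores, in the source name")
    return f"{source}_{day:%Y%m%d}.parquet"

name = parquet_name("user-events", date(2024, 3, 15))
# Matches the required {source}_{YYYYMMDD}.parquet pattern.
assert re.fullmatch(r"[a-z0-9-]+_\d{8}\.parquet", name)
```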
Wrap all external API calls in a try/except block. On failure, log the error with log.error() and raise a PipelineError; never silently swallow exceptions.