Validate AI/ML models and datasets for bias, fairness, and ethical concerns. Use when auditing AI systems for ethical compliance, fairness assessment, or bias detection. Trigger with phrases like "evaluate model fairness", "check for bias", or "validate AI ethics".
Validate AI/ML models and datasets for bias, fairness, and ethical compliance using quantitative fairness metrics and structured audit workflows.
Install Fairlearn (`pip install fairlearn`) for fairness metrics and mitigation algorithms (e.g., `ExponentiatedGradient` from Fairlearn), and AIF360 (`pip install aif360`) for comprehensive bias analysis.

| Error | Cause | Solution |
|---|---|---|
| Insufficient group sample size | Fewer than 30 observations in a demographic group | Aggregate related subgroups; use bootstrap confidence intervals; flag metric as unreliable |
| Missing sensitive attributes | Protected attribute columns absent from dataset | Apply proxy detection via correlated features; request attribute access under data governance approval |
| Conflicting fairness criteria | Demographic parity and equalized odds contradict | Document the impossibility theorem trade-off; prioritize the metric most aligned with the deployment context |
| Data quality failures | Inconsistent encoding or null values in attribute columns | Standardize categorical encodings; impute or exclude nulls; validate with schema checks before analysis |
| Model output format mismatch | Predictions not in expected probability or binary format | Convert logits to probabilities via sigmoid; binarize at the decision threshold before metric computation (see the sketch after this table) |
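A minimal sketch of that last fix, assuming raw logits from a binary classifier; the helper name and threshold value are illustrative, not part of the skill:

```python
# Convert raw logits to {0, 1} predictions before computing fairness metrics.
import numpy as np

def to_binary_predictions(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Map raw logits to binary decisions via sigmoid + thresholding."""
    probabilities = 1.0 / (1.0 + np.exp(-logits))    # sigmoid: logits -> probabilities
    return (probabilities >= threshold).astype(int)  # binarize at the decision threshold

print(to_binary_predictions(np.array([-2.0, 0.1, 3.5])))  # -> [0 1 1]
```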
Scenario 1: Hiring Model Audit -- Validate a resume-screening classifier for gender and age bias. Compute demographic parity across male/female groups and age buckets (18-30, 31-50, 51+). Apply the four-fifths rule. Finding: female selection rate at 0.72 of male rate (critical severity). Recommend reweighting training samples and adjusting the decision threshold.
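A sketch of the Scenario 1 parity check using Fairlearn; the toy labels, decisions, and gender values below are illustrative stand-ins for the real screening dataset:

```python
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_ratio

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])                # ground-truth "qualified" labels
y_pred = np.array([1, 0, 1, 1, 0, 0, 0, 1])                # screener decisions (1 = advance)
gender = np.array(["F", "F", "M", "M", "F", "F", "M", "M"])  # sensitive attribute

# Selection rate per gender group
rates = MetricFrame(metrics=selection_rate, y_true=y_true, y_pred=y_pred,
                    sensitive_features=gender)
print(rates.by_group)

# Four-fifths rule: the lowest group's selection rate should be >= 0.8x the highest
ratio = demographic_parity_ratio(y_true, y_pred, sensitive_features=gender)
if ratio < 0.8:
    print(f"Adverse impact: selection-rate ratio {ratio:.2f} fails the four-fifths rule")
```

The same pattern, with age buckets (18-30, 31-50, 51+) passed as `sensitive_features`, covers the age dimension of the audit.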
Scenario 2: Credit Scoring Fairness -- Assess a credit approval model for racial disparate impact. Calculate equalized odds (TPR and FPR) across racial groups. Finding: FPR for Group A is 2.1x Group B (high severity). Recommend in-processing constraint using ExponentiatedGradient with FalsePositiveRateParity.
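A sketch of the Scenario 2 audit-and-mitigation flow; the synthetic features, approval labels, and group assignments are illustrative stand-ins for the credit dataset, and any scikit-learn-compatible estimator can replace the logistic regression:

```python
import numpy as np
from fairlearn.metrics import MetricFrame, false_positive_rate, true_positive_rate
from fairlearn.reductions import ExponentiatedGradient, FalsePositiveRateParity
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))                                    # stand-in credit features
y = (X[:, 0] + rng.normal(scale=0.5, size=400) > 0).astype(int)  # approval labels
race = rng.choice(["A", "B"], size=400)                          # sensitive attribute

# Audit: equalized-odds components (TPR and FPR) per racial group
baseline = LogisticRegression(max_iter=1000).fit(X, y)
rates = MetricFrame(metrics={"TPR": true_positive_rate, "FPR": false_positive_rate},
                    y_true=y, y_pred=baseline.predict(X), sensitive_features=race)
print(rates.by_group)

# Mitigation: constrain false-positive-rate parity during training
mitigator = ExponentiatedGradient(estimator=LogisticRegression(max_iter=1000),
                                  constraints=FalsePositiveRateParity())
mitigator.fit(X, y, sensitive_features=race)
mitigated_pred = mitigator.predict(X)
```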
Scenario 3: Healthcare Risk Prediction -- Evaluate a patient risk model for age and socioeconomic bias. Compute calibration curves per group. Finding: model overestimates risk for low-income patients by 15%. Recommend recalibration using Platt scaling per subgroup with post-deployment monitoring for fairness drift.
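A sketch of the Scenario 3 per-group calibration check and Platt-style recalibration; the simulated risk scores, outcomes, and income groups are illustrative assumptions, as is the `calibrators` dictionary:

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
p_val = rng.uniform(0.05, 0.95, size=600)                # held-out predicted risk scores
income = rng.choice(["low", "high"], size=600)           # socioeconomic group label
true_p = np.where(income == "low", 0.85 * p_val, p_val)  # simulate overestimated low-income risk
y_val = rng.binomial(1, true_p)                          # observed outcomes

calibrators = {}
for group in np.unique(income):
    mask = income == group
    frac_pos, mean_pred = calibration_curve(y_val[mask], p_val[mask], n_bins=10)
    print(group, "observed:", frac_pos.round(2), "predicted:", mean_pred.round(2))
    # Platt scaling: sigmoid fit of observed outcomes on raw scores, per subgroup
    calibrators[group] = LogisticRegression().fit(p_val[mask].reshape(-1, 1), y_val[mask])

# Recalibrated risk for new patients in a group:
# calibrators["low"].predict_proba(new_scores.reshape(-1, 1))[:, 1]
```

Post-deployment, the same per-group calibration curves can be recomputed on fresh data to monitor for fairness drift.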