Production-grade dlib face_recognition toolkit: piecewise confidence formula, enrollment quality diagnostics, and producer-side persistence for flicker suppression.
Use this skill any time you are mapping a face_recognition distance to a user-facing
confidence score. The textbook formula looks right on paper and looks broken on stage.
| Match quality | Distance |
|---|---|
| Strong match | 0.30 – 0.40 |
| Borderline | 0.40 – 0.55 |
| Reject | > 0.60 |
Library default tolerance = 0.6.
```python
def confidence(distance: float) -> float:
    if distance <= 0.30:
        return 1.0
    if distance >= 0.60:
        return 0.0
    return (0.60 - distance) / 0.30
```

A strong match at d=0.38 gives 0.73 — feels right on a meter. The naive
`1 - distance/tolerance` at the same distance gives 0.37 and the demo looks broken.
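To see the gap across the whole working range, here is a quick side-by-side sketch. It repeats the piecewise mapping so the block runs standalone; `naive_confidence` is an illustrative helper for the textbook formula, not part of the toolkit:

```python
def confidence(distance: float) -> float:
    """Piecewise mapping: clamp at 0.30 / 0.60, linear in between."""
    if distance <= 0.30:
        return 1.0
    if distance >= 0.60:
        return 0.0
    return (0.60 - distance) / 0.30

def naive_confidence(distance: float, tolerance: float = 0.6) -> float:
    """Textbook 1 - d/tolerance, clamped to [0, 1]."""
    return min(1.0, max(0.0, 1.0 - distance / tolerance))

# At d=0.38 the piecewise mapping reads 0.73 while the naive one reads 0.37.
for d in (0.30, 0.38, 0.45, 0.55):
    print(f"d={d:.2f}  piecewise={confidence(d):.2f}  naive={naive_confidence(d):.2f}")
```

The piecewise version spends its full 0–1 range on the distances that actually occur at runtime, which is why it reads sensibly on a meter.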
If your runtime distances consistently land above 0.40 on what should be strong
matches, the formula isn't the problem — your enrollment is. Enrollment
taken at different framing/lighting/camera than runtime produces a loose cloud,
and distances inflate. See the face-recognition-enrollment skill in this
plugin for a quality checklist and diagnostic (intra-class distance target
0.25–0.40 mean, face coverage 60–75%, Laplacian blur ≥ 150).
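The intra-class distance part of that checklist can be sketched in a few lines (hypothetical helper name; pass it one subject's 128-d enrollment encodings as returned by `face_recognition.face_encodings`):

```python
import math
from itertools import combinations

def intra_class_mean_distance(encodings):
    """Mean pairwise Euclidean distance across one subject's enrollment
    encodings. Per the checklist, a healthy enrollment lands around a
    0.25-0.40 mean; higher suggests framing/lighting/camera drift
    between captures."""
    pairs = list(combinations(encodings, 2))
    return sum(math.dist(a, b) for a, b in pairs) / len(pairs)
```

Run this once at enrollment time, before the encodings are persisted, so loose clouds are caught while the subject is still in front of the camera.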
Do not raise the strong-match threshold to 0.40 or 0.45 as a workaround for bad
enrollment. That masks the real problem and the mapping will break across subjects.
face_recognition_models still uses pkg_resources, which setuptools removed in 82+.
On Python 3.14, pin:
```
setuptools==75.8.0
```

Do this in the project's requirements file before `pip install face_recognition`.
Otherwise the import will crash with `ModuleNotFoundError: No module named 'pkg_resources'`.
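In shell terms, the order of operations looks like this (version per the pin above; adjust the pip invocation to your environment):

```shell
# Pin setuptools first so face_recognition_models can still import pkg_resources,
# then install face_recognition against the pinned version.
pip install "setuptools==75.8.0"
pip install face_recognition
```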
Prefer `scripts/confidence.py` over re-deriving the mapping. Run `confidence(0.38)` and verify the result is approximately 0.73; if it is not, the mapping is misconfigured or the wrong formula is in use. See the rule file face-recognition-calibration-rules for a quick reminder card.