# Metrics and Evaluation

Tools for evaluating model performance including confusion matrices and mean average precision calculations.

## Capabilities

### Detection Metrics

```python { .api }
class ConfusionMatrix:
    """
    Calculate and visualize confusion matrix for classification/detection results.

    Args:
        matrix (np.ndarray): Confusion matrix data
        classes (list[str]): Class names for labeling
    """

class MeanAveragePrecision:
    """
    Calculate mean Average Precision (mAP) for object detection evaluation.

    Supports COCO-style evaluation with multiple IoU thresholds.
    """

    def update(self, predictions: Detections, targets: Detections) -> None:
        """Update metric with prediction and ground truth pairs."""

    def compute(self) -> dict:
        """Compute final mAP scores and per-class metrics."""
```
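
For reference, COCO-style evaluation computes average precision at ten IoU thresholds from 0.50 to 0.95 in steps of 0.05 and then averages them; the `map_50` and `map` values printed in the usage example below correspond to the first threshold and to that average. The following is a minimal NumPy sketch of only the averaging step, with placeholder AP values that are not produced by the library:

```python
import numpy as np

# Ten COCO IoU thresholds: 0.50, 0.55, ..., 0.95
iou_thresholds = np.arange(0.50, 1.00, 0.05)

# Placeholder per-threshold AP values; in practice MeanAveragePrecision
# derives these from the accumulated prediction/target pairs.
ap_per_threshold = np.array([0.72, 0.70, 0.67, 0.63, 0.58, 0.52, 0.44, 0.35, 0.22, 0.08])
assert len(ap_per_threshold) == len(iou_thresholds)

map_50 = ap_per_threshold[0]         # AP at IoU 0.50 ("map_50")
map_50_95 = ap_per_threshold.mean()  # mean over all ten thresholds ("map")
print(f"mAP@0.5: {map_50:.3f}, mAP@0.5:0.95: {map_50_95:.3f}")
```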

## Usage Example

```python
import supervision as sv

# Initialize metrics
confusion_matrix = sv.ConfusionMatrix()
map_metric = sv.MeanAveragePrecision()

# Evaluate predictions against ground truth
# (evaluation_data yields (sv.Detections, sv.Detections) pairs)
for predictions, targets in evaluation_data:
    map_metric.update(predictions, targets)

# Get results
map_results = map_metric.compute()
print(f"mAP@0.5: {map_results['map_50']}")
print(f"mAP@0.5:0.95: {map_results['map']}")
```
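
The example above only exercises `MeanAveragePrecision`. For `ConfusionMatrix`, the documented `matrix` (np.ndarray) and `classes` (list[str]) fields are enough to derive per-class statistics. The sketch below works directly on such a matrix with hypothetical values rather than going through the library, and it assumes one common layout (rows index ground-truth classes, columns index predictions), which is not specified here:

```python
import numpy as np

# Hypothetical confusion matrix for two classes.
# Assumed layout: rows = ground truth, columns = predictions.
classes = ["cat", "dog"]
matrix = np.array([
    [42, 3],   # true "cat": 42 predicted cat, 3 predicted dog
    [5, 38],   # true "dog": 5 predicted cat, 38 predicted dog
])

# Per-class recall (diagonal over row sums) and precision (diagonal over column sums).
recall = matrix.diagonal() / matrix.sum(axis=1)
precision = matrix.diagonal() / matrix.sum(axis=0)
for name, r, p in zip(classes, recall, precision):
    print(f"{name}: recall {r:.2f}, precision {p:.2f}")
```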