# Nominal/Categorical Metrics

Statistical measures of association and agreement between categorical variables, useful for evaluating classification models and understanding relationships in categorical data.

## Capabilities

### Association Measures

Metrics for measuring the strength of association between two categorical variables.

```python { .api }
class CramersV(Metric):
    def __init__(
        self,
        num_classes: int,
        bias_correction: bool = True,
        **kwargs
    ): ...

class TheilsU(Metric):
    def __init__(
        self,
        num_classes: int,
        **kwargs
    ): ...

class TschuprowsT(Metric):
    def __init__(
        self,
        num_classes: int,
        bias_correction: bool = True,
        **kwargs
    ): ...

class PearsonsContingencyCoefficient(Metric):
    def __init__(
        self,
        num_classes: int,
        **kwargs
    ): ...
```
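
To make the statistic concrete, here is a minimal pure-Python sketch of Cramér's V computed from a contingency table, including the bias correction that the `bias_correction` flag toggles. The `cramers_v` helper and its list-of-lists table format are illustrative only, not the torchmetrics implementation:

```python
from math import sqrt

def cramers_v(table, bias_correction=True):
    """Cramér's V from a contingency table given as a list of count rows."""
    r, c = len(table), len(table[0])
    n = sum(sum(row) for row in table)
    row_sums = [sum(row) for row in table]
    col_sums = [sum(table[i][j] for i in range(r)) for j in range(c)]
    # Pearson chi-squared statistic against the independence hypothesis
    chi2 = 0.0
    for i in range(r):
        for j in range(c):
            expected = row_sums[i] * col_sums[j] / n
            if expected > 0:
                chi2 += (table[i][j] - expected) ** 2 / expected
    phi2 = chi2 / n
    if bias_correction:
        # Bergsma-style bias correction of phi^2 and the table dimensions
        phi2 = max(0.0, phi2 - (r - 1) * (c - 1) / (n - 1))
        r = r - (r - 1) ** 2 / (n - 1)
        c = c - (c - 1) ** 2 / (n - 1)
    denom = min(r - 1, c - 1)
    return sqrt(phi2 / denom) if denom > 0 else 0.0

print(cramers_v([[10, 0], [0, 10]]))  # perfect association, close to 1.0
print(cramers_v([[5, 5], [5, 5]]))    # exact independence, 0.0
```

The value ranges from 0 (no association) to 1 (perfect association) and is symmetric in its two variables.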

### Agreement Measures

Metrics for evaluating inter-rater agreement and consistency.

```python { .api }
class FleissKappa(Metric):
    def __init__(
        self,
        mode: str = "counts",
        **kwargs
    ): ...
```
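
With `mode="counts"`, the input is a (subjects × categories) matrix of rating counts. The following pure-Python sketch shows the underlying computation — observed pairwise agreement versus chance agreement; the `fleiss_kappa` helper is illustrative, not the torchmetrics implementation:

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa from a (subjects x categories) count matrix:
    ratings[i][j] = number of raters assigning category j to subject i.
    Assumes every subject was rated by the same number of raters."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    # Observed agreement: mean fraction of agreeing rater pairs per subject
    p_bar = sum(
        (sum(c * c for c in row) - n_raters) / (n_raters * (n_raters - 1))
        for row in ratings
    ) / n_subjects
    # Chance agreement from the marginal category proportions
    total = n_subjects * n_raters
    p_e = sum(
        (sum(row[j] for row in ratings) / total) ** 2
        for j in range(len(ratings[0]))
    )
    return (p_bar - p_e) / (1 - p_e)

# Same count matrix as in the Usage Examples section below
print(fleiss_kappa([[0, 0, 0, 0, 14], [0, 2, 6, 4, 2], [0, 0, 3, 5, 6]]))
```

Kappa is 1 for perfect agreement, 0 for agreement no better than chance, and negative when raters agree less than chance would predict.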

## Usage Examples

```python
import torch
from torchmetrics.nominal import CramersV, FleissKappa, TheilsU

# Association between two categorical variables
cramers_v = CramersV(num_classes=3)
theil_u = TheilsU(num_classes=3)

# Sample categorical data
preds = torch.randint(0, 3, (100,))   # Predicted categories
target = torch.randint(0, 3, (100,))  # True categories

# Compute association measures
cv_score = cramers_v(preds, target)
tu_score = theil_u(preds, target)

print(f"Cramer's V: {cv_score:.4f}")
print(f"Theil's U: {tu_score:.4f}")

# Inter-rater agreement
fleiss_kappa = FleissKappa(mode="counts")

# Rating counts: (subjects, categories)
# Each row records how many raters assigned each category to a subject
ratings = torch.tensor([
    [0, 0, 0, 0, 14],  # Subject 1: all 14 raters chose the last category
    [0, 2, 6, 4, 2],   # Subject 2: mixed ratings
    [0, 0, 3, 5, 6],   # Subject 3: mixed ratings
])

kappa_score = fleiss_kappa(ratings)
print(f"Fleiss' Kappa: {kappa_score:.4f}")
```

## Types

```python { .api }
CategoricalData = Tensor  # Integer categorical labels
RatingCounts = Tensor     # Inter-rater agreement count matrix
```