# Azure AI Content Safety

A comprehensive Python client library for Azure AI Content Safety that lets developers detect harmful user-generated and AI-generated content in applications and services. It provides text and image analysis APIs that classify content across four harm categories (hate, self-harm, sexual, and violence) at multiple severity levels, offers text blocklist management for screening custom terms, and supports both Azure Key Credential and Microsoft Entra ID authentication.

## Package Information

- **Package Name**: azure-ai-contentsafety
- **Package Type**: pypi
- **Language**: Python
- **Installation**: `pip install azure-ai-contentsafety`

## Core Imports

Synchronous clients:
```python
from azure.ai.contentsafety import ContentSafetyClient, BlocklistClient
```

Asynchronous clients:
```python
from azure.ai.contentsafety.aio import ContentSafetyClient, BlocklistClient
```
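
A minimal async sketch (hedged: this assumes the standard Azure SDK async pattern, where the `aio` client is used as an async context manager and its methods are awaited):

```python
import asyncio

from azure.ai.contentsafety.aio import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

async def main():
    # The async context manager closes the underlying transport on exit.
    async with ContentSafetyClient(
        endpoint="https://your-resource.cognitiveservices.azure.com",
        credential=AzureKeyCredential("your-api-key"),
    ) as client:
        response = await client.analyze_text(
            AnalyzeTextOptions(text="Some text to analyze")
        )
        for result in response.categories_analysis:
            print(f"Category: {result.category}, Severity: {result.severity}")

asyncio.run(main())
```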

Authentication:
```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient

# Using API key authentication
client = ContentSafetyClient(
    endpoint="https://your-resource.cognitiveservices.azure.com",
    credential=AzureKeyCredential("your-api-key")
)
```
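
For Microsoft Entra ID authentication, a token credential can be passed instead of a key. A sketch, assuming the separate `azure-identity` package is installed:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.contentsafety import ContentSafetyClient

# DefaultAzureCredential resolves a token credential from the environment
# (environment variables, managed identity, Azure CLI login, ...).
client = ContentSafetyClient(
    endpoint="https://your-resource.cognitiveservices.azure.com",
    credential=DefaultAzureCredential()
)
```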

## Basic Usage

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Initialize the client
client = ContentSafetyClient(
    endpoint="https://your-resource.cognitiveservices.azure.com",
    credential=AzureKeyCredential("your-api-key")
)

# Analyze text content
request = AnalyzeTextOptions(text="Some text to analyze")
response = client.analyze_text(request)

# Check results
for result in response.categories_analysis:
    print(f"Category: {result.category}, Severity: {result.severity}")

# Close the client
client.close()
```
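
Service failures surface as `azure-core` exceptions. A hedged sketch of the usual handling pattern:

```python
from azure.core.exceptions import HttpResponseError

try:
    response = client.analyze_text(AnalyzeTextOptions(text="Some text to analyze"))
except HttpResponseError as e:
    # Bad requests, auth failures, and throttling are reported here.
    if e.error:
        print(f"Error code: {e.error.code}")
        print(f"Error message: {e.error.message}")
    raise
```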

## Architecture

The Azure AI Content Safety client library is organized around two main client classes:

- **ContentSafetyClient**: Handles content analysis for text and images, providing severity scores across four harm categories
- **BlocklistClient**: Manages custom text blocklists for screening domain-specific prohibited terms

Both clients support:
- **Synchronous and Asynchronous Operations**: Full async/await support with corresponding aio clients
- **Flexible Authentication**: Azure Key Credential or Microsoft Entra ID token-based authentication
- **Enterprise Features**: Built-in error handling, logging, retry policies, and request/response pipeline customization
- **Context Management**: Automatic resource cleanup with Python context managers, as shown in the sketch below
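
For example, both clients work as context managers, so the underlying HTTP transport is closed automatically (a minimal sketch):

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# The with-block calls client.close() on exit, even if analysis raises.
with ContentSafetyClient(
    endpoint="https://your-resource.cognitiveservices.azure.com",
    credential=AzureKeyCredential("your-api-key"),
) as client:
    response = client.analyze_text(AnalyzeTextOptions(text="Some text to analyze"))
```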

## Capabilities

### Content Analysis

Analyze text and image content for harmful material across four categories (hate, self-harm, sexual, violence) with configurable severity levels and custom blocklist integration.

```python { .api }
def analyze_text(self, options: AnalyzeTextOptions, **kwargs) -> AnalyzeTextResult: ...
def analyze_image(self, options: AnalyzeImageOptions, **kwargs) -> AnalyzeImageResult: ...
```
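
A hedged image-analysis sketch; `ImageData` is assumed to be the wrapper for raw image bytes in `azure.ai.contentsafety.models`:

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://your-resource.cognitiveservices.azure.com",
    credential=AzureKeyCredential("your-api-key"),
)

# ImageData carries the raw image bytes; the SDK handles encoding.
with open("sample.jpg", "rb") as f:
    request = AnalyzeImageOptions(image=ImageData(content=f.read()))

response = client.analyze_image(request)
for result in response.categories_analysis:
    print(f"Category: {result.category}, Severity: {result.severity}")
```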

[Content Analysis](./content-analysis.md)

### Blocklist Management

Create and manage custom text blocklists to screen for domain-specific prohibited terms, with support for adding, updating, removing, and querying blocklist items.

```python { .api }
def create_or_update_text_blocklist(self, blocklist_name: str, options: TextBlocklist, **kwargs) -> TextBlocklist: ...
def add_or_update_blocklist_items(self, blocklist_name: str, options: AddOrUpdateTextBlocklistItemsOptions, **kwargs) -> AddOrUpdateTextBlocklistItemsResult: ...
def list_text_blocklists(self, **kwargs) -> Iterable[TextBlocklist]: ...
def delete_text_blocklist(self, blocklist_name: str, **kwargs) -> None: ...
```
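
An end-to-end sketch: create a blocklist, add an item, and reference it from text analysis. The `TextBlocklistItem` model and the `blocklist_names` keyword on `AnalyzeTextOptions` are assumptions inferred from the signatures above, not guaranteed by this document:

```python
from azure.ai.contentsafety import BlocklistClient, ContentSafetyClient
from azure.ai.contentsafety.models import (
    AddOrUpdateTextBlocklistItemsOptions,
    AnalyzeTextOptions,
    TextBlocklist,
    TextBlocklistItem,
)
from azure.core.credentials import AzureKeyCredential

endpoint = "https://your-resource.cognitiveservices.azure.com"
credential = AzureKeyCredential("your-api-key")

blocklist_client = BlocklistClient(endpoint=endpoint, credential=credential)

# Create (or update) the blocklist, then add a term to it.
name = "TestBlocklist"
blocklist_client.create_or_update_text_blocklist(
    blocklist_name=name,
    options=TextBlocklist(blocklist_name=name, description="Prohibited terms"),
)
blocklist_client.add_or_update_blocklist_items(
    blocklist_name=name,
    options=AddOrUpdateTextBlocklistItemsOptions(
        blocklist_items=[TextBlocklistItem(text="forbidden-term")]
    ),
)

# Reference the blocklist when analyzing text.
client = ContentSafetyClient(endpoint=endpoint, credential=credential)
response = client.analyze_text(
    AnalyzeTextOptions(text="contains forbidden-term", blocklist_names=[name])
)
if response.blocklists_match:
    print([match.blocklist_item_text for match in response.blocklists_match])
```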

[Blocklist Management](./blocklist-management.md)

## Core Types

```python { .api }
class ContentSafetyClient:
    def __init__(
        self,
        endpoint: str,
        credential: Union[AzureKeyCredential, TokenCredential],
        **kwargs
    ): ...
    def close(self) -> None: ...
    def __enter__(self) -> "ContentSafetyClient": ...
    def __exit__(self, *exc_details: Any) -> None: ...

class BlocklistClient:
    def __init__(
        self,
        endpoint: str,
        credential: Union[AzureKeyCredential, TokenCredential],
        **kwargs
    ): ...
    def close(self) -> None: ...
    def __enter__(self) -> "BlocklistClient": ...
    def __exit__(self, *exc_details: Any) -> None: ...

# Content categories
class TextCategory(str, Enum):
    HATE: str
    SELF_HARM: str
    SEXUAL: str
    VIOLENCE: str

class ImageCategory(str, Enum):
    HATE: str
    SELF_HARM: str
    SEXUAL: str
    VIOLENCE: str

# Output severity levels
class AnalyzeTextOutputType(str, Enum):
    FOUR_SEVERITY_LEVELS: str   # 0, 2, 4, 6
    EIGHT_SEVERITY_LEVELS: str  # 0, 1, 2, 3, 4, 5, 6, 7

class AnalyzeImageOutputType(str, Enum):
    FOUR_SEVERITY_LEVELS: str   # 0, 2, 4, 6
```
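
The output-type enums choose the severity granularity per request. A minimal sketch, assuming `output_type` is accepted as a field of `AnalyzeTextOptions`:

```python
from azure.ai.contentsafety.models import AnalyzeTextOptions, AnalyzeTextOutputType

# Request the finer-grained 0-7 scale; `client` is a ContentSafetyClient
# constructed as in Basic Usage above.
request = AnalyzeTextOptions(
    text="Some text to analyze",
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS,
)
response = client.analyze_text(request)
```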