# Content Analysis

Comprehensive content analysis capabilities for detecting harmful material in text and images. The Azure AI Content Safety service analyzes content across four harm categories (hate, self-harm, sexual, violence) and returns severity scores to help applications make content moderation decisions.

## Capabilities

### Text Analysis

Analyzes text content for potentially harmful material across four categories with configurable severity levels and blocklist integration.

```python { .api }
def analyze_text(
    self,
    options: Union[AnalyzeTextOptions, dict, IO[bytes]],
    **kwargs
) -> AnalyzeTextResult:
    """
    Analyze text for potentially harmful content.

    Parameters:
    - options: Text analysis request containing text and analysis options
    - content_type: Body parameter content-type (default: "application/json")
    - stream: Whether to stream the response (default: False)

    Returns:
        AnalyzeTextResult: Analysis results with category scores and blocklist matches

    Raises:
        HttpResponseError: On analysis failure or service errors
    """
```

**Usage Example:**

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions, TextCategory, AnalyzeTextOutputType
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://your-resource.cognitiveservices.azure.com",
    credential=AzureKeyCredential("your-api-key")
)

# Basic text analysis
request = AnalyzeTextOptions(
    text="Text content to analyze for harmful material"
)
result = client.analyze_text(request)

# Print category results
for analysis in result.categories_analysis:
    print(f"Category: {analysis.category}, Severity: {analysis.severity}")

# Advanced text analysis with custom options
advanced_request = AnalyzeTextOptions(
    text="Text to analyze",
    categories=[TextCategory.HATE, TextCategory.VIOLENCE],
    output_type=AnalyzeTextOutputType.EIGHT_SEVERITY_LEVELS,
    blocklist_names=["my-custom-blocklist"],
    halt_on_blocklist_hit=True
)
result = client.analyze_text(advanced_request)

client.close()
```
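
A common next step is turning the returned scores into a moderation decision. The sketch below is one minimal approach, not a service recommendation: the threshold value and helper name are illustrative, and it uses the `blocklists_match` and `categories_analysis` fields documented under Response Models.

```python
REJECT_THRESHOLD = 2  # hypothetical policy threshold; tune per application

def should_reject(result) -> bool:
    # Reject on any custom-blocklist hit (populated when blocklist_names was set)
    if result.blocklists_match:
        return True
    # Reject when any harm category meets or exceeds the policy threshold
    return any(
        analysis.severity is not None and analysis.severity >= REJECT_THRESHOLD
        for analysis in result.categories_analysis
    )

if should_reject(result):
    print("Content rejected by moderation policy")
```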

### Image Analysis

Analyzes image content for potentially harmful visual material across the same four categories as text analysis.

```python { .api }
def analyze_image(
    self,
    options: Union[AnalyzeImageOptions, dict, IO[bytes]],
    **kwargs
) -> AnalyzeImageResult:
    """
    Analyze image for potentially harmful content.

    Parameters:
    - options: Image analysis request containing image data and analysis options
    - content_type: Body parameter content-type (default: "application/json")
    - stream: Whether to stream the response (default: False)

    Returns:
        AnalyzeImageResult: Analysis results with category scores

    Raises:
        HttpResponseError: On analysis failure or service errors
    """
```

**Usage Example:**

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData, ImageCategory
from azure.core.credentials import AzureKeyCredential
import base64

client = ContentSafetyClient(
    endpoint="https://your-resource.cognitiveservices.azure.com",
    credential=AzureKeyCredential("your-api-key")
)

# Analyze image from file (base64 encoded bytes)
with open("image.jpg", "rb") as image_file:
    image_content = base64.b64encode(image_file.read())
image_data = ImageData(content=image_content)

request = AnalyzeImageOptions(
    image=image_data,
    categories=[ImageCategory.SEXUAL, ImageCategory.VIOLENCE]
)
result = client.analyze_image(request)

# Print results
for analysis in result.categories_analysis:
    print(f"Category: {analysis.category}, Severity: {analysis.severity}")

# Analyze image from URL
url_request = AnalyzeImageOptions(
    image=ImageData(blob_url="https://example.com/image.jpg")
)
result = client.analyze_image(url_request)

client.close()
```
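
The examples above close the client explicitly. Like other Azure SDK Python clients, `ContentSafetyClient` should also work as a context manager, which releases the connection even if analysis raises; a minimal sketch of the same flow under that assumption:

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# close() is called automatically when the with-block exits
with ContentSafetyClient(
    endpoint="https://your-resource.cognitiveservices.azure.com",
    credential=AzureKeyCredential("your-api-key"),
) as client:
    result = client.analyze_text(AnalyzeTextOptions(text="Text to analyze"))
```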

## Request Models

```python { .api }
class AnalyzeTextOptions:
    """Text analysis request parameters."""
    def __init__(
        self,
        *,
        text: str,
        categories: Optional[List[Union[str, TextCategory]]] = None,
        blocklist_names: Optional[List[str]] = None,
        halt_on_blocklist_hit: Optional[bool] = None,
        output_type: Optional[Union[str, AnalyzeTextOutputType]] = None
    ): ...

class AnalyzeImageOptions:
    """Image analysis request parameters."""
    def __init__(
        self,
        *,
        image: ImageData,
        categories: Optional[List[Union[str, ImageCategory]]] = None,
        output_type: Optional[Union[str, AnalyzeImageOutputType]] = None
    ): ...

class ImageData:
    """Image data container for analysis."""
    def __init__(
        self,
        *,
        content: Optional[bytes] = None,  # Base64-encoded image bytes
        blob_url: Optional[str] = None
    ): ...
```
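
As the `Union` types in the method signatures indicate, the analyze methods also accept a plain `dict` in place of a model instance. The sketch below assumes the service's camelCase JSON field names; treat the keys as illustrative rather than authoritative:

```python
# Equivalent dict-form request; keys assume the service's camelCase JSON schema
result = client.analyze_text({
    "text": "Text to analyze",
    "categories": ["Hate", "Violence"],
    "blocklistNames": ["my-custom-blocklist"],
    "haltOnBlocklistHit": True,
    "outputType": "EightSeverityLevels",
})
```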

## Response Models

```python { .api }
class AnalyzeTextResult:
    """Text analysis results."""
    blocklists_match: Optional[List[TextBlocklistMatch]]
    categories_analysis: List[TextCategoriesAnalysis]

class AnalyzeImageResult:
    """Image analysis results."""
    categories_analysis: List[ImageCategoriesAnalysis]

class TextCategoriesAnalysis:
    """Text category analysis result."""
    category: TextCategory
    severity: Optional[int]

class ImageCategoriesAnalysis:
    """Image category analysis result."""
    category: ImageCategory
    severity: Optional[int]

class TextBlocklistMatch:
    """Blocklist match information."""
    blocklist_name: str
    blocklist_item_id: str
    blocklist_item_text: str
```
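
Because `categories_analysis` is a list rather than a mapping, looking up a single category means scanning it. A small helper sketch (the function name is illustrative, not part of the SDK):

```python
from typing import Optional
from azure.ai.contentsafety.models import TextCategory

def severity_for(result, category: TextCategory) -> Optional[int]:
    # Return the severity for one category, or None if it was not analyzed
    return next(
        (item.severity for item in result.categories_analysis if item.category == category),
        None,
    )

hate_severity = severity_for(result, TextCategory.HATE)
```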

## Error Handling

All content analysis methods can raise Azure Core exceptions:

```python
from azure.core.exceptions import HttpResponseError, ClientAuthenticationError

try:
    result = client.analyze_text(request)
except ClientAuthenticationError:
    print("Authentication failed - check your credentials")
except HttpResponseError as e:
    print(f"Analysis failed: {e.status_code} - {e.message}")
```

Common error scenarios:

- **401 Unauthorized**: Invalid API key or expired token
- **403 Forbidden**: Insufficient permissions or quota exceeded
- **429 Too Many Requests**: Rate limit exceeded (see the backoff sketch below)
- **400 Bad Request**: Invalid request parameters or content format
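
For transient 429 responses, one option is a simple client-side retry with exponential backoff once the pipeline's built-in retries are exhausted. A minimal sketch; the helper name, attempt count, and delays are illustrative:

```python
import time
from azure.core.exceptions import HttpResponseError

def analyze_with_backoff(client, request, max_attempts: int = 3):
    # Retry only on 429; other HTTP errors are re-raised immediately
    for attempt in range(max_attempts):
        try:
            return client.analyze_text(request)
        except HttpResponseError as e:
            if e.status_code != 429 or attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # 1s, 2s, ... illustrative backoff
```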