Google Cloud Natural Language API client library providing sentiment analysis, entity recognition, text classification, and content moderation capabilities
npx @tessl/cli install tessl/pypi-google-cloud-language@2.17.0
# Google Cloud Language

A comprehensive Python client library for the Google Cloud Natural Language API, bringing natural language understanding technologies to developers. The library supports advanced text analysis operations:

- **Sentiment analysis** for determining emotional tone
- **Entity recognition and analysis** for identifying people, places, organizations, and other entities
- **Content classification** for categorizing text into predefined categories
- **Syntax analysis** for understanding grammatical structure
- **Content moderation** for detecting harmful content

## Package Information

- **Package Name**: google-cloud-language
- **Package Type**: Python client library
- **Language**: Python
- **Installation**: `pip install google-cloud-language`
- **Documentation**: https://cloud.google.com/python/docs/reference/language/latest
- **License**: Apache 2.0

## Core Imports

```python
from google.cloud import language
```

For specific API versions:

```python
from google.cloud import language_v1
from google.cloud import language_v1beta2
from google.cloud import language_v2
```

Import specific client classes:

```python
from google.cloud.language import LanguageServiceClient
from google.cloud.language import LanguageServiceAsyncClient
```

Imports for type annotations used in the signatures below (note: `OptionalRetry` is a type alias defined inside the generated client modules, roughly `Union[retries.Retry, gapic_v1.method._MethodDefault, None]`; it is not directly importable from `google.api_core.retry`):

```python
from typing import Optional, Union, Sequence, Tuple, MutableMapping, MutableSequence
from google.api_core import gapic_v1
from google.api_core import retry as retries
```

## Basic Usage

```python
from google.cloud import language

# Initialize the client
client = language.LanguageServiceClient()

# Create a document object
document = language.Document(
    content="Google Cloud Natural Language API is amazing!",
    type_=language.Document.Type.PLAIN_TEXT
)

# Analyze sentiment
response = client.analyze_sentiment(
    request={"document": document}
)

# Access results
sentiment = response.document_sentiment
print(f"Sentiment score: {sentiment.score}")
print(f"Sentiment magnitude: {sentiment.magnitude}")

# Analyze entities
entities_response = client.analyze_entities(
    request={"document": document}
)

for entity in entities_response.entities:
    print(f"Entity: {entity.name}, Type: {entity.type_.name}")
```

## Architecture

The Google Cloud Language library is organized around three main API versions:

- **v1 (Stable)**: Complete feature set including sentiment analysis, entity analysis, entity sentiment analysis, syntax analysis, text classification, and content moderation
- **v1beta2 (Beta)**: Same feature set as v1, and may include experimental capabilities
- **v2 (Simplified)**: Streamlined API focusing on core NLP tasks (sentiment, entities, classification, moderation) without syntax analysis

The library provides both synchronous and asynchronous clients, multiple transport options (gRPC, REST), and comprehensive error handling with Google Cloud authentication integration.

## Capabilities

### Client Management

Core client classes for interacting with the Google Cloud Natural Language API, supporting both synchronous and asynchronous operations with configurable transport layers.

```python { .api }
class LanguageServiceClient:
    def __init__(self, *, credentials=None, transport=None, client_options=None, client_info=None): ...

class LanguageServiceAsyncClient:
    def __init__(self, *, credentials=None, transport=None, client_options=None, client_info=None): ...
```

[Client Management](./client-management.md)
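
For example, a minimal sketch of the asynchronous client, assuming Application Default Credentials are already configured:

```python
import asyncio
from google.cloud import language


async def main():
    # The async client exposes the same method surface as the sync client,
    # but every RPC returns an awaitable
    client = language.LanguageServiceAsyncClient()
    document = language.Document(
        content="Google Cloud Natural Language API is amazing!",
        type_=language.Document.Type.PLAIN_TEXT,
    )
    response = await client.analyze_sentiment(request={"document": document})
    print(response.document_sentiment.score)


asyncio.run(main())
```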

### Sentiment Analysis

Analyzes the emotional tone and attitude in text content, providing sentiment scores and magnitude measurements to understand how positive, negative, or neutral the text is.

```python { .api }
def analyze_sentiment(
    self,
    request: Optional[Union[AnalyzeSentimentRequest, dict]] = None,
    *,
    document: Optional[Document] = None,
    encoding_type: Optional[EncodingType] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, Union[str, bytes]]] = ()
) -> AnalyzeSentimentResponse: ...
```

[Sentiment Analysis](./sentiment-analysis.md)
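
Beyond the document-level score shown in Basic Usage, the response also carries per-sentence sentiment; a short sketch, reusing `client` and `document` from above:

```python
response = client.analyze_sentiment(request={"document": document})
for sentence in response.sentences:
    # Each Sentence carries its own TextSpan and Sentiment
    print(f"{sentence.text.content!r}: score={sentence.sentiment.score:.2f}")
```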

### Entity Analysis

Identifies and extracts named entities (people, places, organizations, etc.) from text, providing detailed information about each entity including type, salience, and mentions within the text.

```python { .api }
def analyze_entities(
    self,
    request: Optional[Union[AnalyzeEntitiesRequest, dict]] = None,
    *,
    document: Optional[Document] = None,
    encoding_type: Optional[EncodingType] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, Union[str, bytes]]] = ()
) -> AnalyzeEntitiesResponse: ...
```

[Entity Analysis](./entity-analysis.md)
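
Each returned entity carries more than a name and type; a sketch that inspects salience, metadata, and mentions, reusing `client` and `document` from Basic Usage:

```python
response = client.analyze_entities(request={"document": document})
for entity in response.entities:
    print(f"{entity.name} ({entity.type_.name}), salience={entity.salience:.3f}")
    for key, value in entity.metadata.items():
        # Metadata may include Knowledge Graph identifiers or Wikipedia URLs, when available
        print(f"  {key}: {value}")
    for mention in entity.mentions:
        print(f"  mention ({mention.type_.name}) at offset {mention.text.begin_offset}")
```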

### Entity Sentiment Analysis (v1/v1beta2 only)

Combines entity recognition with sentiment analysis to determine the sentiment associated with each identified entity, useful for understanding opinions about specific people, places, or topics.

```python { .api }
def analyze_entity_sentiment(
    self,
    request: Optional[Union[AnalyzeEntitySentimentRequest, dict]] = None,
    *,
    document: Optional[Document] = None,
    encoding_type: Optional[EncodingType] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, Union[str, bytes]]] = ()
) -> AnalyzeEntitySentimentResponse: ...
```

[Entity Sentiment Analysis](./entity-sentiment-analysis.md)
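
A minimal sketch using the v1 client, since v2 does not expose this method:

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="I love the coffee here, but the service is slow.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
response = client.analyze_entity_sentiment(request={"document": document})
for entity in response.entities:
    # The sentiment here is specific to each entity, not the document as a whole
    print(f"{entity.name}: score={entity.sentiment.score:.2f}, "
          f"magnitude={entity.sentiment.magnitude:.2f}")
```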

### Syntax Analysis (v1/v1beta2 only)

Provides linguistic analysis including part-of-speech tagging, dependency parsing, and token-level information to understand the grammatical structure and linguistic properties of text.

```python { .api }
def analyze_syntax(
    self,
    request: Optional[Union[AnalyzeSyntaxRequest, dict]] = None,
    *,
    document: Optional[Document] = None,
    encoding_type: Optional[EncodingType] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, Union[str, bytes]]] = ()
) -> AnalyzeSyntaxResponse: ...
```

[Syntax Analysis](./syntax-analysis.md)
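
A short sketch that prints each token's part of speech, lemma, and dependency head, using the v1 client:

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="The quick brown fox jumps over the lazy dog.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
response = client.analyze_syntax(request={"document": document})
for token in response.tokens:
    print(f"{token.text.content}: {token.part_of_speech.tag.name} "
          f"(lemma={token.lemma}, head index={token.dependency_edge.head_token_index})")
```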

### Text Classification

Categorizes text documents into predefined classification categories, enabling automated content organization and filtering based on subject matter and themes.

```python { .api }
def classify_text(
    self,
    request: Optional[Union[ClassifyTextRequest, dict]] = None,
    *,
    document: Optional[Document] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, Union[str, bytes]]] = ()
) -> ClassifyTextResponse: ...
```

[Text Classification](./text-classification.md)
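
A short sketch, reusing `client` from Basic Usage; note that classification needs a reasonable amount of text to work with, so very short snippets may be rejected by the API:

```python
document = language.Document(
    content=(
        "The match went to extra time after a late equalizer, and the "
        "goalkeeper's penalty save ultimately decided the championship."
    ),
    type_=language.Document.Type.PLAIN_TEXT,
)
response = client.classify_text(request={"document": document})
for category in response.categories:
    # Category names are hierarchical paths, e.g. "/Sports/Team Sports"
    print(f"{category.name}: confidence={category.confidence:.2f}")
```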

### Content Moderation

Detects and flags potentially harmful, inappropriate, or unsafe content in text, providing moderation categories and confidence scores for content filtering applications.

```python { .api }
def moderate_text(
    self,
    request: Optional[Union[ModerateTextRequest, dict]] = None,
    *,
    document: Optional[Document] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, Union[str, bytes]]] = ()
) -> ModerateTextResponse: ...
```

[Content Moderation](./content-moderation.md)
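
A minimal sketch, reusing `client` and `document` from Basic Usage; the returned category names (e.g. "Toxic", "Insult") and confidences can be thresholded to suit the application:

```python
response = client.moderate_text(request={"document": document})
for category in response.moderation_categories:
    if category.confidence >= 0.5:  # example threshold, tune per use case
        print(f"flagged: {category.name} ({category.confidence:.2f})")
```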

### Combined Analysis

Performs multiple types of analysis in a single API call for efficiency, allowing you to get sentiment, entities, syntax, classification, and moderation results simultaneously.

```python { .api }
def annotate_text(
    self,
    request: Optional[Union[AnnotateTextRequest, dict]] = None,
    *,
    document: Optional[Document] = None,
    features: Optional[AnnotateTextRequest.Features] = None,
    encoding_type: Optional[EncodingType] = None,
    retry: OptionalRetry = gapic_v1.method.DEFAULT,
    timeout: Union[float, object] = gapic_v1.method.DEFAULT,
    metadata: Sequence[Tuple[str, Union[str, bytes]]] = ()
) -> AnnotateTextResponse: ...
```

[Combined Analysis](./combined-analysis.md)
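
A sketch that requests several features in one call, using the `Features` flags documented under Core Types below and reusing `client` and `document` from Basic Usage:

```python
features = language.AnnotateTextRequest.Features(
    extract_entities=True,
    extract_document_sentiment=True,
    classify_text=True,
)
response = client.annotate_text(
    request={"document": document, "features": features}
)
print(f"sentiment: {response.document_sentiment.score:.2f}")
print("entities:", [entity.name for entity in response.entities])
print("categories:", [category.name for category in response.categories])
```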

## Core Types

### Request and Response Types

```python { .api }
class AnalyzeSentimentRequest:
    document: Document
    encoding_type: EncodingType

class AnalyzeSentimentResponse:
    document_sentiment: Sentiment
    language: str
    sentences: MutableSequence[Sentence]

class AnalyzeEntitiesRequest:
    document: Document
    encoding_type: EncodingType

class AnalyzeEntitiesResponse:
    entities: MutableSequence[Entity]
    language: str

class AnalyzeEntitySentimentRequest:
    document: Document
    encoding_type: EncodingType

class AnalyzeEntitySentimentResponse:
    entities: MutableSequence[Entity]
    language: str

class AnalyzeSyntaxRequest:
    document: Document
    encoding_type: EncodingType

class AnalyzeSyntaxResponse:
    sentences: MutableSequence[Sentence]
    tokens: MutableSequence[Token]
    language: str

class ClassifyTextRequest:
    document: Document
    classification_model_options: ClassificationModelOptions

class ClassifyTextResponse:
    categories: MutableSequence[ClassificationCategory]

class ModerateTextRequest:
    document: Document

class ModerateTextResponse:
    moderation_categories: MutableSequence[ClassificationCategory]

class AnnotateTextRequest:
    class Features:
        extract_syntax: bool
        extract_entities: bool
        extract_document_sentiment: bool
        extract_entity_sentiment: bool
        classify_text: bool
        moderate_text: bool

    document: Document
    features: Features
    encoding_type: EncodingType

class AnnotateTextResponse:
    sentences: MutableSequence[Sentence]
    tokens: MutableSequence[Token]
    entities: MutableSequence[Entity]
    document_sentiment: Sentiment
    language: str
    categories: MutableSequence[ClassificationCategory]
    moderation_categories: MutableSequence[ClassificationCategory]
```

### Document

```python { .api }
class Document:
    class Type(proto.Enum):
        TYPE_UNSPECIFIED = 0
        PLAIN_TEXT = 1
        HTML = 2

    content: str
    gcs_content_uri: str
    type_: Type
    language: str
```

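A document can reference text stored in Cloud Storage instead of inline content; a sketch with a hypothetical bucket path:

```python
document = language.Document(
    gcs_content_uri="gs://example-bucket/reviews/review-001.txt",  # hypothetical URI
    type_=language.Document.Type.PLAIN_TEXT,
    language="en",  # optional; the API auto-detects the language when omitted
)
```
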
### Sentiment

```python { .api }
class Sentiment:
    magnitude: float
    score: float
```

`score` ranges from -1.0 (negative) to 1.0 (positive); `magnitude` is a non-negative measure of the overall strength of emotion, regardless of polarity.

### Entity

```python { .api }
class Entity:
    class Type(proto.Enum):
        UNKNOWN = 0
        PERSON = 1
        LOCATION = 2
        ORGANIZATION = 3
        EVENT = 4
        WORK_OF_ART = 5
        CONSUMER_GOOD = 6
        OTHER = 7
        PHONE_NUMBER = 9
        ADDRESS = 10
        DATE = 11
        NUMBER = 12
        PRICE = 13

    name: str
    type_: Type
    metadata: MutableMapping[str, str]
    salience: float
    mentions: MutableSequence[EntityMention]
    sentiment: Sentiment
```

### TextSpan

```python { .api }
class TextSpan:
    content: str
    begin_offset: int
```

### EntityMention

```python { .api }
class EntityMention:
    class Type(proto.Enum):
        TYPE_UNKNOWN = 0
        PROPER = 1
        COMMON = 2

    text: TextSpan
    type_: Type
    sentiment: Sentiment
    probability: float
```

### ClassificationCategory

```python { .api }
class ClassificationCategory:
    name: str
    confidence: float
```

### Token (v1/v1beta2 only)

```python { .api }
class Token:
    text: TextSpan
    part_of_speech: PartOfSpeech
    dependency_edge: DependencyEdge
    lemma: str
```

### PartOfSpeech (v1/v1beta2 only)

```python { .api }
class PartOfSpeech:
    class Tag(proto.Enum):
        UNKNOWN = 0
        ADJ = 1
        ADP = 2
        ADV = 3
        CONJ = 4
        DET = 5
        NOUN = 6
        NUM = 7
        PRON = 8
        PRT = 9
        PUNCT = 10
        VERB = 11
        X = 12
        AFFIX = 13

    tag: Tag
    aspect: Aspect
    case: Case
    form: Form
    gender: Gender
    mood: Mood
    number: Number
    person: Person
    proper: Proper
    reciprocity: Reciprocity
    tense: Tense
    voice: Voice
```

### DependencyEdge (v1/v1beta2 only)

```python { .api }
class DependencyEdge:
    class Label(proto.Enum):
        UNKNOWN = 0
        ROOT = 54
        NSUBJ = 28
        DOBJ = 18
        # ... (additional labels available)

    head_token_index: int
    label: Label
```

### ClassificationModelOptions (v1/v1beta2 only)

```python { .api }
class ClassificationModelOptions:
    class V1Model(proto.Message):
        pass

    class V2Model(proto.Message):
        pass

    v1_model: V1Model
    v2_model: V2Model
```

### Sentence

```python { .api }
class Sentence:
    text: TextSpan
    sentiment: Sentiment
```

### EncodingType

```python { .api }
class EncodingType(proto.Enum):
    NONE = 0
    UTF8 = 1
    UTF16 = 2
    UTF32 = 3
```
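
The encoding type determines the units in which `begin_offset` values are reported. Python strings index by Unicode code point, so UTF32 offsets can be used directly as `str` indices; a short sketch, reusing `client` from Basic Usage:

```python
document = language.Document(
    content="Héllo from São Paulo!",
    type_=language.Document.Type.PLAIN_TEXT,
)
response = client.analyze_entities(
    request={"document": document, "encoding_type": language.EncodingType.UTF32}
)
for entity in response.entities:
    for mention in entity.mentions:
        start = mention.text.begin_offset
        # With UTF32, the offset maps directly onto Python string indexing
        print(document.content[start:start + len(mention.text.content)])
```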
```