tessl/pypi-langchain-google-genai

An integration package connecting Google's genai package and LangChain.

Workspace: tessl
Visibility: Public
Describes: pypipkg:pypi/langchain-google-genai@2.1.x

To install, run:

```
npx @tessl/cli install tessl/pypi-langchain-google-genai@2.1.0
```

# LangChain Google GenAI

A Python integration package that connects Google's Generative AI models with the LangChain framework. It provides access to Gemini chat models, embeddings, vector storage, and advanced features such as attributed question answering and multimodal inputs.

## Package Information

- **Package Name**: langchain-google-genai

- **Package Type**: pypi

- **Language**: Python

- **Installation**: `pip install langchain-google-genai`

## Core Imports

```python
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAI
```

For embeddings:

```python
from langchain_google_genai import GoogleGenerativeAIEmbeddings
```

For vector store and AQA:

```python
from langchain_google_genai import GoogleVectorStore, GenAIAqa, AqaInput, AqaOutput
```

For safety and configuration:

```python
from langchain_google_genai import HarmBlockThreshold, HarmCategory, Modality
```

## Basic Usage

### Chat Model

```python
from langchain_google_genai import ChatGoogleGenerativeAI

# Initialize with API key from environment (GOOGLE_API_KEY)
llm = ChatGoogleGenerativeAI(model="gemini-2.5-pro")

# Simple text generation
response = llm.invoke("Explain quantum computing in simple terms")
print(response.content)

# With streaming
for chunk in llm.stream("Write a short story about AI"):
    print(chunk.content, end="", flush=True)
```

### LLM Model

```python
from langchain_google_genai import GoogleGenerativeAI

llm = GoogleGenerativeAI(model="gemini-2.5-pro")
result = llm.invoke("Once upon a time in the world of AI...")
print(result)
```

### Embeddings

```python
from langchain_google_genai import GoogleGenerativeAIEmbeddings

embeddings = GoogleGenerativeAIEmbeddings(model="models/gemini-embedding-001")

# Single query embedding
query_vector = embeddings.embed_query("What is machine learning?")

# Batch document embeddings
doc_vectors = embeddings.embed_documents([
    "Machine learning is a subset of AI",
    "Deep learning uses neural networks",
    "Natural language processing handles text",
])
```

## Architecture

The package provides several key components that integrate with Google's Generative AI services:

- **Chat Models & LLMs**: Direct interfaces to Google's Gemini models for conversational AI and text generation

- **Embeddings**: High-quality text vectorization using Google's embedding models

- **Vector Store**: Managed semantic search and retrieval using Google's infrastructure

- **AQA (Attributed Question Answering)**: Grounded question answering with source attribution

- **Safety Controls**: Comprehensive content filtering and safety settings

- **Multimodal Support**: Integration with Google's multimodal capabilities for text, images, and audio

The package maintains full compatibility with LangChain's ecosystem while providing access to Google's latest AI innovations including Gemini 2.0 Flash with advanced reasoning capabilities.

## Capabilities

### Chat Models

Advanced conversational AI with support for tool calling, structured outputs, streaming, safety controls, and multimodal inputs including text, images, and audio.

```python { .api }
class ChatGoogleGenerativeAI:
    def __init__(
        self,
        *,
        model: str,
        google_api_key: Optional[SecretStr] = None,
        temperature: float = 0.7,
        max_output_tokens: Optional[int] = None,
        top_p: Optional[float] = None,
        top_k: Optional[int] = None,
        safety_settings: Optional[Dict[HarmCategory, HarmBlockThreshold]] = None,
        **kwargs
    )

    def invoke(self, input: LanguageModelInput, config: Optional[RunnableConfig] = None, **kwargs) -> BaseMessage
    def stream(self, input: LanguageModelInput, config: Optional[RunnableConfig] = None, **kwargs) -> Iterator[ChatGenerationChunk]
    def bind_tools(self, tools: Sequence[Union[Dict[str, Any], Type[BaseModel], Callable, BaseTool]], **kwargs) -> Runnable
    def with_structured_output(self, schema: Union[Dict, Type[BaseModel]], **kwargs) -> Runnable
```

[Chat Models](./chat-models.md)
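
As a concrete sketch of the tool-calling flow (hedged: assumes the package is installed and `GOOGLE_API_KEY` is set; `get_word_count` and `run_tool_calls` are illustrative helpers, not part of the package), a tool can be a plain Python function passed to `bind_tools`, and the response's `tool_calls` entries are executed locally:

```python
import os

def get_word_count(text: str) -> int:
    """A plain function offered to the model as a tool."""
    return len(text.split())

def run_tool_calls(tool_calls: list) -> list:
    """Execute the {name, args} tool calls a chat response asks for."""
    return [get_word_count(**c["args"]) for c in tool_calls if c["name"] == "get_word_count"]

# Network call guarded so the sketch can be read/run without credentials
if os.environ.get("GOOGLE_API_KEY"):
    from langchain_google_genai import ChatGoogleGenerativeAI

    llm = ChatGoogleGenerativeAI(model="gemini-2.5-pro").bind_tools([get_word_count])
    msg = llm.invoke("Use the tool to count the words in 'the quick brown fox'.")
    print(run_tool_calls(msg.tool_calls))
```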

### LLM Models

Simple text generation interface providing direct access to Google's Gemini models for completion-style tasks.

```python { .api }
class GoogleGenerativeAI:
    def __init__(
        self,
        *,
        model: str,
        google_api_key: Optional[SecretStr] = None,
        temperature: float = 0.7,
        max_output_tokens: Optional[int] = None,
        **kwargs
    )

    def invoke(self, input: Union[str, List[BaseMessage]], config: Optional[RunnableConfig] = None, **kwargs) -> str
    def stream(self, input: Union[str, List[BaseMessage]], config: Optional[RunnableConfig] = None, **kwargs) -> Iterator[str]
```

[LLM Models](./llm-models.md)

### Embeddings

High-quality text embeddings for semantic search, similarity analysis, and machine learning applications with batching support and configurable task types.

```python { .api }
class GoogleGenerativeAIEmbeddings:
    def __init__(
        self,
        *,
        model: str,
        task_type: Optional[str] = None,
        google_api_key: Optional[SecretStr] = None,
        **kwargs
    )

    def embed_query(self, text: str, **kwargs) -> List[float]
    def embed_documents(self, texts: List[str], **kwargs) -> List[List[float]]
```

[Embeddings](./embeddings.md)
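
Vectors returned by `embed_query` / `embed_documents` are typically compared with cosine similarity. A minimal pure-Python helper (the short stand-in vectors are illustrative only; real Gemini embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-ins for embed_query / embed_documents output
query_vec = [0.1, 0.2, 0.3]
doc_vec = [0.1, 0.2, 0.3]
print(cosine_similarity(query_vec, doc_vec))  # ~1.0 for identical vectors
```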

### Vector Store

Managed semantic search and document retrieval using Google's vector store infrastructure with support for corpus and document management.

```python { .api }
class GoogleVectorStore:
    def __init__(self, *, corpus_id: str, document_id: Optional[str] = None)

    def add_texts(self, texts: Iterable[str], metadatas: Optional[List[Dict]] = None, **kwargs) -> List[str]
    def similarity_search(self, query: str, k: int = 4, **kwargs) -> List[Document]
    def similarity_search_with_score(self, query: str, k: int = 4, **kwargs) -> List[Tuple[Document, float]]

    @classmethod
    def create_corpus(cls, corpus_id: Optional[str] = None, display_name: Optional[str] = None) -> "GoogleVectorStore"

    @classmethod
    def from_texts(cls, texts: List[str], **kwargs) -> "GoogleVectorStore"
```

[Vector Store](./vector-store.md)
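
`similarity_search_with_score` returns (Document, score) pairs, and a common post-processing step keeps only confident hits. A pure-Python sketch (strings stand in for `Document` objects, and it assumes higher scores mean more similar, which should be verified for your store):

```python
def top_passages(scored_hits, min_score=0.7):
    """Keep hit texts scoring at least min_score, best first."""
    kept = [(text, score) for text, score in scored_hits if score >= min_score]
    return [text for text, score in sorted(kept, key=lambda hit: hit[1], reverse=True)]

# Stand-in for similarity_search_with_score output
hits = [
    ("Deep learning uses neural networks", 0.91),
    ("Unrelated passage", 0.42),
    ("Machine learning is a subset of AI", 0.88),
]
print(top_passages(hits))
```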

### Attributed Question Answering (AQA)

Grounded question answering that provides responses based exclusively on provided source passages with full attribution.

```python { .api }
class AqaInput:
    prompt: str
    source_passages: List[str]

class AqaOutput:
    answer: str
    attributed_passages: List[str]
    answerable_probability: float

class GenAIAqa:
    def __init__(self, *, answer_style: int = 1)
    def invoke(self, input: AqaInput, config: Optional[RunnableConfig] = None, **kwargs) -> AqaOutput
```

[Attributed Question Answering](./aqa.md)
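
`answerable_probability` reports how confident the model is that the question can be answered from the supplied passages, so a typical consumer rejects low-confidence answers. A pure-Python sketch (the dataclass below merely mirrors the `AqaOutput` fields shown above and is not the package's class):

```python
from dataclasses import dataclass

@dataclass
class AqaResult:
    """Illustrative stand-in mirroring AqaOutput's fields."""
    answer: str
    attributed_passages: list
    answerable_probability: float

def accept_answer(result, threshold=0.5):
    """Surface the answer only when the model deems the question answerable."""
    return result.answer if result.answerable_probability >= threshold else None

confident = AqaResult("Gemini is a family of models.", ["passage 1"], 0.92)
print(accept_answer(confident))                  # the answer text
print(accept_answer(AqaResult("?", [], 0.1)))    # None
```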

### Safety and Configuration

Content safety controls and configuration options for responsible AI deployment with comprehensive filtering capabilities.

```python { .api }
# Enums from Google AI
HarmCategory        # Categories of potentially harmful content
HarmBlockThreshold  # Threshold levels for content filtering
Modality            # Generation modality options

# Exception classes
class DoesNotExistsException(Exception): ...
```

[Safety and Configuration](./safety-config.md)
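
A typical `safety_settings` mapping looks like the following. This is a configuration sketch: the enum member names shown are the conventional Google Generative AI ones and should be checked against your installed version.

```python
from langchain_google_genai import (
    ChatGoogleGenerativeAI,
    HarmBlockThreshold,
    HarmCategory,
)

llm = ChatGoogleGenerativeAI(
    model="gemini-2.5-pro",
    safety_settings={
        # Relax filtering for dangerous content, tighten it for harassment
        HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_ONLY_HIGH,
        HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_LOW_AND_ABOVE,
    },
)
```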

## Types

```python { .api }
# Input/Output Types
LanguageModelInput = Union[str, List[BaseMessage], Dict]
SafetySettingDict = TypedDict('SafetySettingDict', {
    'category': HarmCategory,
    'threshold': HarmBlockThreshold
})

# Authentication
SecretStr = pydantic.SecretStr

# LangChain Integration Types
BaseMessage = langchain_core.messages.BaseMessage
Document = langchain_core.documents.Document
VectorStore = langchain_core.vectorstores.VectorStore
Embeddings = langchain_core.embeddings.Embeddings
BaseChatModel = langchain_core.language_models.chat_models.BaseChatModel
BaseLLM = langchain_core.language_models.llms.BaseLLM
Runnable = langchain_core.runnables.Runnable
RunnableConfig = langchain_core.runnables.config.RunnableConfig

# Additional Type Helpers
Optional = typing.Optional
Union = typing.Union
List = typing.List
Dict = typing.Dict
Any = typing.Any
Sequence = typing.Sequence
Tuple = typing.Tuple
```