
# Attributed Question Answering (AQA)

Grounded question answering service that provides responses based exclusively on provided source passages, with full attribution. Google's AQA service ensures answers are grounded in the provided context and identifies which passages were used to construct the response.

## Capabilities

### AqaInput

Input data structure for AQA operations.

```python { .api }
class AqaInput:
    prompt: str
    source_passages: List[str]
```

**Fields:**

- `prompt` (str): The user's question or inquiry
- `source_passages` (List[str]): List of text passages that should be used to answer the question

### AqaOutput

Output data structure containing the grounded answer and attribution information.

```python { .api }
class AqaOutput:
    answer: str
    attributed_passages: List[str]
    answerable_probability: float
```

**Fields:**

- `answer` (str): The generated answer based on the source passages
- `attributed_passages` (List[str]): Specific passages that were used to construct the answer
- `answerable_probability` (float): Probability [0.0, 1.0] that the question can be answered from the provided passages

### GenAIAqa

Primary AQA service interface that extends LangChain's `RunnableSerializable` for pipeline integration.

```python { .api }
class GenAIAqa:
    def __init__(
        self,
        *,
        answer_style: int = 1,
        safety_settings: List[SafetySetting] = [],
        temperature: Optional[float] = None
    )
```

**Parameters:**

- `answer_style` (int): Answer generation style (1 = ABSTRACTIVE, default)
- `safety_settings` (List[SafetySetting]): Content safety configuration
- `temperature` (Optional[float]): Generation temperature for answer variability

#### Core Method

```python { .api }
def invoke(
    self,
    input: AqaInput,
    config: Optional[RunnableConfig] = None,
    **kwargs: Any
) -> AqaOutput
```

Generate an attributed answer based on the input question and source passages.

**Parameters:**

- `input` (AqaInput): Question and source passages
- `config` (Optional[RunnableConfig]): Run configuration
- `**kwargs`: Additional parameters

**Returns:** `AqaOutput` with answer, attribution, and confidence

## Usage Examples

### Basic AQA Usage

```python
from langchain_google_genai import GenAIAqa, AqaInput

# Initialize AQA service
aqa = GenAIAqa()

# Prepare source passages
passages = [
    "Machine learning is a subset of artificial intelligence that enables computers to learn and improve from experience without being explicitly programmed.",
    "Deep learning is a specialized form of machine learning that uses neural networks with multiple layers to model and understand complex patterns.",
    "Natural language processing (NLP) is a branch of AI that helps computers understand, interpret, and manipulate human language."
]

# Create input with question and passages
input_data = AqaInput(
    prompt="What is machine learning and how does it relate to AI?",
    source_passages=passages
)

# Generate attributed answer
result = aqa.invoke(input_data)

print(f"Answer: {result.answer}")
print(f"Confidence: {result.answerable_probability:.2f}")
print("Sources used:")
for i, passage in enumerate(result.attributed_passages, 1):
    print(f"{i}. {passage}")
```

### Document-Based Q&A

```python
# Simulate document content
document_sections = [
    "Python was created by Guido van Rossum and first released in 1991. It emphasizes code readability with its notable use of significant whitespace.",
    "Python supports multiple programming paradigms, including procedural, object-oriented, and functional programming.",
    "Python's design philosophy emphasizes code readability and a syntax that allows programmers to express concepts in fewer lines of code.",
    "Python has a large standard library, which is often cited as one of its greatest strengths, providing tools for many tasks."
]

# Ask specific questions about the document
questions = [
    "Who created Python and when?",
    "What programming paradigms does Python support?",
    "What are Python's main strengths?"
]

aqa = GenAIAqa()

for question in questions:
    input_data = AqaInput(
        prompt=question,
        source_passages=document_sections
    )

    result = aqa.invoke(input_data)

    print(f"\nQ: {question}")
    print(f"A: {result.answer}")
    print(f"Confidence: {result.answerable_probability:.2f}")
```

### Research Assistant

```python
# Research passages about climate change
research_passages = [
    "Climate change refers to long-term shifts in global temperatures and weather patterns, primarily caused by human activities since the 1800s.",
    "The greenhouse effect occurs when certain gases in Earth's atmosphere trap heat from the sun, warming the planet.",
    "Carbon dioxide levels have increased by over 40% since pre-industrial times, primarily due to fossil fuel burning.",
    "Renewable energy sources like solar, wind, and hydroelectric power produce electricity without releasing greenhouse gases.",
    "Climate adaptation strategies include building sea walls, developing drought-resistant crops, and creating early warning systems."
]

# Research questions
research_questions = [
    "What causes climate change?",
    "How do renewable energy sources help with climate change?",
    "What are some climate adaptation strategies?"
]

aqa = GenAIAqa()

research_results = {}
for question in research_questions:
    input_data = AqaInput(
        prompt=question,
        source_passages=research_passages
    )

    result = aqa.invoke(input_data)
    research_results[question] = result

# Generate research report
print("=== Climate Change Research Report ===\n")
for question, result in research_results.items():
    print(f"Question: {question}")
    print(f"Answer: {result.answer}")
    print(f"Source Attribution: {len(result.attributed_passages)} passages used")
    print(f"Confidence: {result.answerable_probability:.1%}")
    print("-" * 50)
```

### Integration with Vector Store

```python
from langchain_google_genai import GoogleVectorStore

# Assume we have a populated vector store
vector_store = GoogleVectorStore(corpus_id="knowledge-base")

def aqa_with_retrieval(question: str, num_passages: int = 5):
    """Combine vector search with AQA for grounded answers."""

    # Retrieve relevant passages
    docs = vector_store.similarity_search(question, k=num_passages)
    passages = [doc.page_content for doc in docs]

    # Generate attributed answer
    aqa = GenAIAqa()
    input_data = AqaInput(
        prompt=question,
        source_passages=passages
    )

    result = aqa.invoke(input_data)

    return result, docs  # Return both answer and source documents

# Use the combined approach
question = "How do neural networks work?"
aqa_result, source_docs = aqa_with_retrieval(question)

print(f"Question: {question}")
print(f"Answer: {aqa_result.answer}")
print(f"Attribution confidence: {aqa_result.answerable_probability:.2f}")

print("\nSource documents:")
for i, doc in enumerate(source_docs, 1):
    print(f"{i}. {doc.page_content[:100]}...")
```

### Custom Answer Styles

```python
import google.ai.generativelanguage as genai

# Abstractive answering (default)
abstractive_aqa = GenAIAqa(
    answer_style=genai.GenerateAnswerRequest.AnswerStyle.ABSTRACTIVE
)

# Extractive answering (if available)
extractive_aqa = GenAIAqa(
    answer_style=genai.GenerateAnswerRequest.AnswerStyle.EXTRACTIVE
)

passages = [
    "Photosynthesis is the process by which plants convert sunlight into energy.",
    "During photosynthesis, plants absorb carbon dioxide from the air and water from the soil.",
    "The chlorophyll in plant leaves captures solar energy to power the photosynthesis reaction."
]

question = "How do plants convert sunlight into energy?"

# Compare answer styles
for name, aqa_instance in [("Abstractive", abstractive_aqa), ("Extractive", extractive_aqa)]:
    try:
        input_data = AqaInput(prompt=question, source_passages=passages)
        result = aqa_instance.invoke(input_data)

        print(f"\n{name} Answer:")
        print(f"Response: {result.answer}")
        print(f"Attribution: {len(result.attributed_passages)} passages")
    except Exception as e:
        print(f"{name} style not available: {e}")
```

### Temperature Control

```python
# Conservative answers (lower temperature)
conservative_aqa = GenAIAqa(temperature=0.1)

# Creative answers (higher temperature)
creative_aqa = GenAIAqa(temperature=0.9)

passages = [
    "Artificial intelligence can be applied to healthcare for diagnostic assistance.",
    "AI helps doctors analyze medical images like X-rays and MRIs more accurately.",
    "Machine learning algorithms can predict patient outcomes based on medical data."
]

question = "How is AI used in healthcare?"

print("Conservative Answer:")
result1 = conservative_aqa.invoke(AqaInput(prompt=question, source_passages=passages))
print(result1.answer)

print("\nCreative Answer:")
result2 = creative_aqa.invoke(AqaInput(prompt=question, source_passages=passages))
print(result2.answer)
```

### Chain Integration

```python
from langchain_core.runnables import RunnableLambda

def prepare_aqa_input(data):
    """Helper function to prepare AQA input from chain data."""
    return AqaInput(
        prompt=data["question"],
        source_passages=data["passages"]
    )

def extract_answer(aqa_output):
    """Extract just the answer from AQA output."""
    return aqa_output.answer

# Create a chain that processes questions with AQA
aqa = GenAIAqa()

aqa_chain = (
    RunnableLambda(prepare_aqa_input)
    | aqa
    | RunnableLambda(extract_answer)
)

# Use the chain
chain_input = {
    "question": "What is quantum computing?",
    "passages": [
        "Quantum computing uses quantum mechanical phenomena like superposition and entanglement.",
        "Unlike classical bits, quantum bits (qubits) can exist in multiple states simultaneously.",
        "Quantum computers could solve certain problems exponentially faster than classical computers."
    ]
}

answer = aqa_chain.invoke(chain_input)
print(f"Chain Answer: {answer}")
```

### Quality Assessment

```python
def assess_aqa_quality(question: str, passages: List[str], expected_keywords: List[str]):
    """Assess the quality of AQA responses."""

    aqa = GenAIAqa()
    input_data = AqaInput(prompt=question, source_passages=passages)
    result = aqa.invoke(input_data)

    # Check confidence
    confidence_score = result.answerable_probability

    # Check if answer contains expected keywords
    answer_lower = result.answer.lower()
    keyword_matches = sum(1 for keyword in expected_keywords
                          if keyword.lower() in answer_lower)
    keyword_coverage = keyword_matches / len(expected_keywords)

    # Check attribution quality
    attribution_ratio = len(result.attributed_passages) / len(passages)

    return {
        "answer": result.answer,
        "confidence": confidence_score,
        "keyword_coverage": keyword_coverage,
        "attribution_ratio": attribution_ratio,
        "quality_score": (confidence_score + keyword_coverage) / 2
    }

# Test quality assessment
passages = [
    "Machine learning algorithms learn patterns from data to make predictions.",
    "Supervised learning uses labeled examples to train models.",
    "Unsupervised learning finds hidden patterns in unlabeled data."
]

assessment = assess_aqa_quality(
    question="What is machine learning?",
    passages=passages,
    expected_keywords=["algorithms", "data", "patterns", "predictions"]
)

print(f"Answer: {assessment['answer']}")
print(f"Confidence: {assessment['confidence']:.2f}")
print(f"Keyword Coverage: {assessment['keyword_coverage']:.2f}")
print(f"Overall Quality: {assessment['quality_score']:.2f}")
```

## Best Practices

1. **Provide relevant passages**: Ensure source passages contain information relevant to the question
2. **Use sufficient context**: Include enough passages to provide comprehensive coverage of the topic
3. **Check confidence scores**: Use `answerable_probability` to determine whether the answer is reliable
4. **Review attributions**: Examine `attributed_passages` to understand which sources influenced the answer
5. **Handle low confidence**: Consider providing additional or different passages when confidence is low
6. **Combine with retrieval**: Use vector search to find relevant passages before applying AQA
7. **Monitor passage quality**: Ensure source passages are accurate and up to date
8. **Consider answer style**: Choose between abstractive and extractive based on your use case
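
Practices 3 and 5 can be combined into a small gate on `answerable_probability`. The sketch below is illustrative, not part of the library: `usable_answer` and the 0.7 cutoff are hypothetical, and the right threshold depends on your application.

```python
# Hypothetical confidence gate over an AQA result.
CONFIDENCE_THRESHOLD = 0.7  # assumed cutoff; tune for your application

def usable_answer(answer: str, answerable_probability: float,
                  threshold: float = CONFIDENCE_THRESHOLD):
    """Return the answer when confidence clears the threshold, else None
    so the caller can fall back to retrieving different passages."""
    if answerable_probability >= threshold:
        return answer
    return None

# A confident result passes through; a low-confidence one is suppressed.
print(usable_answer("ML learns patterns from data.", 0.92))
print(usable_answer("ML learns patterns from data.", 0.35))
```

Returning `None` rather than a low-confidence answer keeps the fallback decision (re-retrieve, widen the passage set, or surface "not answerable" to the user) in the caller's hands.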