# Reader Components

Reading comprehension components for extractive question answering using FARM, Transformers, and specialized table readers.

## Core Imports

```python
from haystack.nodes.reader import FARMReader, TransformersReader, TableReader
from haystack.nodes.reader.base import BaseReader
```

## Base Reader

```python { .api }
from haystack.nodes.reader.base import BaseReader
from haystack.schema import Document, Answer
from typing import List, Optional, Dict, Any

class BaseReader:
    def predict(self, query: str, documents: List[Document], top_k: Optional[int] = None) -> List[Answer]:
        """
        Extract answers from documents for the given query.

        Args:
            query: Question text
            documents: List of documents to search for answers
            top_k: Maximum number of answers to return

        Returns:
            List of Answer objects with extracted text and confidence scores
        """
```
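To make the `predict` contract concrete, here is a self-contained toy reader that satisfies the same interface using keyword overlap instead of a neural QA model. The `Document` and `Answer` classes below are simplified stand-ins for the real `haystack.schema` types, reduced to the fields used here; this is an illustrative sketch, not library code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Document:
    # simplified stand-in for haystack.schema.Document
    content: str

@dataclass
class Answer:
    # simplified stand-in for haystack.schema.Answer
    answer: str
    score: float
    context: str

class KeywordReader:
    """Toy reader honoring the BaseReader.predict contract, using
    keyword overlap instead of a trained QA model."""

    def predict(self, query: str, documents: List[Document],
                top_k: Optional[int] = None) -> List[Answer]:
        terms = query.lower().replace("?", "").split()
        answers = []
        for doc in documents:
            text = doc.content.lower()
            hits = sum(term in text for term in terms)
            if hits:
                answers.append(Answer(answer=doc.content,
                                      score=hits / len(terms),
                                      context=doc.content))
        # highest-confidence answers first, truncated to top_k
        answers.sort(key=lambda a: a.score, reverse=True)
        return answers[:top_k] if top_k else answers

reader = KeywordReader()
docs = [Document("Paris is the capital of France."),
        Document("Berlin is in Germany.")]
results = reader.predict("What is the capital of France?", docs, top_k=1)
```

A real reader returns span-level answers with model confidence scores, but the call shape — query in, ranked `Answer` list out — is the same.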

## FARM Reader

```python { .api }
from haystack.nodes.reader.farm import FARMReader

class FARMReader(BaseReader):
    def __init__(self, model_name_or_path: str = "deepset/roberta-base-squad2",
                 use_gpu: bool = True, no_ans_boost: float = 0.0,
                 return_no_answer: bool = False, top_k: int = 10,
                 max_seq_len: int = 256, doc_stride: int = 128):
        """
        Initialize FARM-based QA reader.

        Args:
            model_name_or_path: HuggingFace model name or local path
            use_gpu: Whether to use GPU acceleration
            no_ans_boost: Boost for "no answer" predictions
            return_no_answer: Whether to return "no answer" predictions
            top_k: Number of answers to return per document
            max_seq_len: Maximum sequence length for input
            doc_stride: Stride for sliding window over long documents
        """
```
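`max_seq_len` and `doc_stride` govern how a document longer than the model's input limit is cut into overlapping windows. The helper below is not haystack code, just a minimal sketch of that sliding-window chunking under the default values:

```python
def sliding_windows(tokens, max_seq_len=256, doc_stride=128):
    """Split a token list into overlapping windows of at most
    max_seq_len tokens, advancing doc_stride tokens each step."""
    if len(tokens) <= max_seq_len:
        return [tokens]
    windows, start = [], 0
    while start < len(tokens):
        windows.append(tokens[start:start + max_seq_len])
        if start + max_seq_len >= len(tokens):
            break  # last window already reaches the end
        start += doc_stride
    return windows

# a 600-token document yields four windows, each overlapping its
# neighbor by max_seq_len - doc_stride = 128 tokens
chunks = sliding_windows(list(range(600)))
```

The overlap matters: an answer span that straddles one window boundary still appears intact in the overlapping region of the next window.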

## Transformers Reader

```python { .api }
from haystack.nodes.reader.transformers import TransformersReader

class TransformersReader(BaseReader):
    def __init__(self, model_name_or_path: str = "deepset/roberta-base-squad2",
                 tokenizer: Optional[str] = None, use_gpu: bool = True,
                 top_k: int = 10, max_seq_len: int = 256, doc_stride: int = 128):
        """
        Initialize Transformers-based QA reader.

        Args:
            model_name_or_path: HuggingFace model name or local path
            tokenizer: Tokenizer name (defaults to model tokenizer)
            use_gpu: Whether to use GPU acceleration
            top_k: Number of answers to return per document
            max_seq_len: Maximum sequence length for input
            doc_stride: Stride for sliding window over long documents
        """
```
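The `tokenizer` argument only needs to be set when the tokenizer differs from the model. A one-line sketch of the documented fallback behavior (this is an illustration of the contract, not the library's actual code):

```python
from typing import Optional

def resolve_tokenizer(model_name_or_path: str, tokenizer: Optional[str] = None) -> str:
    # When no tokenizer name is given, reuse the model identifier,
    # mirroring "defaults to model tokenizer" above.
    return tokenizer if tokenizer is not None else model_name_or_path

name = resolve_tokenizer("deepset/roberta-base-squad2")
```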

## Table Reader

```python { .api }
from haystack.nodes.reader.table import TableReader

class TableReader(BaseReader):
    def __init__(self, model_name_or_path: str = "google/tapas-base-finetuned-wtq",
                 use_gpu: bool = True, top_k: int = 10):
        """
        Initialize table-based QA reader for structured data.

        Args:
            model_name_or_path: TAPAS model name or local path
            use_gpu: Whether to use GPU acceleration
            top_k: Number of answers to return
        """
```
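Unlike the text readers, `TableReader` answers questions over structured rows and columns. The toy lookup below illustrates the kind of cell selection a table QA model performs; it is a hand-rolled sketch rather than TAPAS, and the table contents and column names are invented for the example.

```python
# a table as a mapping from column name to a list of cell values
table = {
    "country": ["France", "Germany", "Italy"],
    "capital": ["Paris", "Berlin", "Rome"],
}

def lookup(table, key_col, key_val, answer_col):
    """Return the answer_col cell from the row where key_col == key_val."""
    row = table[key_col].index(key_val)
    return table[answer_col][row]

capital = lookup(table, "country", "France", "capital")
```

A trained table model does this selection from the natural-language question itself (e.g. "What is the capital of France?"), and can additionally aggregate over cells (counts, sums), which a plain lookup cannot.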