Response generation and synthesis strategies for combining retrieved information into coherent answers in LlamaIndex.TS.
```typescript
import { ResponseSynthesizer } from "llamaindex";
```

Response synthesis combines retrieved information from multiple sources into coherent, comprehensive answers, using various strategies optimized for different use cases.
```typescript
interface BaseSynthesizer {
  synthesize(query: string, nodes: BaseNode[]): Promise<EngineResponse>;
}

class ResponseSynthesizer implements BaseSynthesizer {
  constructor(options?: {
    responseMode?: ResponseMode;
    serviceContext?: ServiceContext;
  });
  synthesize(query: string, nodes: BaseNode[]): Promise<EngineResponse>;
  responseMode: ResponseMode;
}
```
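To make the contract concrete, here is a toy, self-contained implementation of the `BaseSynthesizer` shape. The `BaseNode` and `EngineResponse` types below are simplified stand-ins for illustration, not the library's real classes, and `ConcatSynthesizer` merely concatenates text where a real synthesizer would prompt an LLM:

```typescript
// Simplified stand-in types, NOT the real llamaindex classes.
interface BaseNode {
  getContent(): string;
}

interface EngineResponse {
  response: string;
  sourceNodes: BaseNode[];
}

interface BaseSynthesizer {
  synthesize(query: string, nodes: BaseNode[]): Promise<EngineResponse>;
}

// Trivial synthesizer: joins node text into the answer. A real
// implementation would build a prompt from the query and node
// contents and call a model instead.
class ConcatSynthesizer implements BaseSynthesizer {
  async synthesize(query: string, nodes: BaseNode[]): Promise<EngineResponse> {
    const context = nodes.map((n) => n.getContent()).join("\n");
    return {
      response: `Answer to "${query}" based on:\n${context}`,
      sourceNodes: nodes,
    };
  }
}
```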
```typescript
type ResponseMode = "refine" | "compact" | "tree_summarize" | "simple_summarize";
```

`tree_summarize` is best for comprehensive answers drawn from multiple sources.

```typescript
const treeSynthesizer = new ResponseSynthesizer({
  responseMode: "tree_summarize",
});
```

`refine` iteratively refines the answer with each retrieved chunk.
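The refine strategy can be sketched, independently of the library, as a fold over chunks. Here `llmStep` is a stub standing in for the per-chunk LLM call the real implementation makes:

```typescript
// One refinement step: given the query, the current draft answer
// (null for the first chunk), and a new chunk, produce an updated
// answer. In practice this is an LLM call.
type LlmStep = (query: string, draft: string | null, chunk: string) => string;

// Refine loop: the first chunk produces an initial answer, and every
// subsequent chunk is used to revise it.
function refineAnswer(query: string, chunks: string[], llmStep: LlmStep): string {
  let draft: string | null = null;
  for (const chunk of chunks) {
    draft = llmStep(query, draft, chunk);
  }
  return draft ?? "";
}
```

Because each chunk triggers its own sequential model call, refine is thorough but issues O(n) requests for n chunks.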
```typescript
const refineSynthesizer = new ResponseSynthesizer({
  responseMode: "refine",
});
```

`compact` combines chunks before synthesis to maximize context usage.
```typescript
const compactSynthesizer = new ResponseSynthesizer({
  responseMode: "compact",
});
```

A synthesizer can be passed directly to a query engine:

```typescript
import { RetrieverQueryEngine, ResponseSynthesizer } from "llamaindex";

const queryEngine = new RetrieverQueryEngine({
  retriever: index.asRetriever(),
  responseSynthesizer: new ResponseSynthesizer({
    responseMode: "tree_summarize",
  }),
});
```

A factory function is also available:

```typescript
function createResponseSynthesizer(mode: ResponseMode): BaseSynthesizer;
```

```typescript
// Choose mode based on use case
const synthesizer = createResponseSynthesizer(
  documents.length > 10 ? "tree_summarize" : "refine"
);
```
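One reason to prefer `tree_summarize` for many documents is its shape: chunks are summarized in layers, so each prompt stays small and the layers can be parallelized. A toy illustration of that reduction, with `combine` as a stub for an LLM summarization call and an arbitrary fan-out of 2:

```typescript
// Repeatedly summarize groups of `fanout` chunks until a single
// summary remains. `combine` stands in for an LLM call that merges
// a group of texts into one summary.
function treeSummarize(
  chunks: string[],
  combine: (group: string[]) => string,
  fanout = 2
): string {
  if (chunks.length === 0) return "";
  let layer = chunks;
  while (layer.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < layer.length; i += fanout) {
      next.push(combine(layer.slice(i, i + fanout)));
    }
    layer = next;
  }
  return layer[0];
}
```

With n chunks the tree has O(log n) layers, which is why the heuristic above switches to `tree_summarize` once the document count grows.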