A Java integration library enabling LangChain4j applications to use Ollama's local language models, with support for chat, streaming, embeddings, and advanced reasoning features.
Language models provide simple text completion without conversation context, suitable for single-turn prompts and text generation tasks.
Synchronous language model for blocking text completion.
package dev.langchain4j.model.ollama;
public class OllamaLanguageModel implements LanguageModel

Thread Safety: Immutable after build(); safe for concurrent requests
Nullability: Instance is never null after a successful build()
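Before calling generate, an instance is configured through the builder. The sketch below is a minimal, hedged example: the base URL is Ollama's default local endpoint, and "llama3" stands in for any model you have pulled locally; the builder options shown (baseUrl, modelName, timeout, maxRetries) follow the common LangChain4j builder pattern.

```java
import dev.langchain4j.model.ollama.OllamaLanguageModel;
import java.time.Duration;

// Build an immutable model instance; safe to share across threads afterwards
OllamaLanguageModel model = OllamaLanguageModel.builder()
        .baseUrl("http://localhost:11434") // default local Ollama endpoint
        .modelName("llama3")               // any locally pulled model (assumed name)
        .timeout(Duration.ofSeconds(60))   // request timeout before HttpTimeoutException
        .maxRetries(2)                     // automatic retries on transient failures
        .build();
```

Because the instance is immutable after build(), one model can serve concurrent requests without further synchronization.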
public Response<String> generate(String prompt)

Generates text completion from a prompt.
Parameters:
prompt - The input text prompt (must not be null or empty)

Returns: Response<String> containing the generated text and token usage metadata
Throws:
IllegalArgumentException - If prompt is null or empty
HttpTimeoutException - If the request exceeds the configured timeout
IOException - If network connectivity fails
RuntimeException - If the Ollama server returns an error

Thread Safety: Safe for concurrent calls
Retry Behavior: Automatically retries on transient failures up to maxRetries times
Example:
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.output.TokenUsage;
import java.io.IOException;

try {
    Response<String> response = model.generate("Once upon a time");
    String text = response.content();
    TokenUsage usage = response.tokenUsage();
    System.out.println("Generated: " + text);
    System.out.println("Tokens used: " + usage);
} catch (IOException e) {
    System.err.println("Network error: " + e.getMessage());
}

Streaming language model for real-time token-by-token text completion.
package dev.langchain4j.model.ollama;
public class OllamaStreamingLanguageModel implements StreamingLanguageModel

Thread Safety: Immutable after build(); safe for concurrent requests
Streaming Threading: Callbacks invoked on HTTP client thread
public void generate(String prompt, StreamingResponseHandler<String> handler)

Generates text completion from a prompt with streaming output.
Parameters:
prompt - The input text prompt (must not be null or empty)
handler - Handler for streaming response callbacks (must not be null)

Returns: void - The method returns immediately; the response arrives via callbacks
Throws:
IllegalArgumentException - If prompt or handler is null

Error Handling: Errors during streaming trigger handler.onError(Throwable)
No Retry: Streaming operations do not automatically retry
Thread Safety: Handler must be thread-safe if shared across calls
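The callback flow above can be sketched as follows. This is a hedged example: the builder options (baseUrl, modelName) mirror the synchronous model's assumed configuration, and the handler methods shown (onNext, onComplete, onError) follow LangChain4j's StreamingResponseHandler contract; remember the callbacks run on the HTTP client thread.

```java
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.ollama.OllamaStreamingLanguageModel;
import dev.langchain4j.model.output.Response;

OllamaStreamingLanguageModel model = OllamaStreamingLanguageModel.builder()
        .baseUrl("http://localhost:11434") // default local Ollama endpoint
        .modelName("llama3")               // assumed locally pulled model
        .build();

// generate() returns immediately; tokens arrive on the HTTP client thread
model.generate("Once upon a time", new StreamingResponseHandler<String>() {
    @Override
    public void onNext(String token) {
        // Called once per generated token
        System.out.print(token);
    }

    @Override
    public void onComplete(Response<String> response) {
        System.out.println("\nDone. Tokens used: " + response.tokenUsage());
    }

    @Override
    public void onError(Throwable error) {
        // Streaming failures land here; there is no automatic retry
        System.err.println("Streaming failed: " + error.getMessage());
    }
});
```

If the same handler instance is reused across concurrent generate calls, it must be thread-safe, since each call's callbacks may arrive on a different HTTP client thread.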
Install with Tessl CLI
npx tessl i tessl/maven-dev-langchain4j--langchain4j-ollama