Multi-module test support framework for Embabel Agent applications, providing integration testing, mock AI services, and test configuration utilities.
Complete API reference for stubbing LLM operations in tests.
Stubbing methods control what the mocked LLM returns in your tests. All methods return OngoingStubbing&lt;T&gt;, which allows chaining .thenReturn(), .thenThrow(), and so on.
Module: embabel-agent-test
Class: EmbabelMockitoIntegrationTest
Import: import com.embabel.agent.test.integration.EmbabelMockitoIntegrationTest;
Stub LLM text generation with prompt matching only.

```java
protected OngoingStubbing<String> whenGenerateText(
    Predicate<String> promptMatcher
)
```

Parameters:
- promptMatcher - Predicate to match against the prompt text

Returns: OngoingStubbing&lt;String&gt; for configuring the mock response

Usage:

```java
whenGenerateText(prompt -> prompt.contains("summarize"))
    .thenReturn("This is a summary");
```

When to use: The most common case - stub based on prompt content only.
Stub LLM text generation with both prompt and interaction matching.

```java
protected OngoingStubbing<String> whenGenerateText(
    Predicate<String> promptMatcher,
    Predicate<LlmInteraction> llmInteractionMatcher
)
```

Parameters:
- promptMatcher - Predicate to match against the prompt text
- llmInteractionMatcher - Predicate to match LlmInteraction details (model, temperature, etc.)

Returns: OngoingStubbing&lt;String&gt; for configuring the mock response

Usage:

```java
whenGenerateText(
    prompt -> prompt.contains("analyze"),
    interaction -> interaction.getModel().equals("gpt-4")
).thenReturn("Analysis complete");
```

When to use: When you need to match on both the prompt and the model configuration.
Stub LLM object creation with prompt and output class matching.

```java
protected <T> OngoingStubbing<T> whenCreateObject(
    Predicate<String> promptMatcher,
    Class<T> outputClass
)
```

Type Parameters:
- T - The type of object being created

Parameters:
- promptMatcher - Predicate to match against the prompt text
- outputClass - Expected output class type

Returns: OngoingStubbing&lt;T&gt; for configuring the mock response

Usage:

```java
whenCreateObject(
    prompt -> prompt.contains("extract"),
    Person.class
).thenReturn(new Person("Alice", 30));
```

When to use: The most common case - stub based on prompt and output type.
Stub LLM object creation with prompt, output class, and interaction matching.

```java
protected <T> OngoingStubbing<T> whenCreateObject(
    Predicate<String> promptMatcher,
    Class<T> outputClass,
    Predicate<LlmInteraction> llmInteractionPredicate
)
```

Type Parameters:
- T - The type of object being created

Parameters:
- promptMatcher - Predicate to match against the prompt text
- outputClass - Expected output class type
- llmInteractionPredicate - Predicate to match LlmInteraction details

Returns: OngoingStubbing&lt;T&gt; for configuring the mock response

Usage:

```java
whenCreateObject(
    prompt -> prompt.contains("extract"),
    Person.class,
    interaction -> interaction.getTemperature() == 0.0
).thenReturn(new Person("Bob", 25));
```

When to use: When you need to match on prompt, type, and model configuration.
All stubbing methods return OngoingStubbing&lt;T&gt;, which supports chaining:

OngoingStubbing&lt;T&gt; thenReturn(T value) - Return a specific value.

Usage:

```java
whenGenerateText(p -> true).thenReturn("Fixed response");
```

OngoingStubbing&lt;T&gt; thenThrow(Throwable... throwables) - Throw an exception.

Usage:

```java
whenGenerateText(p -> true).thenThrow(new RuntimeException("LLM error"));
```

OngoingStubbing&lt;T&gt; thenAnswer(Answer&lt;?&gt; answer) - Provide custom answer logic.

Usage:

```java
whenGenerateText(p -> true).thenAnswer(invocation -> {
    String prompt = invocation.getArgument(0);
    return "Response based on: " + prompt;
});
```

You can create multiple stubs that match different predicates:
```java
// First stub - matches a specific case
whenGenerateText(p -> p.contains("greeting"))
    .thenReturn("Hello!");

// Second stub - matches a different case
whenGenerateText(p -> p.contains("farewell"))
    .thenReturn("Goodbye!");
```

Mockito will match in order, returning the first matching stub.
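The first-match resolution described above can be sketched in plain Java, with no Mockito or Embabel classes involved; `FirstMatchDemo`, `StubEntry`, `stub`, and `resolve` are illustrative names, not framework API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Predicate;

// Sketch of first-match stub resolution: stubs are checked in registration
// order and the first predicate that accepts the prompt supplies the response.
public class FirstMatchDemo {
    record StubEntry(Predicate<String> matcher, String response) {}

    static final List<StubEntry> STUBS = new ArrayList<>();

    static void stub(Predicate<String> matcher, String response) {
        STUBS.add(new StubEntry(matcher, response));
    }

    static Optional<String> resolve(String prompt) {
        return STUBS.stream()
                .filter(s -> s.matcher().test(prompt))
                .map(StubEntry::response)
                .findFirst();
    }

    public static void main(String[] args) {
        stub(p -> p.contains("greeting"), "Hello!");
        stub(p -> p.contains("farewell"), "Goodbye!");

        System.out.println(resolve("a greeting prompt").orElse("<no stub>"));
        System.out.println(resolve("a farewell prompt").orElse("<no stub>"));
        System.out.println(resolve("unrelated").orElse("<no stub>"));
    }
}
```

A prompt that matches no predicate yields no response here; in a real test an unmatched call typically surfaces as a default mock value, so it pays to keep predicates broad enough to cover every call the agent makes.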
Use complex logic in predicates:

```java
whenGenerateText(prompt ->
    prompt.contains("analyze") &&
    prompt.length() > 100 &&
    !prompt.contains("skip")
).thenReturn("Complex analysis result");
```

Match on model configuration:
```java
whenGenerateText(
    prompt -> true,
    interaction ->
        interaction.getModel().equals("gpt-4") &&
        interaction.getTemperature() > 0.5 &&
        interaction.getMaxTokens() != null
).thenReturn("Result from specific configuration");
```

```java
@Test
void testSimpleStub() {
    whenGenerateText(p -> p.contains("hello"))
        .thenReturn("Hello, world!");

    String result = myAgent.greet();

    assertEquals("Hello, world!", result);
}
```

```java
@Test
void testObjectStub() {
    Person expected = new Person("Alice", 30);
    whenCreateObject(
        p -> p.contains("extract person"),
        Person.class
    ).thenReturn(expected);

    Person result = myAgent.extractPerson("Alice is 30");

    assertEquals(expected, result);
}
```

```java
@Test
void testMultipleStubs() {
    whenGenerateText(p -> p.contains("step1")).thenReturn("Result 1");
    whenGenerateText(p -> p.contains("step2")).thenReturn("Result 2");

    String r1 = myAgent.step1();
    String r2 = myAgent.step2();

    assertEquals("Result 1", r1);
    assertEquals("Result 2", r2);
}
```

```java
@Test
void testInteractionStub() {
    whenGenerateText(
        p -> p.contains("analyze"),
        i -> i.getModel().equals("gpt-4") && i.getTemperature() == 0.0
    ).thenReturn("Precise analysis");

    String result = myAgent.preciseAnalysis("data");

    assertEquals("Precise analysis", result);
}
```

```java
@Test
void testErrorHandling() {
    whenGenerateText(p -> p.contains("error"))
        .thenThrow(new RuntimeException("LLM service unavailable"));

    assertThrows(RuntimeException.class, () -> myAgent.processWithError());
}
```

```java
@FunctionalInterface
interface Predicate<T> {
    boolean test(T t);
}
```

Functional interface for matching conditions.
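Since the matchers are ordinary predicates, shared matching logic can be pulled out into named constants and composed instead of repeating inline lambdas. A minimal sketch, assuming the matchers are standard java.util.function.Predicate instances (the reference above shows only the test method; if the framework declares its own Predicate type without the default combinators, compose with a lambda such as `p -> a.test(p) && b.test(p)` instead):

```java
import java.util.function.Predicate;

// Reusable prompt matchers composed with the standard Predicate combinators
// and(), or(), and negate(); constant names here are illustrative.
public class PredicateCompositionDemo {
    static final Predicate<String> MENTIONS_ANALYZE = p -> p.contains("analyze");
    static final Predicate<String> IS_LONG = p -> p.length() > 100;
    static final Predicate<String> SKIPPED = p -> p.contains("skip");

    // Equivalent to the inline "complex logic" example above.
    static final Predicate<String> COMPLEX =
            MENTIONS_ANALYZE.and(IS_LONG).and(SKIPPED.negate());

    public static void main(String[] args) {
        String longPrompt = "analyze " + "x".repeat(120);
        System.out.println(COMPLEX.test(longPrompt));           // true
        System.out.println(COMPLEX.test("analyze short"));      // false: too short
        System.out.println(COMPLEX.test(longPrompt + " skip")); // false: contains "skip"
    }
}
```

Named predicates keep multi-stub tests readable and let several stubs share one definition of what counts as, say, an "analysis" prompt.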
```java
interface OngoingStubbing<T> {
    OngoingStubbing<T> thenReturn(T value);
    OngoingStubbing<T> thenThrow(Throwable... throwables);
    OngoingStubbing<T> thenAnswer(Answer<?> answer);
}
```

Mockito interface for configuring stub behavior.
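Because each chained call returns the same OngoingStubbing, Mockito lets you queue consecutive answers: `thenReturn("draft").thenReturn("final")` yields "draft" on the first matching call and "final" on every call after that. A plain-Java sketch of that semantics (ConsecutiveAnswers is an illustrative name, not a framework class):

```java
import java.util.List;

// Sketch of Mockito's consecutive-answer semantics for chained thenReturn():
// values are consumed in order, and the last configured value repeats forever.
public class ConsecutiveAnswersDemo {
    static class ConsecutiveAnswers<T> {
        private final List<T> answers;
        private int calls = 0;

        ConsecutiveAnswers(List<T> answers) { this.answers = List.copyOf(answers); }

        T next() {
            T value = answers.get(Math.min(calls, answers.size() - 1));
            calls++;
            return value;
        }
    }

    public static void main(String[] args) {
        // Mirrors: whenGenerateText(p -> true).thenReturn("draft").thenReturn("final");
        ConsecutiveAnswers<String> stub =
                new ConsecutiveAnswers<>(List.of("draft", "final"));
        System.out.println(stub.next()); // draft
        System.out.println(stub.next()); // final
        System.out.println(stub.next()); // final (last value repeats)
    }
}
```

This is useful when an agent calls the LLM several times with prompts too similar to distinguish by predicate alone.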
```java
class LlmInteraction {
    String getModel();
    Double getTemperature();
    Integer getMaxTokens();
    List<String> getToolGroups();
}
```

Configuration for an LLM interaction, including model and parameters.
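Note that getTemperature() and getMaxTokens() return boxed types, so comparisons like `i.getTemperature() == 0.0` auto-unbox and would throw a NullPointerException if the value is unset. A null-safe matcher can be sketched as follows; FakeInteraction is a stand-in with the same getters as the reference above, used here only so the sketch runs without the framework:

```java
import java.util.function.Predicate;

// Null-safe matching on interaction details: guard boxed getters with a null
// check before comparing, and put the literal first in String comparisons.
public class InteractionMatcherDemo {
    record FakeInteraction(String model, Double temperature, Integer maxTokens) {
        String getModel() { return model; }
        Double getTemperature() { return temperature; }
        Integer getMaxTokens() { return maxTokens; }
    }

    static final Predicate<FakeInteraction> PRECISE_GPT4 = i ->
            "gpt-4".equals(i.getModel()) &&
            i.getTemperature() != null && i.getTemperature() == 0.0;

    public static void main(String[] args) {
        System.out.println(PRECISE_GPT4.test(new FakeInteraction("gpt-4", 0.0, 500)));    // true
        System.out.println(PRECISE_GPT4.test(new FakeInteraction("gpt-4", null, 500)));   // false, no NPE
        System.out.println(PRECISE_GPT4.test(new FakeInteraction("gpt-3.5", 0.0, null))); // false
    }
}
```

The same pattern applies to getMaxTokens() and getToolGroups(): check for null before dereferencing, so a stub mismatch fails the predicate instead of crashing the test.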