@llmist/testing provides utilities to mock LLM responses in your test suite. Write deterministic tests for AI-powered applications without making real API calls.
- **Deterministic Tests**: Get the same response every time; no flaky tests due to LLM variability.
- **Fast Execution**: No network calls means tests run in milliseconds, not seconds.
- **No API Costs**: Run thousands of tests without spending on API calls.
- **Edge Cases**: Test error handling, timeouts, and unusual responses easily.
```typescript
import { describe, it, expect } from 'vitest';
import { mockLLM, createMockClient } from '@llmist/testing';

describe('My AI Feature', () => {
  it('should process user requests', async () => {
    // Set up mock response
    mockLLM()
      .forAnyModel()
      .whenMessageContains('hello')
      .returns('Hello! How can I help you today?')
      .register();

    // Create client with mock provider
    const client = createMockClient();

    // Run your agent
    const response = await client.createAgent()
      .withModel('sonnet')
      .askAndCollect('hello');

    expect(response).toContain('Hello');
  });
});
```

| Scenario | Tool |
|---|---|
| Agent responses | `mockLLM()` + `createMockClient()` |
| Gadget execution | `testGadget()` |
| Gadget call verification | `createMockGadget()` |
| Multi-turn conversations | `mockLLM().times(n)` |
| Error handling | `mockLLM().throwsError()` |
| Streaming behavior | `collectStream()` |
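The multi-turn and error-handling rows might look roughly like the sketch below. `mockLLM()`, `createMockClient()`, `.times(n)`, and `.throwsError()` come from the table above; the exact arguments, chaining order, and error type are assumptions for illustration, not the library's confirmed API.

```typescript
import { describe, it, expect } from 'vitest';
import { mockLLM, createMockClient } from '@llmist/testing';

describe('error handling and multi-turn flows', () => {
  it('surfaces a simulated provider error', async () => {
    // Assumed usage: throwsError() makes the matched call fail.
    mockLLM()
      .forAnyModel()
      .whenMessageContains('flaky')
      .throwsError(new Error('rate limited'))
      .register();

    const client = createMockClient();

    await expect(
      client.createAgent().withModel('sonnet').askAndCollect('flaky')
    ).rejects.toThrow('rate limited');
  });

  it('returns the same canned reply for two turns', async () => {
    // Assumed usage: times(2) limits the mock to two matching calls.
    mockLLM()
      .forAnyModel()
      .whenMessageContains('status')
      .returns('Still working on it.')
      .times(2)
      .register();

    const agent = createMockClient().createAgent().withModel('sonnet');

    expect(await agent.askAndCollect('status')).toContain('working');
    expect(await agent.askAndCollect('status')).toContain('working');
  });
});
```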
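For streaming behavior, `collectStream()` from the table might be used roughly as follows. This sketch assumes `collectStream` is exported from `@llmist/testing`, that the agent exposes a streaming call (named `ask()` here purely for illustration), and that the collected result is the full response text.

```typescript
import { it, expect } from 'vitest';
import { mockLLM, createMockClient, collectStream } from '@llmist/testing';

it('streams the mocked reply', async () => {
  mockLLM()
    .forAnyModel()
    .whenMessageContains('hello')
    .returns('Hello! How can I help you today?')
    .register();

  const client = createMockClient();

  // Hypothetical streaming call: `ask()` stands in for whatever method
  // returns a stream; collectStream() gathers the chunks for assertions.
  const stream = client.createAgent().withModel('sonnet').ask('hello');
  const text = await collectStream(stream);

  expect(text).toContain('Hello');
});
```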