Testing Introduction

@llmist/testing provides utilities to mock LLM responses in your test suite. Write deterministic tests for AI-powered applications without making real API calls.

  • Deterministic Tests - get the same response every time, with no flaky tests due to LLM variability
  • Fast Execution - no network calls means tests run in milliseconds, not seconds
  • No API Costs - run thousands of tests without spending on API calls
  • Edge Cases - test error handling, timeouts, and unusual responses easily

  • MockBuilder - Fluent API to define mock responses based on model and message content
  • testGadget() - Test gadgets in isolation without an agent
  • createMockGadget() - Create spy gadgets to verify calls
  • Stream Helpers - Collect and inspect streaming responses
  • CLI Helpers - Test CLI applications with mocked stdin/stdout
```typescript
import { describe, it, expect } from 'vitest';
import { mockLLM, createMockClient } from '@llmist/testing';

describe('My AI Feature', () => {
  it('should process user requests', async () => {
    // Set up mock response
    mockLLM()
      .forAnyModel()
      .whenMessageContains('hello')
      .returns('Hello! How can I help you today?')
      .register();

    // Create client with mock provider
    const client = createMockClient();

    // Run your agent
    const response = await client.createAgent()
      .withModel('sonnet')
      .askAndCollect('hello');

    expect(response).toContain('Hello');
  });
});
```
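The matcher-based mocking style above (register a rule, then resolve requests against it) can be sketched in plain TypeScript. This is a hypothetical, simplified illustration of the pattern, not the library's actual implementation; the `MockRule` and `MockRegistry` names are invented for this example.

```typescript
// Hypothetical sketch of matcher-based mocking: rules pair a predicate
// with a canned response, and requests resolve against the first match.
type MockRule = {
  matches: (model: string, message: string) => boolean;
  response: string;
};

class MockRegistry {
  private rules: MockRule[] = [];

  register(rule: MockRule): void {
    this.rules.push(rule);
  }

  // Return the first matching rule's canned response, or throw if none match.
  resolve(model: string, message: string): string {
    const rule = this.rules.find((r) => r.matches(model, message));
    if (!rule) throw new Error(`No mock registered for model=${model}`);
    return rule.response;
  }
}

const registry = new MockRegistry();
registry.register({
  matches: (_model, message) => message.includes('hello'),
  response: 'Hello! How can I help you today?',
});

console.log(registry.resolve('sonnet', 'hello there'));
// → Hello! How can I help you today?
```

Resolving against the first matching rule keeps tests deterministic: the same request always yields the same canned response, regardless of model behavior.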
| Scenario | Tool |
| --- | --- |
| Agent responses | mockLLM() + createMockClient() |
| Gadget execution | testGadget() |
| Gadget call verification | createMockGadget() |
| Multi-turn conversations | mockLLM().times(n) |
| Error handling | mockLLM().throwsError() |
| Streaming behavior | collectStream() |