
Why llmist?

llmist is a streaming-first, multi-provider LLM client for TypeScript with a home-made tool-calling system.

Rather than waiting for a complete LLM response before parsing tool calls, llmist executes tools while the LLM is still streaming. The moment a tool call block is fully parsed, execution begins.

// Tools execute AS the LLM streams
for await (const event of agent.run()) {
  if (event.type === 'gadget_start') {
    console.log(`Starting: ${event.gadgetName}...`);
  }
  if (event.type === 'gadget_complete') {
    console.log(`Result: ${event.result}`);
  }
  if (event.type === 'text') {
    process.stdout.write(event.text);
  }
}

Home-Made Tool Calling

llmist uses its own block format for tool calls, so no JSON mode or native function-calling support is required. It works with any model that can follow instructions.
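To make the streaming-first idea concrete, here is a minimal sketch of how incremental tool-call extraction can work: buffer streamed text, and dispatch each block the moment its closing delimiter arrives. The `<<tool:...>>` delimiters and all names below are made up for illustration; llmist's actual block format and parser are not shown here.

```typescript
type ToolCall = { name: string; args: string };

// Hypothetical delimiters, NOT llmist's real format.
const OPEN = '<<tool:';
const CLOSE = '>>';

// Scan the accumulated stream buffer for fully-closed tool-call blocks.
// Incomplete blocks stay in `rest` until more chunks arrive.
function extractCompleteBlocks(buffer: string): { calls: ToolCall[]; rest: string } {
  const calls: ToolCall[] = [];
  let rest = buffer;
  for (;;) {
    const start = rest.indexOf(OPEN);
    if (start === -1) break;
    const end = rest.indexOf(CLOSE, start + OPEN.length);
    if (end === -1) break; // block still streaming; wait for the next chunk
    const body = rest.slice(start + OPEN.length, end);
    const sep = body.indexOf(' ');
    calls.push(
      sep === -1
        ? { name: body, args: '' }
        : { name: body.slice(0, sep), args: body.slice(sep + 1) },
    );
    rest = rest.slice(0, start) + rest.slice(end + CLOSE.length);
  }
  return { calls, rest };
}
```

The key property is that a call is executable as soon as its block closes, even while the rest of the response is still streaming.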

Multi-Provider Support

First-class support for OpenAI, Anthropic, and Gemini, with provider auto-discovery from environment variables and model shortcuts like sonnet, gpt4o, and flash.
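A shortcut system like this usually reduces to a lookup table with pass-through for full model names. The sketch below illustrates that pattern only: the shortcut names come from this page, but the full model ids and the resolution logic are assumptions, not llmist internals.

```typescript
// Hypothetical shortcut table; the full ids are placeholders, not real llmist values.
const MODEL_SHORTCUTS: Record<string, string> = {
  sonnet: 'anthropic:claude-sonnet',
  gpt4o: 'openai:gpt-4o',
  flash: 'google:gemini-flash',
};

// Resolve a shortcut to a full model id; unknown names pass through unchanged,
// so fully-qualified model names keep working.
function resolveModel(nameOrShortcut: string): string {
  return MODEL_SHORTCUTS[nameOrShortcut] ?? nameOrShortcut;
}
```

The pass-through default is what lets one call site accept both `'sonnet'` and an explicit provider-qualified model name.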

Full TypeScript Inference

Gadget parameters are fully typed from Zod schemas. No type assertions needed. Your IDE knows the exact shape of every parameter.

Powerful Hook System

Three-layer architecture: Observers (read-only monitoring), Interceptors (synchronous transforms), Controllers (async lifecycle control).
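The three layers differ in what they are allowed to do to an event: observers can only look, interceptors can rewrite, and controllers can asynchronously veto. Here is a generic sketch of that layering in plain TypeScript; the names and signatures are assumptions for illustration, not llmist's actual hook API.

```typescript
type Event = { type: string; text?: string };

type Observer = (event: Event) => void;               // read-only monitoring
type Interceptor = (event: Event) => Event;           // synchronous transform
type Controller = (event: Event) => Promise<boolean>; // async lifecycle control (false = cancel)

// Run an event through the three layers in order:
// observers see it, interceptors may rewrite it, controllers may cancel it.
async function dispatch(
  event: Event,
  observers: Observer[],
  interceptors: Interceptor[],
  controllers: Controller[],
): Promise<Event | null> {
  for (const observe of observers) observe(event);
  for (const intercept of interceptors) event = intercept(event);
  for (const control of controllers) {
    if (!(await control(event))) return null; // cancelled
  }
  return event;
}
```

Separating the layers keeps cheap read-only hooks (logging, metrics) off the async path while still allowing rewrites and cancellation where they are needed.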

import { LLMist, Gadget, z } from 'llmist';

class ScreenSaver extends Gadget({
  description: 'Selects a Windows 98 screensaver',
  schema: z.object({
    style: z.enum(['pipes', 'starfield', 'maze', 'flying-toasters']),
  }),
}) {
  execute(params: this['params']): string {
    const { style } = params;
    return `Activating ${style} screensaver. Move mouse to deactivate.`;
  }
}

const answer = await LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(ScreenSaver)
  .askAndCollect("I'm bored. Start the 3D pipes screensaver.");

Building Real-Time AI UX

If you need responsive, streaming AI experiences where users see tool execution in real time.

Multi-Provider Applications

If you want to switch between OpenAI, Anthropic, and Gemini without code changes.

Type-Safe AI Development

If you want full TypeScript inference for tool parameters and responses.

Command-Line AI Workflows

If you need a powerful CLI for AI automation and scripting.

Choose your path based on how you want to use llmist:

  • Library - Integrate into your TypeScript/JavaScript application
  • CLI - Run agents from the command line
  • Testing - Mock LLM responses in your tests