Introduction

llmist is a TypeScript LLM client with streaming tool execution. Most LLM libraries buffer the entire response before parsing tool calls. llmist parses incrementally.

Gadgets (llmist's tools) execute the moment their block is parsed from the stream, not after the response completes. Your users get real-time feedback, with no buffering.

for await (const event of agent.run()) {
  if (event.type === 'gadget_result') {
    updateUI(event.result); // Fires immediately, mid-stream
  }
}

llmist implements its own tool calling via a simple block format. No response_format: json. No native tool support needed. It works with any model from the supported providers.

!!!GADGET_START:FloppyDisk
!!!ARG:filename
DOOM.ZIP
!!!ARG:megabytes
50
!!!GADGET_END

Markers are fully configurable.
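The format above is simple enough to parse line by line. As a sketch of its semantics (hardcoding the default markers; not llmist's own parser), a block can be turned into a structured call like this:

```typescript
// Sketch: parse one gadget block into a name plus string arguments.
interface GadgetCall {
  name: string;
  args: Record<string, string>;
}

function parseGadgetBlock(text: string): GadgetCall | null {
  let call: GadgetCall | null = null;
  let currentArg: string | null = null;
  const buffer: string[] = [];

  // Store the buffered lines as the value of the current argument.
  const flush = () => {
    if (call && currentArg !== null) {
      call.args[currentArg] = buffer.join("\n").trim();
      buffer.length = 0;
    }
  };

  for (const line of text.split("\n")) {
    if (line.startsWith("!!!GADGET_START:")) {
      call = { name: line.slice("!!!GADGET_START:".length).trim(), args: {} };
    } else if (line.startsWith("!!!ARG:")) {
      flush();
      currentArg = line.slice("!!!ARG:".length).trim();
    } else if (line.startsWith("!!!GADGET_END")) {
      flush();
      return call;
    } else if (currentArg !== null) {
      buffer.push(line); // argument values may span multiple lines
    }
  }
  return null; // no complete block found
}
```

Applied to the FloppyDisk block above, this yields the gadget name plus filename and megabytes as string arguments; validation and type coercion are left out of the sketch.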

OpenAI, Anthropic, and Gemini out of the box—extensible to any provider. Just set API keys as environment variables.
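For example, the official SDKs for these providers conventionally read the variables below; confirm the exact names llmist expects in its documentation, as they are an assumption here:

```shell
# Conventional provider key variables (assumed; check llmist's docs).
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."
```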

.withModel('sonnet') // Anthropic Claude
.withModel('gpt-5') // OpenAI
.withModel('flash') // Google Gemini

Fluent builder, async iterators, full TypeScript inference. Hook into any lifecycle point. Your code stays readable.

const answer = await LLMist.createAgent()
.withModel('sonnet')
.withGadgets(FloppyDisk, DialUpModem)
.withHooks(HookPresets.monitoring())
.askAndCollect('How many floppies for DOOM.ZIP?');
Package            Description
llmist             Core library with agents, gadgets, and providers
@llmist/cli        Command-line interface
@llmist/testing    Testing utilities and mocks