# Introduction
llmist is a TypeScript LLM client with streaming tool execution. Most LLM libraries buffer the entire response before parsing tool calls; llmist parses incrementally.
Your gadgets (tools) fire the instant they are complete in the stream, giving your users immediate feedback.
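The buffered-versus-incremental difference can be shown with plain TypeScript, independent of llmist itself. Everything below is an illustrative sketch, not llmist's API: a mock async generator stands in for the model stream, and the consumer reacts as soon as a gadget's end marker arrives rather than waiting for the stream to finish.

```ts
// Mock model stream: the gadget block is complete at chunk 3 of 4.
async function* modelOutput(): AsyncGenerator<string> {
  yield "Sure, saving that file. ";
  yield "!!!GADGET_START:FloppyDisk";
  yield "!!!GADGET_END"; // block complete here
  yield " ...trailing prose the caller never has to wait for.";
}

// Returns the 1-based chunk index at which a gadget could fire.
async function firstGadgetChunkIndex(): Promise<number> {
  let buffer = "";
  let i = 0;
  for await (const chunk of modelOutput()) {
    buffer += chunk;
    i++;
    // Incremental: act the moment the end marker appears,
    // instead of buffering the whole response first.
    if (buffer.includes("!!!GADGET_END")) return i;
  }
  return -1;
}
```

Here the gadget is detected on the third chunk, one chunk before the stream ends; a buffering client would not see it until after the fourth.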
## Key Features
### Streaming Tool Execution
Gadgets execute the moment their block is parsed, not after the response completes. Real-time UX without buffering.
```ts
for await (const event of agent.run()) {
  if (event.type === 'gadget_result') updateUI(event.result); // Immediate
}
```

### Built-in Function Calling
llmist implements its own tool calling via a simple block format. No `response_format: json`. No native tool support needed. Works with any model from supported providers.
```text
!!!GADGET_START:FloppyDisk
!!!ARG:filename
DOOM.ZIP
!!!ARG:megabytes
50
!!!GADGET_END
```

Markers are fully configurable.
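As a minimal sketch of how such a line-based block can become a structured call without native tool support, here is a self-contained parser. It is not llmist's implementation; the marker strings and line layout are assumed from the example above.

```ts
interface GadgetCall {
  name: string;
  args: Record<string, string>;
}

// Parse one complete gadget block in the line-based marker format.
function parseGadgetBlock(block: string): GadgetCall {
  const lines = block.trim().split("\n");
  const name = lines[0].replace("!!!GADGET_START:", "").trim();
  const args: Record<string, string> = {};
  let currentArg: string | null = null;
  let buffer: string[] = [];
  const flush = () => {
    if (currentArg !== null) args[currentArg] = buffer.join("\n");
    buffer = [];
  };
  for (const line of lines.slice(1)) {
    if (line.startsWith("!!!ARG:")) {
      flush(); // close out the previous argument, if any
      currentArg = line.replace("!!!ARG:", "").trim();
    } else if (line.startsWith("!!!GADGET_END")) {
      flush();
    } else {
      buffer.push(line); // argument value lines (may span several)
    }
  }
  return { name, args };
}
```

Because each argument value sits on its own lines between markers, the format needs no JSON escaping and survives models that cannot produce strict structured output.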
### Multi-Provider Support
OpenAI, Anthropic, and Gemini out of the box, and extensible to any provider. Just set API keys as environment variables.
```ts
.withModel('sonnet') // Anthropic Claude
.withModel('gpt-5')  // OpenAI
.withModel('flash')  // Google Gemini
```

### Composable Agent API
Fluent builder, async iterators, full TypeScript inference. Hook into any lifecycle point. Your code stays readable.
```ts
const answer = await LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(FloppyDisk, DialUpModem)
  .withHooks(HookPresets.monitoring())
  .askAndCollect('How many floppies for DOOM.ZIP?');
```

## Packages
| Package | Description |
|---|---|
| `llmist` | Core library with agents, gadgets, and providers |
| `@llmist/cli` | Command-line interface |
| `@llmist/testing` | Testing utilities and mocks |
## Next Steps
- Installation - Get llmist set up
- Quick Start - Build your first agent