Quick Methods
Simple APIs for basic LLM interactions without agent setup.
Overview
For simple prompts without tools, use quick methods:
```ts
import { LLMist } from 'llmist';

// One-shot completion
const answer = await LLMist.complete('What is 2+2?');

// Streaming
for await (const chunk of LLMist.stream('Tell me a story')) {
  process.stdout.write(chunk);
}
```

Static Methods
LLMist.complete()
Get a complete response as a string:
```ts
// Basic
const answer = await LLMist.complete('Explain quantum computing');
```

```ts
// With options
const answer = await LLMist.complete('Write a haiku', {
  model: 'sonnet',
  temperature: 0.9,
  systemPrompt: 'You are a poet',
  maxTokens: 100,
});
```
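Because complete() returns a promise, ordinary error handling and retries work the same as for any async call. A minimal sketch (the helper name, retry count, and backoff are illustrative, not part of the library):

```ts
// Illustrative helper: retry a completion a few times before giving up
async function completeWithRetry(prompt: string, retries = 3): Promise<string> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await LLMist.complete(prompt);
    } catch (error) {
      lastError = error;
      // Simple linear backoff between attempts
      await new Promise((resolve) => setTimeout(resolve, attempt * 1000));
    }
  }
  throw lastError;
}

const answer = await completeWithRetry('Explain quantum computing');
```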
LLMist.stream()

Stream text chunks in real time:
```ts
// Basic
for await (const chunk of LLMist.stream('Tell me a story')) {
  process.stdout.write(chunk);
}

// With options
for await (const chunk of LLMist.stream('Write code', {
  model: 'gpt4o',
  systemPrompt: 'You are a coding assistant',
})) {
  process.stdout.write(chunk);
}
```
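If you want both live output and the final text, you can accumulate the chunks yourself; a small sketch using only LLMist.stream():

```ts
// Accumulate streamed chunks while printing them live
let story = '';
for await (const chunk of LLMist.stream('Tell me a story')) {
  process.stdout.write(chunk); // live output
  story += chunk;              // keep the full text
}
console.log(`\nReceived ${story.length} characters`);
```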
Instance Methods

Use with a configured client:
```ts
const client = new LLMist({
  defaultProvider: 'anthropic',
});

// Complete
const answer = await client.complete('Hello');

// Stream
for await (const chunk of client.streamText('Hello')) {
  process.stdout.write(chunk);
}
```
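A configured client can be reused across several prompts; a small sketch (the prompts themselves are just examples):

```ts
// Run several independent prompts against the same configured client
const questions = ['What is 2+2?', 'Name three prime numbers'];
const answers = await Promise.all(questions.map((q) => client.complete(q)));
answers.forEach((a, i) => console.log(`${questions[i]} -> ${a}`));
```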
Options

```ts
interface TextGenerationOptions {
  model?: string;        // Model name or alias (default: 'gpt-5-mini')
  temperature?: number;  // 0-1 (default: provider default)
  systemPrompt?: string; // System prompt (default: none)
  maxTokens?: number;    // Max tokens (default: provider default)
}
```
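The same options object can be shared between calls. A minimal sketch, assuming TextGenerationOptions is exported from the package (if it is not, a plain object literal works the same way):

```ts
import { LLMist, type TextGenerationOptions } from 'llmist'; // assumes the type is exported

// One reusable options object for both complete() and stream()
const poetOptions: TextGenerationOptions = {
  model: 'sonnet',
  temperature: 0.9,
  systemPrompt: 'You are a poet',
  maxTokens: 100,
};

const haiku = await LLMist.complete('Write a haiku about rain', poetOptions);

for await (const chunk of LLMist.stream('Write a limerick about rain', poetOptions)) {
  process.stdout.write(chunk);
}
```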
Model Shortcuts

Quick methods work with all model shortcuts:
```ts
await LLMist.complete('Hello', { model: 'haiku' });
await LLMist.complete('Hello', { model: 'sonnet' });
await LLMist.complete('Hello', { model: 'gpt4o' });
await LLMist.complete('Hello', { model: 'flash' });
```
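Because an alias is just the model option, comparing models is an ordinary loop; a sketch using the aliases listed above:

```ts
// Compare one prompt across model aliases
for (const model of ['haiku', 'sonnet', 'gpt4o', 'flash']) {
  const answer = await LLMist.complete('Summarize TypeScript in one sentence', { model });
  console.log(`${model}: ${answer}`);
}
```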
When to Use Quick Methods

Use quick methods when:
- You have simple prompts without tools
- You don't need conversation history
- You don't need event handling
- You just want text output
Use agents when:
- You need tools (gadgets)
- You want streaming events
- You need conversation history
- You want lifecycle hooks
Comparison
```ts
// Quick method (simple)
const answer = await LLMist.complete('What is 2+2?');
```

```ts
// Agent (same result, more verbose)
const answer = await LLMist.createAgent()
  .withModel('gpt-5-mini')
  .askAndCollect('What is 2+2?');
```

See Also
- Quick Start - Full guide
- Streaming Guide - Agent streaming
- Models & Aliases - All available models