# Class: LLMist

Defined in: core/client.ts:59

## Constructors

### new LLMist(): LLMist

Defined in: core/client.ts:71

Returns: LLMist

### new LLMist(adapters): LLMist

Defined in: core/client.ts:72

Parameters:

- adapters (ProviderAdapter[])

Returns: LLMist

### new LLMist(adapters, defaultProvider): LLMist

Defined in: core/client.ts:73

Parameters:

- adapters (ProviderAdapter[])
- defaultProvider (string)

Returns: LLMist

### new LLMist(options): LLMist

Defined in: core/client.ts:74

Parameters:

- options (LLMistOptions)

Returns: LLMist

## Properties

### readonly image: ImageNamespace

Defined in: core/client.ts:67

### readonly modelRegistry: ModelRegistry

Defined in: core/client.ts:62

### readonly speech: SpeechNamespace

Defined in: core/client.ts:68

### readonly text: TextNamespace

Defined in: core/client.ts:66

### readonly vision: VisionNamespace

Defined in: core/client.ts:69

## Methods

### complete(prompt, options?): Promise<string>

Defined in: core/client.ts:264

Instance method: quick completion using this client instance.

Parameters:

- prompt (string): User prompt
- options? (TextGenerationOptions): Optional configuration

Returns: Promise<string>

The complete text response.


### countTokens(model, messages): Promise<number>

Defined in: core/client.ts:181

Count tokens in messages for a given model.

Uses provider-specific token counting methods for accurate estimation:

- OpenAI: tiktoken library with model-specific encodings
- Anthropic: native messages.countTokens() API
- Gemini: the SDK's countTokens() method

Falls back to character-based estimation (4 chars/token) if the provider doesn't support native token counting or if counting fails.

This is useful for:

- Pre-request cost estimation
- Context window management
- Request batching optimization

Parameters:

- model (string): Model identifier (e.g., "openai:gpt-4", "anthropic:claude-3-5-sonnet-20241022")
- messages (LLMMessage[]): Array of messages to count tokens for

Returns: Promise<number>

A promise resolving to the estimated input token count.

Example:

```ts
const client = new LLMist();
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Hello!' }
];
const tokenCount = await client.countTokens('openai:gpt-4', messages);
console.log(`Estimated tokens: ${tokenCount}`);
```
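The character-based fallback described above can be sketched as follows; `estimateTokens` and the inline `LLMMessage` shape are illustrative stand-ins, not part of the library's API.

```typescript
// Illustrative sketch of the character-based fallback (4 chars/token).
type LLMMessage = { role: string; content: string };

function estimateTokens(messages: LLMMessage[]): number {
  // Sum the character counts of all message contents, then divide by 4.
  const totalChars = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(totalChars / 4);
}

const sample: LLMMessage[] = [
  { role: "system", content: "You are a helpful assistant." }, // 28 chars
  { role: "user", content: "Hello!" },                         // 6 chars
];
console.log(estimateTokens(sample)); // ceil(34 / 4) = 9
```

Because this counts raw characters rather than tokenizer output, treat the result as a rough upper-level estimate, not an exact count.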

### createAgent(): AgentBuilder

Defined in: core/client.ts:325

Create an agent builder bound to this client instance. Useful when you want to reuse a configured client.

Returns: AgentBuilder

An AgentBuilder instance using this client.

Example:

```ts
const client = new LLMist({ ... });
const agent = client.createAgent()
  .withModel("sonnet")
  .ask("Hello");
```

### stream(options): LLMStream

Defined in: core/client.ts:142

Parameters:

- options (LLMGenerationOptions)

Returns: LLMStream


### streamText(prompt, options?): AsyncGenerator<string>

Defined in: core/client.ts:275

Instance method: quick streaming using this client instance.

Parameters:

- prompt (string): User prompt
- options? (TextGenerationOptions): Optional configuration

Returns: AsyncGenerator<string>

An async generator yielding text chunks.
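When you want the full text rather than incremental chunks, the generator can be drained into a single string. The `collectText` helper and the stand-in generator below are illustrative only (not part of LLMist); with a real client you would pass `client.streamText(...)` in place of `fakeStream()`.

```typescript
// Illustrative helper: drain an async generator of chunks into one string.
async function collectText(chunks: AsyncGenerator<string>): Promise<string> {
  let out = "";
  for await (const chunk of chunks) out += chunk;
  return out;
}

// Stand-in generator so the sketch runs without a configured client.
async function* fakeStream(): AsyncGenerator<string> {
  yield "Hello, ";
  yield "world!";
}

collectText(fakeStream()).then((text) => console.log(text)); // "Hello, world!"
```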


### static complete(prompt, options?): Promise<string>

Defined in: core/client.ts:224

Quick completion that returns the final text response. Convenient for simple queries without needing agent setup.

Parameters:

- prompt (string): User prompt
- options? (TextGenerationOptions): Optional configuration

Returns: Promise<string>

The complete text response.

Example:

```ts
const answer = await LLMist.complete("What is 2+2?");
console.log(answer); // "4" or "2+2 equals 4"

// With options
const joke = await LLMist.complete("Tell me a joke", {
  model: "sonnet",
  temperature: 0.9
});
```

### static createAgent(): AgentBuilder

Defined in: core/client.ts:306

Create a fluent agent builder. Provides a chainable API for configuring and creating agents.

Returns: AgentBuilder

An AgentBuilder instance for chaining.

Example:

```ts
const agent = LLMist.createAgent()
  .withModel("sonnet")
  .withSystem("You are a helpful assistant")
  .withGadgets(Calculator, Weather)
  .ask("What's the weather in Paris?");

for await (const event of agent.run()) {
  // handle events
}

// Quick one-liner for simple queries
const answer = await LLMist.createAgent()
  .withModel("gpt4-mini")
  .askAndCollect("What is 2+2?");
```

### static stream(prompt, options?): AsyncGenerator<string>

Defined in: core/client.ts:252

Quick streaming that returns an async generator of text chunks. Convenient for streaming responses without needing agent setup.

Parameters:

- prompt (string): User prompt
- options? (TextGenerationOptions): Optional configuration

Returns: AsyncGenerator<string>

An async generator yielding text chunks.

Example:

```ts
for await (const chunk of LLMist.stream("Tell me a story")) {
  process.stdout.write(chunk);
}

// With options
for await (const chunk of LLMist.stream("Generate code", {
  model: "gpt4",
  systemPrompt: "You are a coding assistant"
})) {
  process.stdout.write(chunk);
}
```