LLMist
Defined in: core/client.ts:59
Constructors

Constructor

new LLMist(): LLMist
Defined in: core/client.ts:71
Returns

LLMist
Constructor

new LLMist(adapters): LLMist
Defined in: core/client.ts:72
Parameters

adapters

Returns

LLMist
Constructor

new LLMist(adapters, defaultProvider): LLMist
Defined in: core/client.ts:73
Parameters

adapters

defaultProvider

string

Returns

LLMist
Constructor

new LLMist(options): LLMist
Defined in: core/client.ts:74
Parameters

options

Returns

LLMist
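A minimal sketch combining the overloads above; `myAdapters` is a placeholder, since the concrete adapters type is not shown on this page:

```ts
// Zero-config client using default adapters.
const client = new LLMist();

// With explicit adapters and a default provider string.
// "myAdapters" is a hypothetical value; its type isn't documented here.
const custom = new LLMist(myAdapters, "anthropic");
```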
Properties

image

readonly image: ImageNamespace
Defined in: core/client.ts:67
modelRegistry

readonly modelRegistry: ModelRegistry
Defined in: core/client.ts:62
speech

readonly speech: SpeechNamespace
Defined in: core/client.ts:68
text

readonly text: TextNamespace
Defined in: core/client.ts:66
vision

readonly vision: VisionNamespace
Defined in: core/client.ts:69
Methods

complete()

complete(prompt, options?): Promise<string>
Defined in: core/client.ts:264
Instance method: Quick completion using this client instance.
Parameters

prompt

string

User prompt

options?

Optional configuration
Returns

Promise<string>

Complete text response
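A usage sketch on a client instance, mirroring the static complete() example further below (the "sonnet" model alias is taken from that example):

```ts
const client = new LLMist();
const answer = await client.complete("What is 2+2?", { model: "sonnet" });
console.log(answer);
```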
countTokens()

countTokens(model, messages): Promise<number>
Defined in: core/client.ts:181
Count tokens in messages for a given model.
Uses provider-specific token counting methods for accurate estimation:
- OpenAI: tiktoken library with model-specific encodings
- Anthropic: Native messages.countTokens() API
- Gemini: SDK’s countTokens() method
Falls back to character-based estimation (4 chars/token) if the provider doesn’t support native token counting or if counting fails.
This is useful for:
- Pre-request cost estimation
- Context window management
- Request batching optimization
Parameters

model

string

Model identifier (e.g., “openai:gpt-4”, “anthropic:claude-3-5-sonnet-20241022”)
messages

Array of messages to count tokens for
Returns

Promise<number>

Promise resolving to the estimated input token count
Example

```ts
const client = new LLMist();
const messages = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Hello!' }
];

const tokenCount = await client.countTokens('openai:gpt-4', messages);
console.log(`Estimated tokens: ${tokenCount}`);
```
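A hedged follow-up sketch of the context-window use case mentioned above; the 128,000-token limit is illustrative, not taken from this page:

```ts
const estimated = await client.countTokens('openai:gpt-4', messages);
const contextLimit = 128_000; // illustrative; check your model's actual window
if (estimated > contextLimit) {
  // Trim older turns or summarize history before sending the request.
}
```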
createAgent()

createAgent(): AgentBuilder
Defined in: core/client.ts:325
Create agent builder with this client instance. Useful when you want to reuse a configured client.
Returns

AgentBuilder

AgentBuilder instance using this client
Example

```ts
const client = new LLMist({ ... });

const agent = client.createAgent()
  .withModel("sonnet")
  .ask("Hello");
```

stream()
stream(options): LLMStream
Defined in: core/client.ts:142
Parameters

options

Returns

LLMStream

streamText()
streamText(prompt, options?): AsyncGenerator<string>
Defined in: core/client.ts:275
Instance method: Quick streaming using this client instance.
Parameters

prompt

string

User prompt

options?

Optional configuration
Returns

AsyncGenerator<string>

Async generator yielding text chunks
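A usage sketch on a client instance, analogous to the static stream() example at the end of this page:

```ts
const client = new LLMist();
for await (const chunk of client.streamText("Tell me a story")) {
  process.stdout.write(chunk);
}
```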
complete()

static complete(prompt, options?): Promise<string>
Defined in: core/client.ts:224
Quick completion - returns final text response. Convenient for simple queries without needing agent setup.
Parameters

prompt

string

User prompt

options?

Optional configuration
Returns

Promise<string>

Complete text response
Example

```ts
const answer = await LLMist.complete("What is 2+2?");
console.log(answer); // "4" or "2+2 equals 4"
```

```ts
const answer = await LLMist.complete("Tell me a joke", {
  model: "sonnet",
  temperature: 0.9
});
```

createAgent()
static createAgent(): AgentBuilder
Defined in: core/client.ts:306
Create a fluent agent builder. Provides a chainable API for configuring and creating agents.
Returns

AgentBuilder

AgentBuilder instance for chaining
Examples

```ts
const agent = LLMist.createAgent()
  .withModel("sonnet")
  .withSystem("You are a helpful assistant")
  .withGadgets(Calculator, Weather)
  .ask("What's the weather in Paris?");

for await (const event of agent.run()) {
  // handle events
}
```

```ts
// Quick one-liner for simple queries
const answer = await LLMist.createAgent()
  .withModel("gpt4-mini")
  .askAndCollect("What is 2+2?");
```

stream()
static stream(prompt, options?): AsyncGenerator<string>
Defined in: core/client.ts:252
Quick streaming - returns async generator of text chunks. Convenient for streaming responses without needing agent setup.
Parameters

prompt

string

User prompt

options?

Optional configuration
Returns

AsyncGenerator<string>

Async generator yielding text chunks
Example

```ts
for await (const chunk of LLMist.stream("Tell me a story")) {
  process.stdout.write(chunk);
}

// With options
for await (const chunk of LLMist.stream("Generate code", {
  model: "gpt4",
  systemPrompt: "You are a coding assistant"
})) {
  process.stdout.write(chunk);
}
```