All configuration options for the LLMist client and agents.
```typescript
const client = new LLMist(options);
```
| Option | Type | Default | Description |
|---|---|---|---|
| `adapters` | `ProviderAdapter[]` | `[]` | Manual provider adapters |
| `defaultProvider` | `string` | First adapter | Default provider prefix |
| `autoDiscoverProviders` | `boolean` | `true` | Auto-discover from env vars |
| `customModels` | `ModelSpec[]` | `[]` | Custom model specifications |
```typescript
const client = new LLMist({
  autoDiscoverProviders: true,
  defaultProvider: 'anthropic',
  customModels: [
    {
      modelId: 'ft:gpt-5:my-org',
      pricing: { input: 5.0, output: 15.0 },
      knowledgeCutoff: '2024-08',
      features: { streaming: true, functionCalling: true, vision: true },
    },
  ],
});
```
| Method | Type | Default | Description |
|---|---|---|---|
| `.withModel(model)` | `string` | `openai:gpt-5.2` | Model name or alias |
| `.withSystem(prompt)` | `string` | none | System prompt |
| `.withTemperature(temp)` | `number` | Provider default | Temperature (0-1) |
| `.withMaxIterations(n)` | `number` | 10 | Max agent loop iterations |
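For example, a sketch chaining these methods, assuming the `Agent` returned by `client.ask(prompt)` (see the client methods table at the end) exposes the builder methods directly; the prompts are illustrative:

```typescript
// Build an agent with the core options; the default model is shown in the table above.
const agent = client
  .ask("Summarize this week's error logs")
  .withModel('openai:gpt-5.2')
  .withSystem('You are a concise assistant.')
  .withTemperature(0.2)
  .withMaxIterations(10);
```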
| Method | Type | Description |
|---|---|---|
| `.withGadgets(...gadgets)` | `GadgetOrClass[]` | Register gadgets (classes or instances) |
| `.withDefaultGadgetTimeout(ms)` | `number` | Default timeout for all gadgets |
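For instance, registering gadgets with a shared timeout (the gadget classes here are hypothetical):

```typescript
// Hypothetical gadgets; .withGadgets accepts classes or instances per the table above.
agent
  .withGadgets(WebSearchGadget, new CalculatorGadget())
  .withDefaultGadgetTimeout(30_000); // 30s default timeout for every gadget
```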
| Method | Type | Default | Description |
|---|---|---|---|
| `.withStopOnGadgetError(stop)` | `boolean` | `true` | Stop on first gadget error |
| `.withErrorHandler(handler)` | `Function` | none | Custom error handling |
| `.withRetry(config)` | `RetryConfig` | Enabled, 3 retries | Configure retry with exponential backoff |
| `.withoutRetry()` | - | - | Disable automatic retry |
```typescript
.withErrorHandler((ctx) => {
  // Return true to continue, false to stop
  return ctx.errorType !== 'execution';
})
```

```typescript
// Configure retry behavior for rate limits and transient errors
.withRetry({
  retries: 5,        // Max retry attempts
  minTimeout: 2000,  // Initial delay (ms)
  maxTimeout: 60000, // Max delay (ms)
  onRetry: (error, attempt) => console.log(`Retry ${attempt}`),
})

// Disable retry entirely
.withoutRetry()
```
| Method | Type | Description |
|---|---|---|
| `.withHistory(messages)` | `HistoryMessage[]` | Add conversation history |
| `.addMessage(message)` | `HistoryMessage` | Add single message |
```typescript
.withHistory([
  { user: 'Hello!' },
  { assistant: 'Hi there!' },
])
```
| Method | Type | Description |
|---|---|---|
| `.withHooks(hooks)` | `AgentHooks` | Lifecycle hooks |
| `.withLogger(logger)` | `Logger` | Custom tslog logger |
| `.onHumanInput(handler)` | `Function` | Human input handler |
| `.withTrailingMessage(message)` | `string \| Function` | Ephemeral message appended to each request |
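A sketch of wiring these together; the tslog `Logger` constructor is real and the `beforeLLMCall` hook name comes from the trailing-message notes below, but the hook context shape and the human-input handler contract are assumptions:

```typescript
import { Logger } from "tslog";

agent
  .withLogger(new Logger({ name: "llmist-agent" })) // custom tslog logger
  .withHooks({
    beforeLLMCall: (ctx) => {
      // Inspect or adjust the outgoing request here (ctx shape assumed)
    },
  })
  .onHumanInput(async (question) => {
    // Assumed contract: return the human's reply as a string
    return readAnswerFromTerminal(question); // hypothetical helper
  });
```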
Add an ephemeral message that appears at the end of each LLM request but is not persisted to conversation history. This is useful for:
- **Reminders**: Instructions that need to be reinforced on every turn
- **Context injection**: Current state or status that changes independently
- **Format enforcement**: “Always respond in JSON format”
```typescript
// Static message
.withTrailingMessage("Always respond in JSON format.")

// Dynamic message based on iteration
.withTrailingMessage((ctx) =>
  `[Iteration ${ctx.iteration}/${ctx.maxIterations}] Focus on completing the current task.`
)

// Inject current status/state
let taskStatus = "pending";
.withTrailingMessage(() =>
  `[Current task status: ${taskStatus}] Adjust your approach based on this status.`
)
```
Key behavior:

- The message is ephemeral: it only appears in the current LLM request
- It is not persisted to conversation history
- It composes with existing `beforeLLMCall` hooks
- It respects the “skip” action from existing controllers
| Method | Type | Description |
|---|---|---|
| `.withPromptConfig(config)` | `PromptTemplateConfig` | Custom prompt templates |
| `.withGadgetStartPrefix(prefix)` | `string` | Custom gadget marker start (default: `!!!GADGET_START:`) |
| `.withGadgetEndPrefix(prefix)` | `string` | Custom gadget marker end (default: `!!!GADGET_END`) |
| `.withGadgetArgPrefix(prefix)` | `string` | Custom argument prefix for block format (default: `!!!ARG:`) |
| `.withTextOnlyHandler(handler)` | `TextOnlyHandler` | Handle text-only responses |
| `.withTextWithGadgetsHandler(handler)` | `object` | Wrap text alongside gadget calls |
All three marker prefixes can be customized if you need to avoid conflicts with your content or match existing systems:
```typescript
.withGadgetStartPrefix("<<GADGET_START>>")
.withGadgetEndPrefix("<<GADGET_END>>")
.withGadgetArgPrefix("<<ARG>>")
```
Or in the CLI config (`~/.llmist/cli.toml`):
```toml
gadget-start-prefix = "<<GADGET_START>>"
gadget-end-prefix = "<<GADGET_END>>"
gadget-arg-prefix = "<<ARG>>"
```
Control how text responses are handled in the agent loop:
```typescript
// Handle text-only responses (when the LLM doesn't call any gadgets)
.withTextOnlyHandler("acknowledge")    // Continue loop
.withTextOnlyHandler("terminate")      // End loop (default)
.withTextOnlyHandler("wait_for_input") // Ask for human input
```
```typescript
// Wrap text that accompanies gadget calls as synthetic gadget calls.
// This keeps conversation history consistent and gadget-oriented.
.withTextWithGadgetsHandler({
  parameterMapping: (text) => ({ message: text, done: false, type: "info" }),
  resultMapping: (text) => `ℹ️ ${text}`, // Optional: format the result
})
```
The `textWithGadgetsHandler` is useful when you want text that appears alongside gadget calls to also appear in the conversation history as an explicit gadget call. This helps LLMs maintain a consistent “gadget invocation” mindset.
| Variable | Provider | Description |
|---|---|---|
| `OPENAI_API_KEY` | OpenAI | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic | Anthropic API key |
| `GEMINI_API_KEY` | Gemini | Google Gemini API key |
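As a sketch of auto-discovery under the default settings (the key value is a placeholder, and normally you would export it in your shell rather than set it in code):

```typescript
// With autoDiscoverProviders: true (the default), an adapter is created for
// every provider whose API key is present in the environment.
process.env.ANTHROPIC_API_KEY = "sk-ant-placeholder";
const client = new LLMist(); // picks up the Anthropic adapter automatically
```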
```typescript
LLMist.complete(prompt, options);
LLMist.stream(prompt, options);
```
| Option | Type | Default | Description |
|---|---|---|---|
| `model` | `string` | `gpt-5.2` | Model name or alias |
| `temperature` | `number` | Provider default | Temperature (0-1) |
| `systemPrompt` | `string` | none | System prompt |
| `maxTokens` | `number` | Provider default | Max tokens to generate |
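For example, a sketch assuming `complete` resolves to the generated text and `stream` returns an async iterable of text chunks:

```typescript
// One-shot completion with explicit options
const text = await LLMist.complete("Explain exponential backoff in one sentence.", {
  model: "gpt-5.2",
  temperature: 0.3,
  systemPrompt: "Be brief.",
  maxTokens: 200,
});

// Streaming; each chunk is assumed to be a text delta
for await (const chunk of LLMist.stream("Write a haiku about retries.", { model: "gpt-5.2" })) {
  process.stdout.write(chunk);
}
```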
| Method | Returns | Description |
|---|---|---|
| `.ask(prompt)` | `Agent` | Create agent (don’t run) |
| `.askAndCollect(prompt)` | `Promise<string>` | Run and collect text |
| `.askWith(prompt, handlers)` | `Promise<void>` | Run with event handlers |
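A quick sketch of the three entry points (the event-handler name passed to `askWith` is an assumption):

```typescript
// Create an agent without running it, to configure and run later
const agent = client.ask("Plan the next release");

// Run the agent loop and collect the final text
const answer = await client.askAndCollect("Plan the next release");

// Run with event handlers; the handler name here is illustrative
await client.askWith("Plan the next release", {
  onText: (text) => process.stdout.write(text),
});
```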