
# Configuration

All configuration options for the LLMist client and agents.

## Client Options

```typescript
const client = new LLMist(options);
```
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `adapters` | `ProviderAdapter[]` | `[]` | Manual provider adapters |
| `defaultProvider` | `string` | First adapter | Default provider prefix |
| `autoDiscoverProviders` | `boolean` | `true` | Auto-discover from env vars |
| `customModels` | `ModelSpec[]` | `[]` | Custom model specifications |
```typescript
// Full example
const client = new LLMist({
  autoDiscoverProviders: true,
  defaultProvider: 'anthropic',
  customModels: [{
    provider: 'openai',
    modelId: 'ft:gpt-5:my-org',
    displayName: 'My Model',
    contextWindow: 128_000,
    maxOutputTokens: 16_384,
    pricing: { input: 5.0, output: 15.0 },
    knowledgeCutoff: '2024-08',
    features: { streaming: true, functionCalling: true, vision: true },
  }],
});
```
## Agent Builder

```typescript
LLMist.createAgent()
  .withModel(model)
  .withSystem(prompt)
  // ... etc
```
| Method | Type | Default | Description |
| --- | --- | --- | --- |
| `.withModel(model)` | `string` | `openai:gpt-5.2` | Model name or alias |
| `.withSystem(prompt)` | `string` | none | System prompt |
| `.withTemperature(temp)` | `number` | Provider default | Temperature (0-1) |
| `.withMaxIterations(n)` | `number` | `10` | Max agent loop iterations |
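Taken together, a sketch of the core options; the `sonnet` alias is the one used in the synthetic-call example further down, and the prompt text is illustrative:

```typescript
// Sketch: core agent options chained together
const agent = LLMist.createAgent()
  .withModel('sonnet') // alias used elsewhere in these docs
  .withSystem('You are a concise assistant.')
  .withTemperature(0.2)
  .withMaxIterations(5)
  .ask('Summarize the latest commit'); // creates the agent without running it
```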
### Gadgets

| Method | Type | Description |
| --- | --- | --- |
| `.withGadgets(...gadgets)` | `GadgetOrClass[]` | Register gadgets (classes or instances) |
| `.withDefaultGadgetTimeout(ms)` | `number` | Default timeout for all gadgets |
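For example, reusing the gadget classes from the synthetic-call example below (the 30-second timeout value is illustrative):

```typescript
// Sketch: register gadget classes and set a shared default timeout
LLMist.createAgent()
  .withGadgets(ListDirectory, ReadFile) // classes or instances
  .withDefaultGadgetTimeout(30_000)     // illustrative: 30s for every gadget
  .ask('Inspect the project');
```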
### Error Handling

| Method | Type | Default | Description |
| --- | --- | --- | --- |
| `.withStopOnGadgetError(stop)` | `boolean` | `true` | Stop on first gadget error |
| `.withErrorHandler(handler)` | `Function` | none | Custom error handling |
| `.withRetry(config)` | `RetryConfig` | Enabled, 3 retries | Configure retry with exponential backoff |
| `.withoutRetry()` | - | - | Disable automatic retry |
```typescript
.withErrorHandler((ctx) => {
  // Return true to continue, false to stop
  return ctx.errorType !== 'execution';
})

// Configure retry behavior for rate limits and transient errors
.withRetry({
  retries: 5,        // Max retry attempts
  minTimeout: 2000,  // Initial delay (ms)
  maxTimeout: 60000, // Max delay (ms)
  onRetry: (error, attempt) => console.log(`Retry ${attempt}`),
})

// Disable retry entirely
.withoutRetry()
```
### Conversation History

| Method | Type | Description |
| --- | --- | --- |
| `.withHistory(messages)` | `HistoryMessage[]` | Add conversation history |
| `.addMessage(message)` | `HistoryMessage` | Add a single message |
| `.withSyntheticGadgetCall(...)` | See below | Pre-seed gadget results |
```typescript
.withHistory([
  { user: 'Hello' },
  { assistant: 'Hi there!' },
])
```
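Single messages can be appended the same way; a sketch, assuming `HistoryMessage` uses the same role-keyed shape as `withHistory`:

```typescript
// Sketch: append one message, assuming the role-keyed shape shown above
.addMessage({ user: 'What changed since then?' })
```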

### Synthetic Gadget Calls

Inject synthetic gadget calls into conversation history so the agent starts with context already visible. This is useful for:

- **Codebase context:** Show directory structure before the agent starts
- **In-context learning:** Demonstrate expected gadget call patterns
- **Workflow bootstrapping:** Pre-populate known data
```typescript
// SDK: Pre-seed a ListDirectory result
LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(ListDirectory, ReadFile)
  .withSyntheticGadgetCall(
    'ListDirectory',                       // Gadget name
    { directoryPath: '.', maxDepth: 2 },   // Parameters "used"
    './src\n./package.json\n./README.md',  // Pre-filled result
    'gc_init_1'                            // Invocation ID
  )
  .ask('Analyze this project');
```

Or in CLI config (`~/.llmist/cli.toml`):

```toml
[my-profile]
inherits = "agent"
system = "You are a code analyst."
initial-gadgets = [
  { gadget = "ListDirectory", parameters = { directoryPath = ".", maxDepth = 2 }, result = """
./src
./src/index.ts
./package.json
./README.md
""" }
]
```

The agent sees this as if it had already called `ListDirectory` and received the result, gaining immediate context without spending an iteration.

### Hooks

| Method | Type | Description |
| --- | --- | --- |
| `.withHooks(hooks)` | `AgentHooks` | Lifecycle hooks |
| `.withLogger(logger)` | `Logger` | Custom tslog logger |
| `.onHumanInput(handler)` | `Function` | Human input handler |
| `.withTrailingMessage(message)` | `string \| Function` | Ephemeral message appended to each request |
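A sketch of wiring these together; `beforeLLMCall` is the hook name referenced under trailing messages below, but its payload and the `onHumanInput` handler signature are assumptions:

```typescript
import { Logger } from 'tslog';

LLMist.createAgent()
  .withLogger(new Logger({ name: 'agent' })) // tslog logger instance
  .withHooks({
    // Assumed payload shape; beforeLLMCall is the hook named in the
    // trailing-message notes below
    beforeLLMCall: (ctx) => console.log('about to call the LLM'),
  })
  .onHumanInput(async (question) => {
    // Assumed signature: resolve with the human's reply as a string
    return 'yes, proceed';
  });
```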

### Trailing Messages

Add an ephemeral message that appears at the end of each LLM request but is not persisted to conversation history. This is useful for:

- **Reminders:** Instructions that need to be reinforced on every turn
- **Context injection:** Current state or status that changes independently
- **Format enforcement:** "Always respond in JSON format"
```typescript
// Static message
LLMist.createAgent()
  .withTrailingMessage("Always respond in JSON format.")
  .ask("List users");

// Dynamic message based on iteration
LLMist.createAgent()
  .withTrailingMessage((ctx) =>
    `[Iteration ${ctx.iteration}/${ctx.maxIterations}] Focus on completing the current task.`
  )
  .ask("Build a web app");

// Inject current status/state
let taskStatus = "pending";
LLMist.createAgent()
  .withTrailingMessage(() =>
    `[Current task status: ${taskStatus}] Adjust your approach based on this status.`
  )
  .ask("Process tasks");
```

Key behavior:

- The message is ephemeral: it appears only in the current LLM request
- It is not persisted to conversation history
- It composes with existing `beforeLLMCall` hooks
- It respects the "skip" action from existing controllers
### Prompt Customization

| Method | Type | Description |
| --- | --- | --- |
| `.withPromptConfig(config)` | `PromptTemplateConfig` | Custom prompt templates |
| `.withGadgetStartPrefix(prefix)` | `string` | Custom gadget marker start (default: `!!!GADGET_START:`) |
| `.withGadgetEndPrefix(prefix)` | `string` | Custom gadget marker end (default: `!!!GADGET_END`) |
| `.withGadgetArgPrefix(prefix)` | `string` | Custom argument prefix for block format (default: `!!!ARG:`) |
| `.withTextOnlyHandler(handler)` | `TextOnlyHandler` | Handle text-only responses |
| `.withTextWithGadgetsHandler(handler)` | `object` | Wrap text alongside gadget calls |

All three marker prefixes can be customized if you need to avoid conflicts with your content or match existing systems:

```typescript
LLMist.createAgent()
  .withGadgetStartPrefix("<<GADGET_START>>")
  .withGadgetEndPrefix("<<GADGET_END>>")
  .withGadgetArgPrefix("<<ARG>>")
  // ...
```

Or in CLI config (`~/.llmist/cli.toml`):

```toml
[agent]
gadget-start-prefix = "<<GADGET_START>>"
gadget-end-prefix = "<<GADGET_END>>"
gadget-arg-prefix = "<<ARG>>"
```

Control how text responses are handled in the agent loop:

```typescript
// Handle text-only responses (when the LLM doesn't call any gadgets)
.withTextOnlyHandler("acknowledge")    // Continue loop
.withTextOnlyHandler("terminate")      // End loop (default)
.withTextOnlyHandler("wait_for_input") // Ask for human input

// Wrap text that accompanies gadget calls as synthetic gadget calls
// This keeps conversation history consistent and gadget-oriented
.withTextWithGadgetsHandler({
  gadgetName: "TellUser",
  parameterMapping: (text) => ({ message: text, done: false, type: "info" }),
  resultMapping: (text) => `ℹ️ ${text}`, // Optional: format the result
})
```

The `textWithGadgetsHandler` is useful when you want text that appears alongside gadget calls to also appear in the conversation history as an explicit gadget call. This helps LLMs maintain a consistent "gadget invocation" mindset.

## Environment Variables

| Variable | Provider | Description |
| --- | --- | --- |
| `OPENAI_API_KEY` | OpenAI | OpenAI API key |
| `ANTHROPIC_API_KEY` | Anthropic | Anthropic API key |
| `GEMINI_API_KEY` | Gemini | Google Gemini API key |
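With any of these set, auto-discovery (on by default) registers the matching adapter without manual configuration; a sketch, assuming the client options object is optional:

```typescript
// Sketch: with OPENAI_API_KEY exported in the environment,
// autoDiscoverProviders (true by default) picks up the OpenAI adapter
const client = new LLMist(); // assumes options may be omitted
```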
## Quick Completion

```typescript
LLMist.complete(prompt, options);
LLMist.stream(prompt, options);
```
| Option | Type | Default | Description |
| --- | --- | --- | --- |
| `model` | `string` | `gpt-5.2` | Model name or alias |
| `temperature` | `number` | Provider default | Temperature (0-1) |
| `systemPrompt` | `string` | none | System prompt |
| `maxTokens` | `number` | Provider default | Max tokens to generate |
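A sketch of both helpers, assuming `complete` resolves to the generated text and `stream` yields text chunks as an async iterable:

```typescript
// Sketch: one-shot completion (assumes complete() resolves to a string)
const text = await LLMist.complete('Explain tail recursion', {
  model: 'gpt-5.2',
  temperature: 0.3,
  systemPrompt: 'Be brief.',
  maxTokens: 256,
});

// Sketch: streaming (assumes stream() returns an async iterable of chunks)
for await (const chunk of LLMist.stream('Tell a short story', { model: 'gpt-5.2' })) {
  process.stdout.write(chunk);
}
```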
## Agent Execution

| Method | Returns | Description |
| --- | --- | --- |
| `.ask(prompt)` | `Agent` | Create agent (don't run) |
| `.askAndCollect(prompt)` | `Promise<string>` | Run and collect text |
| `.askWith(prompt, handlers)` | `Promise<void>` | Run with event handlers |
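The entry points in use; the `handlers` shape for `askWith` isn't enumerated here, so the `onText` handler name is illustrative:

```typescript
// Run the agent and collect the final text
const answer = await LLMist.createAgent()
  .withModel('openai:gpt-5.2')
  .askAndCollect('Summarize this repository');

// Run with event handlers (the onText name is illustrative)
await LLMist.createAgent()
  .withModel('openai:gpt-5.2')
  .askWith('Summarize this repository', {
    onText: (text) => process.stdout.write(text),
  });
```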