
Agent Configuration

This guide shows how to configure llmist agents for common use cases, from simple completions to complex multi-tool workflows.

Quick responses without tools:

const answer = await LLMist.createAgent()
  .withModel('haiku') // Fast, cheap model
  .withSystem('You are a helpful assistant.')
  .askAndCollect('What is the capital of France?');

Agent with gadgets for specific capabilities:

const result = await LLMist.createAgent()
  .withModel('sonnet')
  .withSystem('You are an arcade historian.')
  .withGadgets(ArcadeHighScore, FloppyDisk)
  .withMaxIterations(5)
  .askAndCollect('What were the top Pac-Man scores and how many floppies to back them up?');

For complex multi-step tasks:

const agent = LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(FileReader, FileWriter, ShellCommand)
  .withMaxIterations(50)
  .withCompaction({
    triggerThresholdPercent: 70,
    preserveRecentTurns: 15,
  })
  .withHooks(HookPresets.monitoring());

for await (const event of agent.ask('Refactor the auth module').run()) {
  // Handle events
}
Choose a model by use case:

| Use Case | Recommended Model | Why |
| --- | --- | --- |
| Quick Q&A | `haiku` | Fast, cheap |
| Code generation | `sonnet` | Good balance |
| Complex reasoning | `opus` | Best quality |
| Bulk processing | `flash` | Cost-effective |
| Simple extraction | `gpt-5-nano` | Affordable |
Select a model dynamically based on the task:

const selectModel = (task: string) => {
  if (task.includes('simple')) return 'haiku';
  if (task.includes('code')) return 'sonnet';
  return 'gpt-5-nano';
};

const agent = LLMist.createAgent()
  .withModel(selectModel(userTask))
  .ask(userTask);

Stop on first error:

.withStopOnGadgetError(true)

Continue despite errors:

.withStopOnGadgetError(false)
.withErrorHandler((ctx) => {
  logger.warn(`Gadget ${ctx.gadgetName} failed:`, ctx.error);
  return true; // Continue
})

Stop only on critical errors:

.withErrorHandler((ctx) => {
  if (ctx.gadgetName === 'DatabaseWrite') {
    return false; // Stop on critical errors
  }
  return true; // Continue on non-critical errors
})

Retry on rate limits and transient errors:

.withRetry({
  retries: 5,
  minTimeout: 2000,
  maxTimeout: 60000,
  onRetry: (error, attempt) => {
    console.log(`Retry ${attempt}: ${error.message}`);
  },
})

// Or disable retry entirely
.withoutRetry()
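As a sanity check on these numbers, the delay schedule such a config typically produces can be sketched as a pure function. This is an illustration only: doubling from `minTimeout` and clamping at `maxTimeout` is an assumption, not llmist's documented backoff curve.

```typescript
// Illustrative backoff schedule: delay doubles per attempt, clamped at
// maxTimeout. (Assumption for illustration; the library may add jitter
// or use a different growth factor.)
function backoffDelays(retries: number, minTimeout: number, maxTimeout: number): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < retries; attempt++) {
    delays.push(Math.min(minTimeout * 2 ** attempt, maxTimeout));
  }
  return delays;
}

// backoffDelays(5, 2000, 60000) → [2000, 4000, 8000, 16000, 32000]
```

With the settings above, five retries never reach the 60-second cap, which is why a generous `maxTimeout` is cheap insurance.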

Continue from previous conversation:

const answer = await LLMist.createAgent()
  .withModel('sonnet')
  .withHistory([
    { user: 'My name is Alice' },
    { assistant: 'Nice to meet you, Alice!' },
  ])
  .askAndCollect('What is my name?');
// "Your name is Alice"
Or build the history up across turns. Push each turn into the history only after the exchange completes, so the question is not duplicated in both the history and the prompt:

const conversation = [];
const agent = LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(FloppyDisk);

// First turn
const response1 = await agent
  .withHistory(conversation)
  .askAndCollect('How many floppies for a 10MB file?');
conversation.push({ user: 'How many floppies for a 10MB file?' });
conversation.push({ assistant: response1 });

// Second turn
const response2 = await agent
  .withHistory(conversation)
  .askAndCollect('What about a 50MB file?');
conversation.push({ user: 'What about a 50MB file?' });
conversation.push({ assistant: response2 });
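That bookkeeping can be wrapped in a small helper. The `{ user }` / `{ assistant }` turn shape mirrors the `withHistory()` examples in this guide; the helper itself is plain TypeScript.

```typescript
// Turn shape as passed to withHistory() in the examples above.
type Turn = { user: string } | { assistant: string };

// Record one completed exchange so the next withHistory() call sees it.
function recordExchange(history: Turn[], user: string, assistant: string): Turn[] {
  history.push({ user });
  history.push({ assistant });
  return history;
}
```

Call `recordExchange(conversation, question, response)` after each completed `askAndCollect()`.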

Add ephemeral context to each request:

// Static reminder
.withTrailingMessage('Always respond in JSON format.')

// Dynamic context
.withTrailingMessage((ctx) =>
  `[Iteration ${ctx.iteration}/${ctx.maxIterations}]`
)

// Inject current state
let status = 'pending';
.withTrailingMessage(() => `Current status: ${status}`)
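Since dynamic trailing messages are ordinary functions, they can be unit-tested without running an agent. The `TrailingCtx` shape below is an assumption inferred from the fields used above (`iteration`, `maxIterations`), not the library's exported type.

```typescript
// Assumed context shape, based on the fields referenced in this guide.
interface TrailingCtx { iteration: number; maxIterations: number }

// Same formatter as the dynamic-context example above.
const progress = (ctx: TrailingCtx): string =>
  `[Iteration ${ctx.iteration}/${ctx.maxIterations}]`;

// A closure over mutable state, as in the "inject current state" example.
let status = 'pending';
const statusLine = (): string => `Current status: ${status}`;
```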

Full visibility for debugging:

const devAgent = LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(MyGadgets)
  .withHooks(HookPresets.monitoring({ verbose: true }))
  .withLogger(createLogger({ minLevel: 'debug' }));

Minimal overhead, error tracking:

const prodAgent = LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(MyGadgets)
  .withHooks(HookPresets.merge(
    HookPresets.errorLogging(),
    HookPresets.tokenTracking(),
  ))
  .withRetry({ retries: 3 })
  .withCompaction({ enabled: true });

Minimize API costs:

const cheapAgent = LLMist.createAgent()
  .withModel('haiku') // Cheapest model
  .withMaxIterations(5) // Limit iterations
  .withCompaction({
    strategy: 'sliding-window', // No summarization cost
    triggerThresholdPercent: 60,
  })
  .withHooks(HookPresets.tokenTracking()); // Monitor costs

Maximum reliability for critical tasks:

const reliableAgent = LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(MyGadgets)
  .withRetry({
    retries: 5,
    minTimeout: 5000,
    maxTimeout: 120000,
  })
  .withDefaultGadgetTimeout(60000)
  .withErrorHandler((ctx) => {
    alertOps(`Gadget error: ${ctx.gadgetName}`, ctx.error);
    return ctx.errorType !== 'timeout';
  });
A coding agent tuned for deterministic output:

const codeAgent = LLMist.createAgent()
  .withModel('sonnet')
  .withSystem(`You are an expert programmer.
- Write clean, tested code
- Follow best practices
- Explain your reasoning`)
  .withGadgets(ReadFile, WriteFile, RunTests, ShellCommand)
  .withMaxIterations(20)
  .withTemperature(0.3); // More deterministic
A research agent that needs strong reasoning:

const researchAgent = LLMist.createAgent()
  .withModel('opus') // Best reasoning
  .withSystem(`You are a research assistant.
- Verify information from multiple sources
- Cite your sources
- Distinguish facts from opinions`)
  .withGadgets(WebSearch, ReadURL, TakeNotes)
  .withMaxIterations(30)
  .withCompaction({
    strategy: 'summarization', // Preserve research context
    preserveRecentTurns: 20,
  });
A bulk data-processing agent:

const dataAgent = LLMist.createAgent()
  .withModel('flash') // Fast and cheap
  .withSystem('Process data accurately. Report errors clearly.')
  .withGadgets(ReadCSV, WriteCSV, Transform)
  .withMaxIterations(100)
  .withStopOnGadgetError(false) // Continue on individual errors
  .withCompaction({
    strategy: 'sliding-window',
    preserveRecentTurns: 5,
  });
An interactive agent that can ask the user questions:

const interactiveAgent = LLMist.createAgent()
  .withModel('sonnet')
  .withGadgets(AskUser, TellUser, FloppyDisk)
  .onHumanInput(async (question) => {
    return await showPrompt(question);
  })
  .withTextOnlyHandler('wait_for_input');

Change gadget markers if needed:

.withGadgetStartPrefix('<<TOOL_START>>')
.withGadgetEndPrefix('<<TOOL_END>>')
.withGadgetArgPrefix('<<PARAM>>')

Or in CLI config:

[agent]
gadget-start-prefix = "<<TOOL_START>>"
gadget-end-prefix = "<<TOOL_END>>"
gadget-arg-prefix = "<<PARAM>>"
Handle events with callbacks:

await agent.askWith('Process this task', {
  onText: (text) => updateUI(text),
  onGadgetCall: (call) => showSpinner(call.gadgetName),
  onGadgetResult: (result) => hideSpinner(),
  onError: (error) => showError(error),
});
Or consume the event stream directly:

for await (const event of agent.ask('Task').run()) {
  switch (event.type) {
    case 'text':
      appendText(event.content);
      break;
    case 'gadget_call':
      logGadgetCall(event);
      break;
    case 'gadget_result':
      logGadgetResult(event);
      break;
    case 'compaction':
      logCompaction(event);
      break;
    case 'iteration_complete':
      updateProgress(event.iteration);
      break;
  }
}
Configure the agent from environment variables:

const config = {
  model: process.env.LLM_MODEL || 'sonnet',
  maxIterations: parseInt(process.env.MAX_ITERATIONS || '10', 10),
  logLevel: process.env.LOG_LEVEL || 'warn',
};

const agent = LLMist.createAgent()
  .withModel(config.model)
  .withMaxIterations(config.maxIterations)
  .withLogger(createLogger({ minLevel: config.logLevel }))
  .withHooks(
    process.env.NODE_ENV === 'production'
      ? HookPresets.errorLogging()
      : HookPresets.monitoring({ verbose: true })
  );