HookPresets

Defined in: agent/hook-presets.ts:162

Common hook presets.

new HookPresets(): HookPresets

Returns: HookPresets

static compactionTracking(): AgentHooks

Defined in: agent/hook-presets.ts:730

Tracks context compaction events.

Output:

  • Compaction events with 🗜️ emoji
  • Strategy name, tokens before/after, and savings
  • Cumulative statistics

Use cases:

  • Monitoring long-running conversations
  • Understanding when and how compaction occurs
  • Debugging context management issues

Performance: Minimal overhead. Simple console output.

Returns: AgentHooks

Hook configuration that can be passed to .withHooks()

await LLMist.createAgent()
  .withHooks(HookPresets.compactionTracking())
  .ask("Your prompt");

static errorLogging(): AgentHooks

Defined in: agent/hook-presets.ts:687

Logs detailed error information for debugging and troubleshooting.

Output:

  • LLM errors with ❌ emoji, including model and recovery status
  • Gadget errors with full context (parameters, error message)
  • Separate logging for LLM and gadget failures

Use cases:

  • Troubleshooting production issues
  • Understanding error patterns and frequency
  • Debugging error recovery behavior
  • Collecting error metrics for monitoring

Performance: Minimal overhead. Only logs when errors occur.

Returns: AgentHooks

Hook configuration that can be passed to .withHooks()

// Basic error logging
await LLMist.createAgent()
  .withHooks(HookPresets.errorLogging())
  .withGadgets(Database)
  .ask("Fetch user data");
// Output (on LLM error): ❌ LLM Error (iteration 1): Rate limit exceeded
//   Model: gpt-5-nano
//   Recovered: true
// Output (on gadget error): ❌ Gadget Error: Database
//   Error: Connection timeout
//   Parameters: {...}

// Combine with monitoring for full context
.withHooks(HookPresets.merge(
  HookPresets.monitoring(), // Includes errorLogging
  customErrorAnalytics
))

// Error analytics collection
const errors: any[] = [];
.withHooks(HookPresets.merge(
  HookPresets.errorLogging(),
  {
    observers: {
      onLLMCallError: async (ctx) => {
        errors.push({ type: 'llm', error: ctx.error, recovered: ctx.recovered });
      },
    },
  }
))


static logging(options): AgentHooks

Defined in: agent/hook-presets.ts:218

Logs LLM calls and gadget execution to console with optional verbosity.

Output (basic mode):

  • LLM call start/complete events with iteration numbers
  • Gadget execution start/complete with gadget names
  • Token counts when available

Output (verbose mode):

  • All basic mode output
  • Full gadget parameters (formatted JSON)
  • Full gadget results
  • Complete LLM response text

Use cases:

  • Basic development debugging and execution flow visibility
  • Understanding agent decision-making and tool usage
  • Troubleshooting gadget invocations

Performance: Minimal overhead. Console writes are synchronous but fast.

Parameter: options: LoggingOptions = {}

Logging options

Returns: AgentHooks

Hook configuration that can be passed to .withHooks()

// Basic logging
await LLMist.createAgent()
  .withHooks(HookPresets.logging())
  .ask("Calculate 15 * 23");
// Output: [LLM] Starting call (iteration 0)
//         [GADGET] Executing Calculator
//         [GADGET] Completed Calculator
//         [LLM] Completed (tokens: 245)

// Verbose logging with full details
await LLMist.createAgent()
  .withHooks(HookPresets.logging({ verbose: true }))
  .ask("Calculate 15 * 23");
// Output includes: parameters, results, and full responses

// Environment-based verbosity
const isDev = process.env.NODE_ENV === 'development';
.withHooks(HookPresets.logging({ verbose: isDev }))


static merge(…hookSets): AgentHooks

Defined in: agent/hook-presets.ts:878

Combines multiple hook configurations into one.

Merge allows you to compose preset and custom hooks for modular monitoring configurations. Understanding merge behavior is crucial for proper composition.

Merge behavior:

  • Observers: Composed - all handlers run sequentially in order
  • Interceptors: Last one wins - only the last interceptor applies
  • Controllers: Last one wins - only the last controller applies

Why interceptors/controllers don’t compose:

  • Interceptors have different signatures per method, making composition impractical
  • Controllers return specific actions that can’t be meaningfully combined
  • Only observers support composition because they’re read-only and independent
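The composition rules above can be sketched in plain TypeScript. This is an illustrative model only, not llmist's actual implementation, and the Hooks/Observer types here are simplified stand-ins for the real AgentHooks interface:

```typescript
// Simplified stand-ins for the real AgentHooks shape (illustrative only).
type Observer = (ctx: unknown) => void | Promise<void>;
type Hooks = {
  observers?: Record<string, Observer>;
  interceptors?: Record<string, (chunk: string) => string>;
};

function mergeHooks(...hookSets: Hooks[]): Hooks {
  const collected: Record<string, Observer[]> = {};
  let interceptors: Hooks["interceptors"];

  for (const set of hookSets) {
    // Observers compose: collect every handler under its event name.
    for (const [event, fn] of Object.entries(set.observers ?? {})) {
      (collected[event] ??= []).push(fn);
    }
    // Interceptors don't compose: the last set that defines them wins.
    if (set.interceptors) interceptors = set.interceptors;
  }

  // Wrap each event's handlers so they run sequentially, in merge order.
  const observers: Record<string, Observer> = {};
  for (const [event, fns] of Object.entries(collected)) {
    observers[event] = async (ctx) => {
      for (const fn of fns) await fn(ctx);
    };
  }
  return { observers, interceptors };
}
```

Under this model, merging two sets that both define the same interceptor keeps only the second, which is the "last wins" behavior described above.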

Use cases:

  • Combining multiple presets (logging + timing + tokens)
  • Adding custom hooks to presets
  • Building modular, reusable monitoring configurations
  • Environment-specific hook composition

Performance: Minimal overhead for merging. Runtime performance depends on merged hooks.

Parameter: ...hookSets: AgentHooks[]

Variable number of hook configurations to merge

Returns: AgentHooks

Single merged hook configuration with composed/overridden handlers

// Combine multiple presets
.withHooks(HookPresets.merge(
  HookPresets.logging(),
  HookPresets.timing(),
  HookPresets.tokenTracking()
))
// All observers from all three presets will run

// Add custom observer to preset (both run)
.withHooks(HookPresets.merge(
  HookPresets.timing(),
  {
    observers: {
      onLLMCallComplete: async (ctx) => {
        await saveMetrics({ tokens: ctx.usage?.totalTokens });
      },
    },
  }
))

// Multiple interceptors (last wins!)
.withHooks(HookPresets.merge(
  {
    interceptors: {
      interceptTextChunk: (chunk) => chunk.toUpperCase(), // Ignored
    },
  },
  {
    interceptors: {
      interceptTextChunk: (chunk) => chunk.toLowerCase(), // This wins
    },
  }
))
// Result: text will be lowercase

// Modular environment-based configuration
const baseHooks = HookPresets.errorLogging();
const devHooks = HookPresets.merge(baseHooks, HookPresets.monitoring({ verbose: true }));
const prodHooks = HookPresets.merge(baseHooks, HookPresets.tokenTracking());
const hooks = process.env.NODE_ENV === 'production' ? prodHooks : devHooks;
.withHooks(hooks)


static monitoring(options): AgentHooks

Defined in: agent/hook-presets.ts:977

Composite preset combining logging, timing, tokenTracking, and errorLogging.

This is the recommended preset for development and initial production deployments, providing comprehensive observability with a single method call.

Includes:

  • All output from logging() preset (with optional verbosity)
  • All output from timing() preset (execution times)
  • All output from tokenTracking() preset (token usage)
  • All output from errorLogging() preset (error details)

Output format:

  • Event logging: [LLM]/[GADGET] messages
  • Timing: ⏱️ emoji with milliseconds
  • Tokens: 📊 emoji with per-call and cumulative counts
  • Errors: ❌ emoji with full error details

Use cases:

  • Full observability during development
  • Comprehensive monitoring in production
  • One-liner for complete agent visibility
  • Troubleshooting and debugging with full context

Performance: Combined overhead of all four presets, but still minimal in practice.

Parameter: options: LoggingOptions = {}

Monitoring options

Returns: AgentHooks

Merged hook configuration combining all monitoring presets

// Basic monitoring (recommended for development)
await LLMist.createAgent()
  .withHooks(HookPresets.monitoring())
  .withGadgets(Calculator, Weather)
  .ask("What is 15 times 23, and what's the weather in NYC?");
// Output: All events, timing, tokens, and errors in one place

// Verbose monitoring with full details
await LLMist.createAgent()
  .withHooks(HookPresets.monitoring({ verbose: true }))
  .ask("Your prompt");
// Output includes: parameters, results, and complete responses

// Environment-based monitoring
const isDev = process.env.NODE_ENV === 'development';
.withHooks(HookPresets.monitoring({ verbose: isDev }))


static progressTracking(options?): AgentHooks

Defined in: agent/hook-presets.ts:528

Tracks comprehensive progress metrics including iterations, tokens, cost, and timing.

This preset showcases llmist’s core capabilities by demonstrating:

  • Observer pattern for non-intrusive monitoring
  • Integration with ModelRegistry for cost estimation
  • Callback-based architecture for flexible UI updates
  • Provider-agnostic token and cost tracking

Unlike tokenTracking() which only logs to console, this preset provides structured data through callbacks, making it perfect for building custom UIs, dashboards, or progress indicators (like the llmist CLI).

Output (when logProgress: true):

  • Iteration number and call count
  • Cumulative token usage (input + output)
  • Cumulative cost in USD (requires modelRegistry)
  • Elapsed time in seconds

Use cases:

  • Building CLI progress indicators with live updates
  • Creating web dashboards with real-time metrics
  • Budget monitoring and cost alerts
  • Performance tracking and optimization
  • Custom logging to external systems (Datadog, CloudWatch, etc.)

Performance: Minimal overhead. Uses Date.now() for timing and optional ModelRegistry.estimateCost() which is O(1) lookup. Callback invocation is synchronous and fast.

Parameter: options?: ProgressTrackingOptions

Progress tracking options

Returns: AgentHooks

Hook configuration with progress tracking observers

// Basic usage with callback (RECOMMENDED - used by llmist CLI)
import { LLMist, HookPresets } from 'llmist';

const client = LLMist.create();
await client.agent()
  .withHooks(HookPresets.progressTracking({
    modelRegistry: client.modelRegistry,
    onProgress: (stats) => {
      // Update your UI with stats
      console.log(`#${stats.currentIteration} | ${stats.totalTokens} tokens | $${stats.totalCost.toFixed(4)}`);
    }
  }))
  .withGadgets(Calculator)
  .ask("Calculate 15 * 23");
// Output: #1 | 245 tokens | $0.0012

// Console logging mode (quick debugging)
await client.agent()
  .withHooks(HookPresets.progressTracking({
    modelRegistry: client.modelRegistry,
    logProgress: true // Simple console output
  }))
  .ask("Your prompt");
// Output: 📊 Progress: Iteration #1 | 245 tokens | $0.0012 | 1.2s

// Budget monitoring with alerts
const BUDGET_USD = 0.10;
await client.agent()
  .withHooks(HookPresets.progressTracking({
    modelRegistry: client.modelRegistry,
    onProgress: (stats) => {
      if (stats.totalCost > BUDGET_USD) {
        throw new Error(`Budget exceeded: $${stats.totalCost.toFixed(4)}`);
      }
    }
  }))
  .ask("Long running task...");

// Web dashboard integration
let progressBar: HTMLElement;
await client.agent()
  .withHooks(HookPresets.progressTracking({
    modelRegistry: client.modelRegistry,
    onProgress: (stats) => {
      // Update web UI in real-time
      progressBar.textContent = `Iteration ${stats.currentIteration}`;
      progressBar.dataset.cost = stats.totalCost.toFixed(4);
      progressBar.dataset.tokens = stats.totalTokens.toString();
    }
  }))
  .ask("Your prompt");

// External logging (Datadog, CloudWatch, etc.)
await client.agent()
  .withHooks(HookPresets.progressTracking({
    modelRegistry: client.modelRegistry,
    onProgress: async (stats) => {
      await metrics.gauge('llm.iteration', stats.currentIteration);
      await metrics.gauge('llm.cost', stats.totalCost);
      await metrics.gauge('llm.tokens', stats.totalTokens);
    }
  }))
  .ask("Your prompt");

See also:

  • ProgressTrackingOptions for detailed options
  • ProgressStats for the callback data structure

static silent(): AgentHooks

Defined in: agent/hook-presets.ts:790

Returns empty hook configuration for clean output without any logging.

Output:

  • None. Returns {} (empty object).

Use cases:

  • Clean test output without console noise
  • Production environments where logging is handled externally
  • Baseline for custom hook development
  • Temporary disable of all hook output

Performance: Zero overhead. No-op hook configuration.
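Because silent() simply returns {}, it also works as a starting point when developing custom hooks. A minimal sketch, using a hypothetical pared-down Hooks type rather than the real AgentHooks interface:

```typescript
// Hypothetical minimal shape; the real AgentHooks interface is richer.
type Hooks = { observers?: Record<string, (ctx: unknown) => void> };

// Mirrors HookPresets.silent(): an empty, zero-overhead configuration.
const silent = (): Hooks => ({});

// Grow a custom hook set from the empty baseline.
const custom: Hooks = {
  ...silent(),
  observers: {
    onLLMCallComplete: () => {
      // forward to your own logger / metrics sink here
    },
  },
};
```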

Returns: AgentHooks

Empty hook configuration

// Clean test output
describe('Agent tests', () => {
  it('should calculate correctly', async () => {
    const result = await LLMist.createAgent()
      .withHooks(HookPresets.silent()) // No console output
      .withGadgets(Calculator)
      .askAndCollect("What is 15 times 23?");
    expect(result).toContain("345");
  });
});

// Conditional silence based on environment
const isTesting = process.env.NODE_ENV === 'test';
.withHooks(isTesting ? HookPresets.silent() : HookPresets.monitoring())


static timing(): AgentHooks

Defined in: agent/hook-presets.ts:298

Measures and logs execution time for LLM calls and gadgets.

Output:

  • Duration in milliseconds with ⏱️ emoji for each operation
  • Separate timing for each LLM iteration
  • Separate timing for each gadget execution

Use cases:

  • Performance profiling and optimization
  • Identifying slow operations (LLM calls vs gadget execution)
  • Monitoring response times in production
  • Capacity planning and SLA tracking

Performance: Negligible overhead. Uses Date.now() for timing measurements.

Returns: AgentHooks

Hook configuration that can be passed to .withHooks()

// Basic timing
await LLMist.createAgent()
  .withHooks(HookPresets.timing())
  .withGadgets(Weather, Database)
  .ask("What's the weather in NYC?");
// Output: ⏱️ LLM call took 1234ms
//         ⏱️ Gadget Weather took 567ms
//         ⏱️ LLM call took 890ms

// Combined with logging for full context
.withHooks(HookPresets.merge(
  HookPresets.logging(),
  HookPresets.timing()
))

// Correlate performance with cost
.withHooks(HookPresets.merge(
  HookPresets.timing(),
  HookPresets.tokenTracking()
))


static tokenTracking(): AgentHooks

Defined in: agent/hook-presets.ts:388

Tracks cumulative token usage across all LLM calls.

Output:

  • Per-call token count with 📊 emoji
  • Cumulative total across all calls
  • Call count for average calculations

Use cases:

  • Cost monitoring and budget tracking
  • Optimizing prompts to reduce token usage
  • Comparing token efficiency across different approaches
  • Real-time cost estimation

Performance: Minimal overhead. Simple counter increments.

Note: Token counts depend on the provider’s response. Some providers may not include usage data, in which case counts won’t be logged.
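A custom observer can apply the same guard before accumulating. This sketch assumes a hypothetical context shape with an optional usage field, mirroring the ctx.usage access shown in the examples; the real llmist context type may differ:

```typescript
// Hypothetical context shape; the real llmist context type may differ.
type CallContext = { usage?: { totalTokens: number } };

type Totals = { tokens: number; counted: number; skipped: number };

// Accumulates token counts, skipping calls where the provider
// returned no usage data instead of logging a misleading zero.
function makeUsageObserver(totals: Totals) {
  return (ctx: CallContext): void => {
    if (!ctx.usage) {
      totals.skipped += 1;
      return;
    }
    totals.tokens += ctx.usage.totalTokens;
    totals.counted += 1;
  };
}
```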

Returns: AgentHooks

Hook configuration that can be passed to .withHooks()

// Basic token tracking
await LLMist.createAgent()
  .withHooks(HookPresets.tokenTracking())
  .ask("Summarize this document...");
// Output: 📊 Tokens this call: 1,234
//         📊 Total tokens: 1,234 (across 1 calls)
//         📊 Tokens this call: 567
//         📊 Total tokens: 1,801 (across 2 calls)

// Cost calculation with custom hook
let totalTokens = 0;
.withHooks(HookPresets.merge(
  HookPresets.tokenTracking(),
  {
    observers: {
      onLLMCallComplete: async (ctx) => {
        totalTokens += ctx.usage?.totalTokens ?? 0;
        const cost = (totalTokens / 1_000_000) * 3.0; // $3 per 1M tokens
        console.log(`💰 Estimated cost: $${cost.toFixed(4)}`);
      },
    },
  }
))
