LLMCallInfo

Defined in: gadgets/types.ts:264

Information about an LLM call within a subagent. Used by parent agents to display real-time progress of subagent LLM calls.

This interface provides full context about subagent LLM calls, enabling first-class display with the same metrics as top-level agents (cached tokens, cost, etc.).
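For reference, the overall shape described by the fields below can be sketched in TypeScript. This is a reconstruction from the documented fields, not the actual definition in gadgets/types.ts:

```typescript
// Reconstructed sketch of LLMCallInfo, assembled from the documented
// fields below (not copied from gadgets/types.ts).
interface LLMCallInfo {
  /** Iteration number within the subagent loop */
  iteration: number;
  /** Model identifier, e.g. "sonnet" or "gpt-4o" */
  model: string;
  /** Flat token counts kept for backward compatibility */
  inputTokens?: number;
  outputTokens?: number;
  /** Reason the LLM stopped generating, e.g. "stop" or "tool_use" */
  finishReason?: string;
  /** Elapsed time in milliseconds */
  elapsedMs?: number;
  /** Full token usage including cached token counts */
  usage?: {
    inputTokens: number;
    outputTokens: number;
    totalTokens: number;
    cachedInputTokens?: number;
    cacheCreationInputTokens?: number;
  };
  /** Cost of this call in USD */
  cost?: number;
}

// A sample value a subagent might emit (illustrative numbers):
const sample: LLMCallInfo = {
  iteration: 1,
  model: "gpt-4o",
  finishReason: "stop",
  elapsedMs: 2100,
  usage: { inputTokens: 4000, outputTokens: 500, totalTokens: 4500, cachedInputTokens: 3000 },
  cost: 0.015,
};
```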

optional cost: number

Defined in: gadgets/types.ts:296

Cost of this LLM call in USD. Calculated by the subagent if it has access to the model registry.


optional elapsedMs: number

Defined in: gadgets/types.ts:276

Elapsed time in milliseconds


optional finishReason: string

Defined in: gadgets/types.ts:274

Reason the LLM stopped generating (e.g., "stop", "tool_use")


optional inputTokens: number

Defined in: gadgets/types.ts:270

Input tokens sent to the LLM. Kept for backward compatibility; prefer usage.inputTokens.


iteration: number

Defined in: gadgets/types.ts:266

Iteration number within the subagent loop


model: string

Defined in: gadgets/types.ts:268

Model identifier (e.g., "sonnet", "gpt-4o")


optional outputTokens: number

Defined in: gadgets/types.ts:272

Output tokens received from the LLM. Kept for backward compatibility; prefer usage.outputTokens.
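
Because both the flat backward-compat fields and the richer usage object may be present, a consumer can prefer the latter and fall back to the former. A minimal sketch; the helper name and the local re-declaration of the relevant fields are illustrative, not part of the library:

```typescript
// Minimal local re-declaration of just the fields this helper reads
// (illustrative; the real interface lives in gadgets/types.ts).
type CallTokens = {
  inputTokens?: number;
  outputTokens?: number;
  usage?: { inputTokens: number; outputTokens: number };
};

// Prefer the detailed usage object; fall back to the flat
// backward-compat fields when it is absent.
function effectiveTokens(call: CallTokens): { input: number; output: number } {
  return {
    input: call.usage?.inputTokens ?? call.inputTokens ?? 0,
    output: call.usage?.outputTokens ?? call.outputTokens ?? 0,
  };
}

// An older subagent reporting only the flat fields:
const legacy: CallTokens = { inputTokens: 1200, outputTokens: 300 };
// A newer subagent reporting full usage:
const modern: CallTokens = { usage: { inputTokens: 1500, outputTokens: 250 } };
```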


optional usage: object

Defined in: gadgets/types.ts:282

Full token usage including cached token counts. This provides the same level of detail as top-level agent calls.

optional cacheCreationInputTokens: number

Number of input tokens written to cache (subset of inputTokens, Anthropic only)

optional cachedInputTokens: number

Number of input tokens served from cache (subset of inputTokens)

inputTokens: number

outputTokens: number

totalTokens: number
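
Given this usage object, a parent agent can derive cache metrics for display. A sketch with illustrative numbers, assuming the cache-read and cache-creation subsets of inputTokens are disjoint (as with Anthropic's usage reporting):

```typescript
// Illustrative usage object matching the fields documented above.
const usage = {
  inputTokens: 4000, // all input tokens for the call
  outputTokens: 500,
  totalTokens: 4500,
  cachedInputTokens: 3000, // subset of inputTokens served from cache
  cacheCreationInputTokens: 800, // subset written to cache (Anthropic only)
};

// Input tokens neither read from nor written to cache,
// assuming the two cached subsets are disjoint.
const uncachedInput =
  usage.inputTokens -
  (usage.cachedInputTokens ?? 0) -
  (usage.cacheCreationInputTokens ?? 0);

// Fraction of input served from cache, e.g. for a progress display.
const cacheHitRate = (usage.cachedInputTokens ?? 0) / usage.inputTokens;
```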