
Anthropic Provider

Set your Anthropic API key:

export ANTHROPIC_API_KEY=sk-ant-...

llmist will automatically discover and use Anthropic.
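
Discovery is driven by that environment variable, so a quick guard makes a missing key fail early with a clear message. A minimal sketch; the check itself is plain Node.js code, not an llmist API:

// Assumption: auto-discovery keys off ANTHROPIC_API_KEY, as described above.
if (!process.env.ANTHROPIC_API_KEY) {
  throw new Error('ANTHROPIC_API_KEY is not set; Anthropic will not be auto-discovered.');
}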

Model               Alias    Best For
claude-opus-4-5     opus     Complex reasoning, creative tasks
claude-sonnet-4-5   sonnet   Balanced performance (recommended)
claude-haiku-4-5    haiku    Fast, cost-effective tasks
Basic usage:

import { LLMist } from 'llmist';

const answer = await LLMist.createAgent()
  .withModel('sonnet')
  .askAndCollect('Write a haiku about TypeScript');

Claude models support image input:

import { LLMist, imageFromBuffer } from 'llmist';
import { readFileSync } from 'fs';

const imageBuffer = readFileSync('diagram.png');

const answer = await LLMist.createAgent()
  .withModel('sonnet')
  .askWithImage(
    'Describe this diagram',
    imageFromBuffer(imageBuffer, 'image/png')
  )
  .askAndCollect();

Supported image formats: JPEG, PNG, GIF, WebP
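
When the image path is not known ahead of time, it can help to derive the MIME type from the file extension before handing the buffer to imageFromBuffer. A minimal sketch; the loadImage helper and the extension map are not part of llmist, and the chained calls follow the example above:

import { LLMist, imageFromBuffer } from 'llmist';
import { readFileSync } from 'fs';
import { extname } from 'path';

// Hypothetical helper: map a file extension to one of the MIME types listed above.
const MIME_BY_EXT: Record<string, string> = {
  '.jpg': 'image/jpeg',
  '.jpeg': 'image/jpeg',
  '.png': 'image/png',
  '.gif': 'image/gif',
  '.webp': 'image/webp',
};

function loadImage(path: string) {
  const mime = MIME_BY_EXT[extname(path).toLowerCase()];
  if (!mime) throw new Error(`Unsupported image format: ${path}`);
  return imageFromBuffer(readFileSync(path), mime);
}

const answer = await LLMist.createAgent()
  .withModel('sonnet')
  .askWithImage('What does this chart show?', loadImage('chart.webp'))
  .askAndCollect();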

Claude Opus 4.5 (Most Capable)

  • Best for complex reasoning and creative tasks
  • Highest quality outputs
  • Higher latency and cost
  • 200K context window

Claude Sonnet 4.5 (Recommended)

  • Best balance of capability and speed
  • Great for coding, analysis, and general tasks
  • Good cost efficiency
  • 200K context window

Claude Haiku 4.5

  • Fastest response times
  • Lowest cost
  • Good for simple tasks and high-volume use
  • 200K context window
To configure the provider explicitly instead of relying on auto-discovery:

import { LLMist, AnthropicMessagesProvider } from 'llmist';

const client = new LLMist({
  autoDiscoverProviders: false,
  adapters: [
    new AnthropicMessagesProvider({
      apiKey: process.env.ANTHROPIC_API_KEY,
      baseUrl: 'https://api.anthropic.com', // Optional custom endpoint
    }),
  ],
});
  1. Use Sonnet as default - Best balance for most tasks
  2. Reserve Opus for complex reasoning - When quality matters more than speed
  3. Use Haiku for high-volume - Simple tasks, classification, summarization (see the routing sketch after this list)
  4. Leverage long context - Claude handles 200K tokens well
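
As an illustration of these guidelines, the sketch below routes a request to a model alias based on a rough task label. The pickModel helper and the task categories are hypothetical; only the aliases and the withModel / askAndCollect calls come from the examples above:

import { LLMist } from 'llmist';

// Hypothetical task labels; adjust to your own workload.
type Task = 'simple' | 'general' | 'complex';

function pickModel(task: Task): 'haiku' | 'sonnet' | 'opus' {
  switch (task) {
    case 'simple':
      return 'haiku';   // high-volume, low-cost work
    case 'complex':
      return 'opus';    // deep reasoning, quality over speed
    default:
      return 'sonnet';  // sensible default for most tasks
  }
}

const answer = await LLMist.createAgent()
  .withModel(pickModel('general'))
  .askAndCollect('Summarize this changelog in three bullet points');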

Token usage and estimated cost are reported on llm_call_complete events as the agent runs:

for await (const event of agent.run()) {
  if (event.type === 'llm_call_complete') {
    console.log('Input tokens:', event.usage?.promptTokens);
    console.log('Output tokens:', event.usage?.completionTokens);
    console.log('Estimated cost:', event.cost);
  }
}
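
To get run-level totals rather than per-call numbers, the same event fields can be summed. A minimal sketch that relies only on the event shape shown above; pass in whatever agent your LLMist.createAgent() chain produced:

// Sums usage across a run. The structural type only assumes run() yields events
// with the fields used in the example above.
async function runWithTotals(agent: { run(): AsyncIterable<any> }) {
  let inputTokens = 0;
  let outputTokens = 0;
  let cost = 0;

  for await (const event of agent.run()) {
    if (event.type === 'llm_call_complete') {
      inputTokens += event.usage?.promptTokens ?? 0;
      outputTokens += event.usage?.completionTokens ?? 0;
      cost += event.cost ?? 0;
    }
  }

  console.log(`Totals: ${inputTokens} input / ${outputTokens} output tokens, estimated cost ${cost}`);
}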