
Providers

llmist supports multiple LLM providers out of the box.

| Provider | Env Variable | Prefix | Reasoning | Pricing |
| --- | --- | --- | --- | --- |
| OpenAI | `OPENAI_API_KEY` | `openai:` | `reasoning.effort` | Paid |
| Anthropic | `ANTHROPIC_API_KEY` | `anthropic:` | ✓ Extended thinking | Paid |
| Google Gemini | `GEMINI_API_KEY` | `gemini:` | ✓ Thinking config | Paid |
| OpenRouter | `OPENROUTER_API_KEY` | `openrouter:` or `or:` | ✓ (model-dependent) | Paid |
| HuggingFace | `HF_TOKEN` | `huggingface:` or `hf:` | | Free |

llmist automatically discovers providers based on environment variables:

```bash
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."
export HF_TOKEN="hf_..."
```

```typescript
const client = new LLMist()
  .withModel('gpt-5')                             // OpenAI (auto-detected)
  .withModel('claude-sonnet-4-5')                 // Anthropic (auto-detected)
  .withModel('gemini-2.5-flash')                  // Gemini (auto-detected)
  .withModel('meta-llama/Llama-3.1-8B-Instruct'); // HuggingFace (auto-detected)
```
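As a rough mental model, discovery amounts to checking which of the environment variables from the table above are set. The sketch below is illustrative only; llmist's actual discovery logic may differ:

```typescript
// Hypothetical sketch of environment-based provider discovery.
// The env-var-to-provider mapping mirrors the providers table above.
const ENV_TO_PROVIDER: Record<string, string> = {
  OPENAI_API_KEY: 'openai',
  ANTHROPIC_API_KEY: 'anthropic',
  GEMINI_API_KEY: 'gemini',
  OPENROUTER_API_KEY: 'openrouter',
  HF_TOKEN: 'huggingface',
};

function discoverProviders(
  env: Record<string, string | undefined> = process.env,
): string[] {
  // A provider is considered available when its key variable is non-empty.
  return Object.entries(ENV_TO_PROVIDER)
    .filter(([envVar]) => Boolean(env[envVar]))
    .map(([, provider]) => provider);
}
```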

To pin a specific provider, use the `provider:model` format:

```typescript
.withModel('openai:gpt-5')
.withModel('anthropic:claude-sonnet-4-5-20250929')
.withModel('gemini:gemini-2.5-flash')
.withModel('huggingface:deepseek-ai/DeepSeek-V3.2')
.withModel('hf:Qwen/Qwen2.5-72B-Instruct:fastest') // With routing strategy
```
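To make the `provider:model[:strategy]` shape concrete, here is an illustrative parser (not llmist's actual implementation; the set of routing strategies is assumed from the examples in this page):

```typescript
// Assumed routing strategies, based on the examples shown on this page.
const ROUTING_STRATEGIES = new Set(['fastest', 'cheapest']);

interface ParsedModel {
  provider?: string;
  model: string;
  routing?: string;
}

// Illustrative sketch: split a "provider:model[:strategy]" spec into parts.
// Model ids may themselves contain '/' (e.g. "Qwen/Qwen2.5-72B-Instruct").
function parseModelSpec(spec: string): ParsedModel {
  const parts = spec.split(':');
  let routing: string | undefined;
  let provider: string | undefined;
  // A trailing segment that names a known strategy is the routing hint.
  if (parts.length > 1 && ROUTING_STRATEGIES.has(parts[parts.length - 1])) {
    routing = parts.pop();
  }
  // A leading segment before the first ':' is the provider prefix.
  if (parts.length > 1) {
    provider = parts.shift();
  }
  return { provider, model: parts.join(':'), routing };
}
```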
To disable auto-discovery and configure providers explicitly, pass adapters to the constructor:

```typescript
import { LLMist, OpenAIChatProvider, AnthropicMessagesProvider } from 'llmist';

const client = new LLMist({
  autoDiscoverProviders: false,
  adapters: [
    new OpenAIChatProvider({ apiKey: 'sk-...' }),
    new AnthropicMessagesProvider({ apiKey: 'sk-ant-...' }),
  ],
  defaultProvider: 'openai',
});
```

OpenRouterProvider provides access to 400+ models from dozens of providers through a single unified gateway. It supports prompt caching, model routing strategies, and reasoning models.

```typescript
import { LLMist, OpenRouterProvider } from 'llmist';
import OpenAI from 'openai';

const openrouterClient = new OpenAI({
  apiKey: 'sk-or-...',
  baseURL: 'https://openrouter.ai/api/v1',
  defaultHeaders: {
    'HTTP-Referer': 'https://myapp.com', // Optional: for analytics
    'X-Title': 'My App',                 // Optional: for analytics
  },
});

const client = new LLMist({
  autoDiscoverProviders: false,
  adapters: [
    new OpenRouterProvider(openrouterClient, {
      siteUrl: 'https://myapp.com', // Optional
      appName: 'My App',            // Optional
    }),
  ],
});
```

Use the `openrouter:` or `or:` prefix to route to specific models, with an optional routing strategy:

```typescript
.withModel('openrouter:anthropic/claude-sonnet-4-5')
.withModel('or:meta-llama/llama-3.1-70b-instruct:fastest') // Route to fastest provider
.withModel('or:mistralai/mistral-large:cheapest')          // Route to cheapest provider
```
Custom providers implement the `ProviderAdapter` interface:

```typescript
interface ProviderAdapter {
  readonly providerId: string;
  readonly priority?: number;
  supports(model: ModelDescriptor): boolean;
  stream(options: LLMGenerationOptions, descriptor: ModelDescriptor): LLMStream;
  getModelSpecs?(): ModelSpec[];
  countTokens?(messages: LLMMessage[], descriptor: ModelDescriptor): Promise<number>;
}
```
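As an illustration, here is a toy adapter written against simplified stand-in types. The type shapes below (`ModelDescriptor`, `LLMGenerationOptions`, `LLMStream`) are assumptions for the sketch; llmist's real types are richer:

```typescript
// Simplified stand-ins for llmist's types (assumed shapes, illustration only).
interface ModelDescriptor { provider?: string; model: string }
interface LLMGenerationOptions { messages: { role: string; content: string }[] }
type LLMStream = AsyncIterable<string>;

// A toy adapter that "generates" by echoing the last message back.
class EchoProvider {
  readonly providerId = 'echo';
  readonly priority = 0;

  supports(model: ModelDescriptor): boolean {
    // Only claim models explicitly prefixed with our provider id.
    return model.provider === this.providerId;
  }

  async *stream(options: LLMGenerationOptions): LLMStream {
    const last = options.messages[options.messages.length - 1];
    yield last?.content ?? '';
  }
}
```

The optional members (`priority`, `getModelSpecs`, `countTokens`) let an adapter influence routing order and report model metadata; a minimal adapter only needs `providerId`, `supports`, and `stream`.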