# Providers

llmist supports multiple LLM providers out of the box.

| Provider | Env Variable | Prefix | Reasoning | Pricing |
| --- | --- | --- | --- | --- |
| OpenAI | `OPENAI_API_KEY` | `openai:` | `reasoning.effort` | Paid |
| Anthropic | `ANTHROPIC_API_KEY` | `anthropic:` | ✓ Extended thinking | Paid |
| Google Gemini | `GEMINI_API_KEY` | `gemini:` | ✓ Thinking config | Paid |
| HuggingFace | `HF_TOKEN` | `huggingface:` or `hf:` | | Free |

llmist automatically discovers providers based on environment variables:

```sh
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."
export HF_TOKEN="hf_..."
```
```typescript
const client = new LLMist();

client.withModel('gpt-5');                           // OpenAI (auto-detected)
client.withModel('claude-sonnet-4-5');               // Anthropic (auto-detected)
client.withModel('gemini-2.5-flash');                // Gemini (auto-detected)
client.withModel('meta-llama/Llama-3.1-8B-Instruct'); // HuggingFace (auto-detected)
```

To pin a model to a specific provider, use the `provider:model` format:

```typescript
.withModel('openai:gpt-5')
.withModel('anthropic:claude-sonnet-4-5-20250929')
.withModel('gemini:gemini-2.5-flash')
.withModel('huggingface:deepseek-ai/DeepSeek-V3.2')
.withModel('hf:Qwen/Qwen2.5-72B-Instruct:fastest') // With routing
```
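As an illustration of how a `provider:model` string might be resolved, it can be split on the first colon, with unprefixed names left for auto-detection. This is a sketch, not llmist's actual implementation; `parseModelId` and `KNOWN_PREFIXES` are hypothetical names:

```typescript
// Illustrative sketch only -- parseModelId and KNOWN_PREFIXES are hypothetical,
// not part of the llmist API.
const KNOWN_PREFIXES = ['openai', 'anthropic', 'gemini', 'huggingface', 'hf'];

function parseModelId(id: string): { provider?: string; model: string } {
  const colon = id.indexOf(':');
  if (colon !== -1 && KNOWN_PREFIXES.includes(id.slice(0, colon))) {
    // Explicit provider prefix: everything after the first colon is the
    // model name, which may itself contain colons (e.g. ':fastest' routing).
    return { provider: id.slice(0, colon), model: id.slice(colon + 1) };
  }
  // No recognized prefix: leave provider undefined so auto-detection
  // (e.g. matching a 'gpt-*' name to OpenAI) can take over.
  return { model: id };
}

console.log(parseModelId('openai:gpt-5'));
// { provider: 'openai', model: 'gpt-5' }
console.log(parseModelId('hf:Qwen/Qwen2.5-72B-Instruct:fastest'));
// { provider: 'hf', model: 'Qwen/Qwen2.5-72B-Instruct:fastest' }
console.log(parseModelId('claude-sonnet-4-5'));
// { model: 'claude-sonnet-4-5' }
```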
Alternatively, disable auto-discovery and configure adapters explicitly:

```typescript
import { LLMist, OpenAIChatProvider, AnthropicMessagesProvider } from 'llmist';

const client = new LLMist({
  autoDiscoverProviders: false,
  adapters: [
    new OpenAIChatProvider({ apiKey: 'sk-...' }),
    new AnthropicMessagesProvider({ apiKey: 'sk-ant-...' }),
  ],
  defaultProvider: 'openai',
});
```
Custom providers implement the `ProviderAdapter` interface:

```typescript
interface ProviderAdapter {
  readonly providerId: string;
  readonly priority?: number;
  supports(model: ModelDescriptor): boolean;
  stream(options: LLMGenerationOptions, descriptor: ModelDescriptor): LLMStream;
  getModelSpecs?(): ModelSpec[];
  countTokens?(messages: LLMMessage[], descriptor: ModelDescriptor): Promise<number>;
}
```
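A minimal sketch of what a custom adapter could look like. The stand-in types below (`ModelDescriptor`, `LLMGenerationOptions`, `LLMStream`) and the `EchoProvider` class are simplified placeholders for illustration, not llmist's real definitions; only the adapter shape comes from the interface above, trimmed to its required members:

```typescript
// Stand-in types -- simplified placeholders, not llmist's real definitions.
type ModelDescriptor = { provider?: string; model: string };
type LLMGenerationOptions = { prompt: string };
type LLMStream = AsyncIterable<string>;

interface ProviderAdapter {
  readonly providerId: string;
  readonly priority?: number;
  supports(model: ModelDescriptor): boolean;
  stream(options: LLMGenerationOptions, descriptor: ModelDescriptor): LLMStream;
}

// A toy adapter that "generates" by echoing the prompt back word by word.
class EchoProvider implements ProviderAdapter {
  readonly providerId = 'echo';
  readonly priority = 0;

  supports(model: ModelDescriptor): boolean {
    // Claim only models addressed with the echo: prefix.
    return model.provider === 'echo';
  }

  async *stream(options: LLMGenerationOptions, _descriptor: ModelDescriptor): LLMStream {
    for (const word of options.prompt.split(' ')) {
      yield word;
    }
  }
}
```

Registering such an adapter would then follow the explicit-configuration pattern shown earlier, passing an instance in the `adapters` array.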