# Gemini Provider

Set your Gemini API key:

```shell
export GEMINI_API_KEY=...
```

llmist will automatically discover and use Gemini.
## Available Models

### Text Models

| Model | Alias | Best For |
|---|---|---|
| gemini-2.5-flash | flash | Fast, cost-effective (recommended) |
| gemini-3-pro-preview | pro | Complex reasoning |
| gemini-2.0-flash-thinking | - | Step-by-step reasoning |
### Image Models

| Model | Description |
|---|---|
| imagen-3 | High-quality image generation |
## Usage Examples

Basic question:

```typescript
import { LLMist } from 'llmist';

const answer = await LLMist.createAgent()
  .withModel('flash')
  .askAndCollect('What is the speed of light?');
```

Creative writing with a higher temperature:

```typescript
import { LLMist } from 'llmist';

const answer = await LLMist.createAgent()
  .withModel('flash')
  .withTemperature(0.9) // More creative
  .askAndCollect('Write a creative story about AI');
```

Image generation:

```typescript
import { LLMist } from 'llmist';

const client = new LLMist();

const result = await client.image.generate({
  prompt: 'A futuristic city at night',
  model: 'imagen-3',
});

console.log(result.url);
```

## Vision (Image Input)
Gemini models have excellent vision capabilities:

```typescript
import { LLMist, imageFromUrl } from 'llmist';

const answer = await LLMist.createAgent()
  .withModel('flash')
  .askWithImage(
    'What objects are in this image?',
    imageFromUrl('https://example.com/photo.jpg')
  )
  .askAndCollect();
```

Gemini supports multiple images in a single request:

```typescript
const answer = await LLMist.createAgent()
  .withModel('flash')
  .askWithImage(
    'Compare these two images',
    imageFromUrl('https://example.com/image1.jpg'),
    imageFromUrl('https://example.com/image2.jpg')
  )
  .askAndCollect();
```

## Model Characteristics
### Gemini Flash (Fast & Cheap)

- Extremely fast responses
- Very cost-effective
- 1M token context window
- Great for high-volume tasks
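For the high-volume case, requests are often sent in fixed-size waves rather than all at once. The `chunk` helper below is a generic batching sketch; the commented llmist usage underneath assumes only the fluent API shown elsewhere on this page.

```typescript
// Generic batching helper: split items into fixed-size groups so a
// high-volume workload can be sent in controlled waves.
function chunk<T>(items: T[], size: number): T[][] {
  const groups: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    groups.push(items.slice(i, i + size));
  }
  return groups;
}

// Hypothetical usage with the fluent API shown above (not run here):
// for (const group of chunk(prompts, 10)) {
//   await Promise.all(group.map((p) =>
//     LLMist.createAgent().withModel('flash').askAndCollect(p)
//   ));
// }
```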
### Gemini Pro (Most Capable)

- Best reasoning capabilities
- Higher latency
- 1M token context window
- Best for complex analysis
### Gemini Flash Thinking

- Shows step-by-step reasoning
- Good for math and logic problems
- Outputs thinking process
## Configuration Options

To configure the provider manually, disable auto-discovery and pass an adapter:

```typescript
import { LLMist, GeminiGenerativeProvider } from 'llmist';

const client = new LLMist({
  autoDiscoverProviders: false,
  adapters: [
    new GeminiGenerativeProvider({
      apiKey: process.env.GEMINI_API_KEY,
    }),
  ],
});
```

## Unique Features
### Grounding with Google Search

Gemini can ground responses with real-time Google Search:

```typescript
// Note: Grounding is configured at the model level.
// Check Google AI Studio for grounding options.
```

### Long Context
Gemini has a 1M token context window, which makes it great for:
- Analyzing entire codebases
- Processing long documents
- Multi-document reasoning
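Before shipping an entire codebase or document set in one request, it can be worth sanity-checking that the input fits the window. The sketch below uses the common rough heuristic of about 4 characters per English-text token; it is an estimate, not a real tokenizer.

```typescript
// Rough token estimate: ~4 characters per token for English text.
// This is a heuristic only; use a real tokenizer for exact counts.
const GEMINI_CONTEXT_TOKENS = 1_000_000;

function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

function fitsInContext(text: string, limit = GEMINI_CONTEXT_TOKENS): boolean {
  return estimateTokens(text) <= limit;
}
```

When an input is near the limit, prefer an exact count from a real tokenizer over this heuristic.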
## Cost Tracking

Track token usage and cost per LLM call from the agent's event stream:

```typescript
for await (const event of agent.run()) {
  if (event.type === 'llm_call_complete') {
    console.log('Tokens:', event.usage);
    console.log('Cost:', event.cost);
  }
}
```
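To report a total for the whole run, the same events can be accumulated instead of logged. This sketch assumes only the event shape used above (a `type` string and a numeric `cost`); any further fields are assumptions.

```typescript
// Minimal cost accumulator for llm_call_complete events.
// Only `type` and `cost` are taken from the loop above.
interface CompletionEvent {
  type: string;
  cost?: number;
}

function totalCost(events: CompletionEvent[]): number {
  return events
    .filter((e) => e.type === 'llm_call_complete' && typeof e.cost === 'number')
    .reduce((sum, e) => sum + (e.cost ?? 0), 0);
}
```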
## Best Practices

- **Use Flash for speed** - Fastest and cheapest option
- **Use Pro for reasoning** - Complex analysis and coding
- **Leverage 1M context** - Gemini handles very long inputs well
- **Multi-image support** - Send multiple images for comparison tasks
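The guidance above can be captured in a small routing helper. The task categories here are illustrative assumptions; only the `flash` and `pro` aliases come from the model table.

```typescript
// Hypothetical task categories; only the 'flash' and 'pro' aliases
// come from the model table above.
type Task = 'summarize' | 'classify' | 'code-review' | 'complex-analysis';

function pickModel(task: Task): 'flash' | 'pro' {
  switch (task) {
    // High-volume, latency-sensitive work: Flash is fastest and cheapest.
    case 'summarize':
    case 'classify':
      return 'flash';
    // Complex analysis and coding: Pro has the strongest reasoning.
    case 'code-review':
    case 'complex-analysis':
      return 'pro';
  }
}
```

`LLMist.createAgent().withModel(pickModel('summarize'))` would then route each task to the cheaper model whenever deep reasoning isn't needed.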