
# Class: GeminiGenerativeProvider

Defined in: providers/gemini.ts:122

## Extends

- BaseProviderAdapter

## Constructors

### new GeminiGenerativeProvider()

new GeminiGenerativeProvider(client): GeminiGenerativeProvider

Defined in: providers/base-provider.ts:23

Parameters:

- client (unknown)

Returns: GeminiGenerativeProvider

Inherited from: BaseProviderAdapter.constructor

## Properties

### providerId

readonly providerId: "gemini"

Defined in: providers/gemini.ts:123

Overrides: BaseProviderAdapter.providerId

## Methods

### countTokens()

countTokens(messages, descriptor, _spec?): Promise&lt;number&gt;

Defined in: providers/gemini.ts:570

Counts tokens in messages using Gemini's native token-counting API.

This method provides accurate token estimation for Gemini models by:

- Using the SDK's countTokens() method
- Converting system messages to user+model exchanges, exactly as done during generation

Because the counting path mirrors the generation path, the reported count matches actual usage (0% error).

Parameters:

- messages (LLMMessage[]): The messages to count tokens for
- descriptor (ModelDescriptor): Model descriptor containing the model name
- _spec? (ModelSpec): Optional model specification (currently unused)

Returns: Promise&lt;number&gt;

Promise resolving to the estimated input token count. Never throws: falls back to character-based estimation (4 characters per token) on error.

Example:

```ts
const count = await provider.countTokens(
  [{ role: "user", content: "Hello!" }],
  { provider: "gemini", name: "gemini-1.5-pro" }
);
```
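When the SDK call fails, the documented fallback estimates tokens from character count. Below is a minimal sketch of that heuristic; the helper name and the way message contents are summed are illustrative assumptions, and only the 4-characters-per-token ratio comes from this page:

```typescript
// Hypothetical fallback sketch: estimate tokens at ~4 characters per token.
// The message shape mirrors the countTokens example above; the helper name
// estimateTokensByChars is not part of the real API.
interface LLMMessage {
  role: string;
  content: string;
}

function estimateTokensByChars(messages: LLMMessage[]): number {
  // Sum the character length of every message's content.
  const chars = messages.reduce((sum, m) => sum + m.content.length, 0);
  // Apply the 4 chars/token heuristic, rounding up.
  return Math.ceil(chars / 4);
}
```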

### generateImage()

generateImage(options): Promise&lt;ImageGenerationResult&gt;

Defined in: providers/gemini.ts:145

Parameters:

- options (ImageGenerationOptions)

Returns: Promise&lt;ImageGenerationResult&gt;


### generateSpeech()

generateSpeech(options): Promise&lt;SpeechGenerationResult&gt;

Defined in: providers/gemini.ts:233

Parameters:

- options (SpeechGenerationOptions)

Returns: Promise&lt;SpeechGenerationResult&gt;


### getImageModelSpecs()

getImageModelSpecs(): ImageModelSpec[]

Defined in: providers/gemini.ts:137

Returns: ImageModelSpec[]


### getModelSpecs()

getModelSpecs(): ModelSpec[]

Defined in: providers/gemini.ts:129

Provides model specifications for this provider. This allows the model registry to discover available models and their capabilities.

Returns: ModelSpec[]

Overrides: BaseProviderAdapter.getModelSpecs
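A hypothetical sketch of how a registry might consume these specs for discovery. The ModelSpec fields shown here (name, contextWindow) are illustrative assumptions, not the library's actual interface:

```typescript
// Assumed minimal shape of a model spec for illustration only;
// the real ModelSpec interface is defined by the library.
interface ModelSpec {
  name: string;
  contextWindow: number;
}

// Index specs by model name so callers can look up capabilities.
function buildRegistry(specs: ModelSpec[]): Map<string, ModelSpec> {
  const registry = new Map<string, ModelSpec>();
  for (const spec of specs) registry.set(spec.name, spec);
  return registry;
}
```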


### getSpeechModelSpecs()

getSpeechModelSpecs(): SpeechModelSpec[]

Defined in: providers/gemini.ts:225

Returns: SpeechModelSpec[]


### stream()

stream(options, descriptor, spec?): LLMStream

Defined in: providers/base-provider.ts:37

Template method that defines the skeleton of the streaming algorithm. It orchestrates the four-step process without dictating provider-specific details.

Parameters:

- options (LLMGenerationOptions)
- descriptor (ModelDescriptor)
- spec? (ModelSpec)

Returns: LLMStream

Inherited from: BaseProviderAdapter.stream
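The template-method pattern described above can be sketched generically as follows. The four step names are hypothetical placeholders, since this page does not name the steps the real BaseProviderAdapter uses:

```typescript
// Generic template-method sketch: the base class fixes the skeleton,
// subclasses supply the provider-specific steps. All step names here
// (buildRequest, fetchChunks, parseChunk, finalize) are hypothetical.
abstract class StreamTemplate<Req, Chunk, Out> {
  // The template method: a fixed four-step skeleton.
  run(input: string): Out[] {
    const request = this.buildRequest(input);             // step 1
    const chunks = this.fetchChunks(request);             // step 2
    const parsed = chunks.map((c) => this.parseChunk(c)); // step 3
    return this.finalize(parsed);                         // step 4
  }

  protected abstract buildRequest(input: string): Req;
  protected abstract fetchChunks(request: Req): Chunk[];
  protected abstract parseChunk(chunk: Chunk): Out;
  protected abstract finalize(outputs: Out[]): Out[];
}

// A minimal concrete subclass, purely for illustration.
class EchoTemplate extends StreamTemplate<string, string, string> {
  protected buildRequest(input: string): string { return input; }
  protected fetchChunks(request: string): string[] { return request.split(" "); }
  protected parseChunk(chunk: string): string { return chunk.toUpperCase(); }
  protected finalize(outputs: string[]): string[] { return outputs; }
}
```

The value of the pattern is that run() never changes across providers; only the protected hooks do.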


### supports()

supports(descriptor): boolean

Defined in: providers/gemini.ts:125

Parameters:

- descriptor (ModelDescriptor)

Returns: boolean

Overrides: BaseProviderAdapter.supports
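A plausible sketch of this override, assuming it simply compares the descriptor's provider field against this adapter's providerId of "gemini"; the actual implementation at providers/gemini.ts:125 may check more:

```typescript
// Descriptor shape taken from the countTokens example on this page.
interface ModelDescriptor {
  provider: string;
  name: string;
}

// Hypothetical: accept a model only when its descriptor names this provider.
function supports(descriptor: ModelDescriptor): boolean {
  return descriptor.provider === "gemini";
}
```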


### supportsImageGeneration()

supportsImageGeneration(modelId): boolean

Defined in: providers/gemini.ts:141

Parameters:

- modelId (string)

Returns: boolean


### supportsSpeechGeneration()

supportsSpeechGeneration(modelId): boolean

Defined in: providers/gemini.ts:229

Parameters:

- modelId (string)

Returns: boolean