
# Class: OpenAIChatProvider

Defined in: providers/openai.ts:67

## Extends

- `BaseProviderAdapter`

## Constructors

### constructor

> **new OpenAIChatProvider**(`client`): `OpenAIChatProvider`

Defined in: providers/base-provider.ts:23

#### Parameters

- `client`: `unknown`

#### Returns

`OpenAIChatProvider`

#### Inherited from

`BaseProviderAdapter.constructor`

## Properties

### providerId

> `readonly` **providerId**: `"openai"`

Defined in: providers/openai.ts:68

#### Overrides

`BaseProviderAdapter.providerId`

## Methods

### countTokens()

> **countTokens**(`messages`, `descriptor`, `_spec?`): `Promise`<`number`>

Defined in: providers/openai.ts:372

Count tokens in messages using OpenAI’s tiktoken library.

This method provides accurate token estimation for OpenAI models by:

- using the model-specific tokenizer encoding
- accounting for message formatting overhead
- falling back to the `gpt-4o` encoding for unknown models

#### Parameters

- `messages` (`LLMMessage[]`): the messages to count tokens for
- `descriptor` (`ModelDescriptor`): model descriptor containing the model name
- `_spec?` (`ModelSpec`): optional model specification (currently unused)

#### Returns

`Promise`<`number`>

Promise resolving to the estimated input token count. Never throws: falls back to character-based estimation (4 chars/token) on error.

#### Example

```ts
const count = await provider.countTokens(
  [{ role: "user", content: "Hello!" }],
  { provider: "openai", name: "gpt-4" }
);
```
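The character-based fallback mentioned above can be sketched as follows. Note that `fallbackTokenEstimate` and the minimal `LLMMessage` shape are illustrative only, not part of the provider's API, and the real method also accounts for message formatting overhead:

```typescript
// Illustrative sketch of the documented fallback path: when the tiktoken
// encoding cannot be used, estimate roughly 4 characters per token.
type LLMMessage = { role: string; content: string };

function fallbackTokenEstimate(messages: LLMMessage[]): number {
  // Sum the content lengths of all messages, then divide by 4,
  // rounding up so short messages still count as at least one token.
  const totalChars = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(totalChars / 4);
}
```

For the single 6-character message `"Hello!"`, this yields `Math.ceil(6 / 4) = 2`.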

### generateImage()

> **generateImage**(`options`): `Promise`<`ImageGenerationResult`>

Defined in: providers/openai.ts:90

#### Parameters

- `options`: `ImageGenerationOptions`

#### Returns

`Promise`<`ImageGenerationResult`>


### generateSpeech()

> **generateSpeech**(`options`): `Promise`<`SpeechGenerationResult`>

Defined in: providers/openai.ts:163

#### Parameters

- `options`: `SpeechGenerationOptions`

#### Returns

`Promise`<`SpeechGenerationResult`>


### getImageModelSpecs()

> **getImageModelSpecs**(): `ImageModelSpec[]`

Defined in: providers/openai.ts:82

#### Returns

`ImageModelSpec[]`


### getModelSpecs()

> **getModelSpecs**(): `ModelSpec[]`

Defined in: providers/openai.ts:74

Optionally provide model specifications for this provider. This allows the model registry to discover available models and their capabilities.

#### Returns

`ModelSpec[]`

#### Overrides

`BaseProviderAdapter.getModelSpecs`
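As a sketch of how a registry might consume these specs for discovery. The `ModelSpec` fields and the `ModelRegistry` class below are assumptions for illustration, not the library's actual registry API:

```typescript
// Hypothetical registry that discovers models from provider adapters.
interface ModelSpec {
  name: string;
  contextWindow: number;
}

interface SpecProvider {
  providerId: string;
  getModelSpecs(): ModelSpec[];
}

class ModelRegistry {
  private specs = new Map<string, ModelSpec>();

  registerProvider(provider: SpecProvider): void {
    for (const spec of provider.getModelSpecs()) {
      // Key by "providerId/modelName" so different providers can
      // expose models with the same name without colliding.
      this.specs.set(`${provider.providerId}/${spec.name}`, spec);
    }
  }

  lookup(providerId: string, name: string): ModelSpec | undefined {
    return this.specs.get(`${providerId}/${name}`);
  }
}
```

Namespacing by provider id is the design choice that lets one registry hold specs from several adapters at once.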


### getSpeechModelSpecs()

> **getSpeechModelSpecs**(): `SpeechModelSpec[]`

Defined in: providers/openai.ts:155

#### Returns

`SpeechModelSpec[]`


### stream()

> **stream**(`options`, `descriptor`, `spec?`): `LLMStream`

Defined in: providers/base-provider.ts:37

Template method that defines the skeleton of the streaming algorithm. This orchestrates the four-step process without dictating provider-specific details.

#### Parameters

- `options`: `LLMGenerationOptions`
- `descriptor`: `ModelDescriptor`
- `spec?`: `ModelSpec`

#### Returns

`LLMStream`

#### Inherited from

`BaseProviderAdapter.stream`
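The template-method structure described above can be sketched as follows. The step names (`buildRequest`, `openStream`, `parseChunk`, `finalize`) are invented for illustration and are not the library's actual hooks:

```typescript
// Sketch of the template-method pattern: the base class fixes the order
// of the four steps; subclasses fill in provider-specific behavior.
abstract class StreamingAdapter {
  // Template method: the skeleton lives here and never changes.
  stream(prompt: string): string[] {
    const request = this.buildRequest(prompt);         // step 1
    const raw = this.openStream(request);              // step 2
    const chunks = raw.map((c) => this.parseChunk(c)); // step 3
    this.finalize(chunks);                             // step 4
    return chunks;
  }

  protected abstract buildRequest(prompt: string): string;
  protected abstract openStream(request: string): string[];
  protected abstract parseChunk(chunk: string): string;
  protected abstract finalize(chunks: string[]): void;
}

// Toy subclass: "streams" by splitting on spaces and upper-casing.
class EchoAdapter extends StreamingAdapter {
  protected buildRequest(prompt: string): string { return prompt; }
  protected openStream(request: string): string[] { return request.split(" "); }
  protected parseChunk(chunk: string): string { return chunk.toUpperCase(); }
  protected finalize(_chunks: string[]): void { /* no-op */ }
}
```

The point of the pattern is that `stream` itself is inherited unchanged; only the protected hooks differ per provider.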


### supports()

> **supports**(`descriptor`): `boolean`

Defined in: providers/openai.ts:70

#### Parameters

- `descriptor`: `ModelDescriptor`

#### Returns

`boolean`

#### Overrides

`BaseProviderAdapter.supports`


### supportsImageGeneration()

> **supportsImageGeneration**(`modelId`): `boolean`

Defined in: providers/openai.ts:86

#### Parameters

- `modelId`: `string`

#### Returns

`boolean`


### supportsSpeechGeneration()

> **supportsSpeechGeneration**(`modelId`): `boolean`

Defined in: providers/openai.ts:159

#### Parameters

- `modelId`: `string`

#### Returns

`boolean`