# OpenAIChatProvider

Defined in: providers/openai.ts:67
## Extends

- `BaseProviderAdapter`
## Constructors

### Constructor

```ts
new OpenAIChatProvider(client): OpenAIChatProvider
```

Defined in: providers/base-provider.ts:23

#### Parameters

##### client

`unknown`

#### Returns

`OpenAIChatProvider`

#### Inherited from

`BaseProviderAdapter.constructor`
## Properties

### providerId

```ts
readonly providerId: "openai"
```

Defined in: providers/openai.ts:68

#### Overrides

`BaseProviderAdapter.providerId`
## Methods

### countTokens()

```ts
countTokens(messages, descriptor, _spec?): Promise<number>
```

Defined in: providers/openai.ts:372
Count tokens in messages using OpenAI’s tiktoken library.
This method provides accurate token estimation for OpenAI models by:
- Using the model-specific tokenizer encoding
- Accounting for message formatting overhead
- Falling back to gpt-4o encoding for unknown models
#### Parameters

##### messages

The messages to count tokens for.

##### descriptor

Model descriptor containing the model name.

##### _spec?

Optional model specification (currently unused).

#### Returns

`Promise<number>`

Promise resolving to the estimated input token count.
#### Throws

Never throws; falls back to character-based estimation (4 chars/token) on error.
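The character-based fallback described above can be sketched as a small helper. This is an illustrative sketch only; `estimateTokensFallback` and the `ChatMessage` shape are assumptions for the example, not the library's internals.

```typescript
// Hypothetical sketch of the documented fallback: when the tiktoken
// encoding is unavailable, approximate tokens as ceil(chars / 4).
interface ChatMessage {
  role: string;
  content: string;
}

function estimateTokensFallback(messages: ChatMessage[]): number {
  // Sum the character length of every role and content string.
  const chars = messages.reduce(
    (sum, m) => sum + m.role.length + m.content.length,
    0
  );
  // 4 characters per token is the documented heuristic.
  return Math.ceil(chars / 4);
}
```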
#### Example

```ts
const count = await provider.countTokens(
  [{ role: "user", content: "Hello!" }],
  { provider: "openai", name: "gpt-4" }
);
```

### generateImage()

```ts
generateImage(options): Promise<ImageGenerationResult>
```
Defined in: providers/openai.ts:90
#### Parameters

##### options

#### Returns

`Promise<ImageGenerationResult>`
### generateSpeech()

```ts
generateSpeech(options): Promise<SpeechGenerationResult>
```
Defined in: providers/openai.ts:163
#### Parameters

##### options

#### Returns

`Promise<SpeechGenerationResult>`
### getImageModelSpecs()

```ts
getImageModelSpecs(): ImageModelSpec[]
```
Defined in: providers/openai.ts:82
#### Returns

`ImageModelSpec[]`

### getModelSpecs()

```ts
getModelSpecs(): ModelSpec[]
```
Defined in: providers/openai.ts:74
Optionally provide model specifications for this provider. This allows the model registry to discover available models and their capabilities.
#### Returns

`ModelSpec[]`

#### Overrides

`BaseProviderAdapter.getModelSpecs`
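To illustrate the discovery role described above, here is a minimal sketch of how a registry might collect specs from adapters. The `ModelSpec` fields, `ProviderAdapter` shape, and `discoverModels` helper are assumptions made for the example, not the library's actual types.

```typescript
// Hypothetical minimal shapes for the sketch.
interface ModelSpec {
  name: string;
  contextWindow: number;
}

interface ProviderAdapter {
  providerId: string;
  getModelSpecs?(): ModelSpec[];
}

// Collect every advertised model into a "provider/model" keyed map.
function discoverModels(adapters: ProviderAdapter[]): Map<string, ModelSpec> {
  const registry = new Map<string, ModelSpec>();
  for (const adapter of adapters) {
    // getModelSpecs() is optional on the base adapter, so guard the call.
    for (const spec of adapter.getModelSpecs?.() ?? []) {
      registry.set(`${adapter.providerId}/${spec.name}`, spec);
    }
  }
  return registry;
}
```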
### getSpeechModelSpecs()

```ts
getSpeechModelSpecs(): SpeechModelSpec[]
```
Defined in: providers/openai.ts:155
#### Returns

`SpeechModelSpec[]`

### stream()

```ts
stream(options, descriptor, spec?): LLMStream
```
Defined in: providers/base-provider.ts:37
Template method that defines the skeleton of the streaming algorithm. This orchestrates the four-step process without dictating provider-specific details.
#### Parameters

##### options

##### descriptor

##### spec?

#### Returns

`LLMStream`

#### Inherited from

`BaseProviderAdapter.stream`
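The "template method" wording above refers to the classic pattern: the base class fixes the skeleton and subclasses fill in the steps. The sketch below illustrates that pattern generically; the step names are invented for the example, since the actual four steps live in providers/base-provider.ts and are not listed here.

```typescript
// Generic illustration of the template-method pattern (not the real
// BaseProviderAdapter). Step names are hypothetical.
abstract class TemplateStreamer {
  // The template method: a fixed skeleton calling provider hooks in order.
  run(input: string): string[] {
    const prepared = this.prepare(input);
    const raw = this.request(prepared);
    const chunks = this.parse(raw);
    return this.finalize(chunks);
  }

  protected abstract prepare(input: string): string;
  protected abstract request(prepared: string): string;
  protected abstract parse(raw: string): string[];

  // A default step subclasses may override: drop empty chunks.
  protected finalize(chunks: string[]): string[] {
    return chunks.filter((c) => c.length > 0);
  }
}

// A trivial subclass supplying the provider-specific steps.
class EchoStreamer extends TemplateStreamer {
  protected prepare(input: string): string {
    return input.trim();
  }
  protected request(prepared: string): string {
    return prepared;
  }
  protected parse(raw: string): string[] {
    return raw.split(" ");
  }
}
```

The point of the pattern is that `run()` never changes per provider; only the hook implementations do.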
### supports()

```ts
supports(descriptor): boolean
```
Defined in: providers/openai.ts:70
#### Parameters

##### descriptor

#### Returns

`boolean`

#### Overrides

`BaseProviderAdapter.supports`
### supportsImageGeneration()

```ts
supportsImageGeneration(modelId): boolean
```
Defined in: providers/openai.ts:86
#### Parameters

##### modelId

`string`

#### Returns

`boolean`
### supportsSpeechGeneration()

```ts
supportsSpeechGeneration(modelId): boolean
```
Defined in: providers/openai.ts:159
#### Parameters

##### modelId

`string`

#### Returns

`boolean`