ballerinax/openai.assistants Ballerina library
Overview
OpenAI, an AI research organization focused on creating friendly AI for humanity, offers the OpenAI API to access its powerful AI models for tasks like natural language processing and image generation.
The ballerinax/openai.assistants connector allows developers to seamlessly integrate OpenAI's advanced language models into their applications by interacting with the OpenAI REST API v1. This connector provides tools to build powerful OpenAI Assistants capable of performing a wide range of tasks, such as generating human-like text, managing conversations with persistent threads, and utilizing multiple tools in parallel. OpenAI has recently announced a variety of new features and improvements to the Assistants API, moving the Beta to a new API version, OpenAI-Beta: assistants=v2. Users can interact with both API v1 and v2 by passing the respective API version header with the request.
Setup guide
To use the OpenAI connector, you must have access to the OpenAI API through an OpenAI Platform account and a project under it. If you do not have an OpenAI Platform account, you can sign up for one here.
Create an OpenAI API key
- Open the OpenAI Platform Dashboard.
- Navigate to Dashboard -> API keys.
- Click on the "Create new secret key" button.
- Fill in the details and click on "Create secret key".
- Store the API key securely to use in your application.
Quickstart
To use the OpenAI Assistants connector in your Ballerina application, update the .bal file as follows:
Step 1: Import the module
Import the openai.assistants module.
import ballerinax/openai.assistants;
Step 2: Instantiate a new connector
Create an assistants:ConnectionConfig with the obtained access token and initialize the connector with it.
configurable string token = ?;

final assistants:Client openAIAssistant = check new ({
    auth: {
        token
    }
});
Setting HTTP Headers in Ballerina
Calls to the Assistants API require that you pass a beta HTTP header. In Ballerina, you can define the header as follows:
final map<string|string[]> headers = { "OpenAI-Beta": ["assistants=v2"] };
Step 3: Invoke the connector operations
Now, utilize the available connector operations.
public function main() returns error? {
    // Define the required tool
    assistants:AssistantToolsCode tool = {
        'type: "code_interpreter"
    };

    // Define the assistant request object
    assistants:CreateAssistantRequest request = {
        model: "gpt-3.5-turbo",
        name: "Math Tutor",
        description: "An Assistant for personal math tutoring",
        instructions: "You are a personal math tutor. Help the user with their math questions.",
        tools: [tool]
    };

    // Call the `post assistants` resource to create an Assistant
    assistants:AssistantObject assistantResponse = check openAIAssistant->/assistants.post(request, headers);
}
Step 4: Run the Ballerina application
bal run
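As a sketch of a typical follow-up flow (this is illustrative and not part of the quickstart: the sample message text is made up, and the `content` and `assistant_id` payload field names are assumed to mirror the OpenAI Assistants API), you could create a thread, post a user message, and start a run with the assistant created above:

```ballerina
import ballerinax/openai.assistants;

public function runConversation(assistants:Client openAIAssistant, string assistantId,
        map<string|string[]> headers) returns error? {
    // Create a new thread to hold the conversation
    assistants:ThreadObject thread = check openAIAssistant->/threads.post({}, headers);

    // Add a user message to the thread
    assistants:MessageObject message = check openAIAssistant->/threads/[thread.id]/messages.post({
        role: "user",
        content: "Solve 3x + 11 = 14 for x."
    }, headers);

    // Start a run so the assistant processes the thread
    assistants:RunObject run = check openAIAssistant->/threads/[thread.id]/runs.post({
        assistant_id: assistantId
    }, headers);
}
```

The run is asynchronous; its status can then be polled with the `get threads/[thread_id]/runs/[run_id]` resource documented below.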
Examples
The OpenAI Assistants connector provides practical examples illustrating usage in various scenarios. Explore these examples, covering the following use cases:
- Math tutor bot - Create an assistant to solve mathematical problems with step-by-step solutions and interactive guidance.
- Weather assistant - Develop an assistant to provide weather information by leveraging function calls for temperature and rain probability.
Clients
openai.assistants: Client
The OpenAI REST API. Please see https://platform.openai.com/docs/api-reference for more details.
Constructor
Gets invoked to initialize the connector.
init (ConnectionConfig config, string serviceUrl)
- config ConnectionConfig - The configurations to be used when initializing the connector
- serviceUrl string "https://api.openai.com/v1" - URL of the target service
delete assistants/[string assistant_id]
function delete assistants/[string assistant_id](map<string|string[]> headers) returns DeleteAssistantResponse|error
Delete an assistant.
Return Type
- DeleteAssistantResponse|error - OK
delete threads/[string thread_id]
function delete threads/[string thread_id](map<string|string[]> headers) returns DeleteThreadResponse|error
Delete a thread.
Return Type
- DeleteThreadResponse|error - OK
delete threads/[string thread_id]/messages/[string message_id]
function delete threads/[string thread_id]/messages/[string message_id](map<string|string[]> headers) returns DeleteMessageResponse|error
Deletes a message.
Return Type
- DeleteMessageResponse|error - OK
get assistants
function get assistants(map<string|string[]> headers, *ListAssistantsQueries queries) returns ListAssistantsResponse|error
Returns a list of assistants.
Parameters
- queries *ListAssistantsQueries - Queries to be sent with the request
Return Type
- ListAssistantsResponse|error - OK
get assistants/[string assistant_id]
function get assistants/[string assistant_id](map<string|string[]> headers) returns AssistantObject|error
Retrieves an assistant.
Return Type
- AssistantObject|error - OK
get threads/[string thread_id]
function get threads/[string thread_id](map<string|string[]> headers) returns ThreadObject|error
Retrieves a thread.
Return Type
- ThreadObject|error - OK
get threads/[string thread_id]/messages
function get threads/[string thread_id]/messages(map<string|string[]> headers, *ListMessagesQueries queries) returns ListMessagesResponse|error
Returns a list of messages for a given thread.
Parameters
- queries *ListMessagesQueries - Queries to be sent with the request
Return Type
- ListMessagesResponse|error - OK
get threads/[string thread_id]/messages/[string message_id]
function get threads/[string thread_id]/messages/[string message_id](map<string|string[]> headers) returns MessageObject|error
Retrieve a message.
Return Type
- MessageObject|error - OK
get threads/[string thread_id]/runs
function get threads/[string thread_id]/runs(map<string|string[]> headers, *ListRunsQueries queries) returns ListRunsResponse|error
Returns a list of runs belonging to a thread.
Parameters
- queries *ListRunsQueries - Queries to be sent with the request
Return Type
- ListRunsResponse|error - OK
get threads/[string thread_id]/runs/[string run_id]
function get threads/[string thread_id]/runs/[string run_id](map<string|string[]> headers) returns RunObject|error
Retrieves a run.
get threads/[string thread_id]/runs/[string run_id]/steps
function get threads/[string thread_id]/runs/[string run_id]/steps(map<string|string[]> headers, *ListRunStepsQueries queries) returns ListRunStepsResponse|error
Returns a list of run steps belonging to a run.
Parameters
- queries *ListRunStepsQueries - Queries to be sent with the request
Return Type
- ListRunStepsResponse|error - OK
get threads/[string thread_id]/runs/[string run_id]/steps/[string step_id]
function get threads/[string thread_id]/runs/[string run_id]/steps/[string step_id](map<string|string[]> headers) returns RunStepObject|error
Retrieves a run step.
Return Type
- RunStepObject|error - OK
post assistants
function post assistants(CreateAssistantRequest payload, map<string|string[]> headers) returns AssistantObject|error
Create an assistant with a model and instructions.
Parameters
- payload CreateAssistantRequest -
Return Type
- AssistantObject|error - OK
post assistants/[string assistant_id]
function post assistants/[string assistant_id](ModifyAssistantRequest payload, map<string|string[]> headers) returns AssistantObject|error
Modifies an assistant.
Parameters
- payload ModifyAssistantRequest -
Return Type
- AssistantObject|error - OK
post threads
function post threads(CreateThreadRequest payload, map<string|string[]> headers) returns ThreadObject|error
Create a thread.
Parameters
- payload CreateThreadRequest -
Return Type
- ThreadObject|error - OK
post threads/[string thread_id]
function post threads/[string thread_id](ModifyThreadRequest payload, map<string|string[]> headers) returns ThreadObject|error
Modifies a thread.
Parameters
- payload ModifyThreadRequest -
Return Type
- ThreadObject|error - OK
post threads/[string thread_id]/messages
function post threads/[string thread_id]/messages(CreateMessageRequest payload, map<string|string[]> headers) returns MessageObject|error
Create a message.
Parameters
- payload CreateMessageRequest -
Return Type
- MessageObject|error - OK
post threads/[string thread_id]/messages/[string message_id]
function post threads/[string thread_id]/messages/[string message_id](ModifyMessageRequest payload, map<string|string[]> headers) returns MessageObject|error
Modifies a message.
Parameters
- payload ModifyMessageRequest -
Return Type
- MessageObject|error - OK
post threads/[string thread_id]/runs
function post threads/[string thread_id]/runs(CreateRunRequest payload, map<string|string[]> headers) returns RunObject|error
Create a run.
Parameters
- payload CreateRunRequest -
post threads/[string thread_id]/runs/[string run_id]
function post threads/[string thread_id]/runs/[string run_id](ModifyRunRequest payload, map<string|string[]> headers) returns RunObject|error
Modifies a run.
Parameters
- payload ModifyRunRequest -
post threads/[string thread_id]/runs/[string run_id]/cancel
function post threads/[string thread_id]/runs/[string run_id]/cancel(map<string|string[]> headers) returns RunObject|error
Cancels a run that is in_progress.
post threads/[string thread_id]/runs/[string run_id]/submit_tool_outputs
function post threads/[string thread_id]/runs/[string run_id]/submit_tool_outputs(SubmitToolOutputsRunRequest payload, map<string|string[]> headers) returns RunObject|error
When a run has the status "requires_action" and required_action.type is submit_tool_outputs, this endpoint can be used to submit the outputs from the tool calls once they're all completed. All outputs must be submitted in a single request.
Parameters
- payload SubmitToolOutputsRunRequest -
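A minimal sketch of this flow (the `status` check, the `tool_outputs` field, and the sample tool call ID and output value are assumptions based on the OpenAI Assistants API rather than fields documented in this reference):

```ballerina
import ballerinax/openai.assistants;

public function answerToolCalls(assistants:Client openAIAssistant, string threadId, string runId,
        map<string|string[]> headers) returns error? {
    // Fetch the run to check whether it is waiting on tool outputs
    assistants:RunObject run = check openAIAssistant->/threads/[threadId]/runs/[runId].get(headers);

    if run.status == "requires_action" {
        // Submit one output per pending tool call; all outputs go in a single request
        assistants:RunObject updated = check openAIAssistant->/threads/[threadId]/runs/[runId]/submit_tool_outputs.post({
            tool_outputs: [
                {tool_call_id: "call_abc123", output: "22.5"}
            ]
        }, headers);
    }
}
```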
post threads/runs
function post threads/runs(CreateThreadAndRunRequest payload, map<string|string[]> headers) returns RunObject|error
Create a thread and run it in one request.
Parameters
- payload CreateThreadAndRunRequest -
Records
openai.assistants: AssistantObject
Represents an assistant that can call the model and use tools.
Fields
- id string - The identifier, which can be referenced in API endpoints.
- 'object "assistant" - The object type, which is always assistant.
- created_at int - The Unix timestamp (in seconds) for when the assistant was created.
- name string - The name of the assistant. The maximum length is 256 characters.
- description string - The description of the assistant. The maximum length is 512 characters.
- model string - ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
- instructions string - The system instructions that the assistant uses. The maximum length is 256,000 characters.
- tools (AssistantToolsCode|AssistantToolsFileSearch|AssistantToolsFunction)[] - A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.
- tool_resources AssistantObject_tool_resources? -
- metadata record {} - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- temperature decimal?(default 1) - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
- top_p decimal?(default 1) - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
- response_format AssistantsApiResponseFormatOption? -
openai.assistants: AssistantObject_tool_resources
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
Fields
- code_interpreter AssistantObject_tool_resources_code_interpreter? -
- file_search AssistantObject_tool_resources_file_search? -
openai.assistants: AssistantObject_tool_resources_code_interpreter
Fields
openai.assistants: AssistantObject_tool_resources_file_search
Fields
- vector_store_ids string[]? - The ID of the vector store attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.
openai.assistants: AssistantsApiResponseFormat
An object describing the expected output of the model. If json_object, only function type tools are allowed to be passed to the run. If text, the model can return text or any value needed.
Fields
- 'type "text"|"json_object" (default "text") - Must be one of text or json_object.
openai.assistants: AssistantsNamedToolChoice
Specifies a tool the model should use. Use to force the model to call a specific tool.
Fields
- 'type "function"|"code_interpreter"|"file_search" - The type of the tool. If type is function, the function name must be set.
- 'function ChatCompletionNamedToolChoice_function? -
openai.assistants: AssistantToolsCode
Fields
- 'type "code_interpreter" - The type of tool being defined: code_interpreter
openai.assistants: AssistantToolsFileSearch
Fields
- 'type "file_search" - The type of tool being defined: file_search
- file_search AssistantToolsFileSearch_file_search? -
openai.assistants: AssistantToolsFileSearch_file_search
Overrides for the file search tool.
Fields
- max_num_results int? - The maximum number of results the file search tool should output. The default is 20 for gpt-4* models and 5 for gpt-3.5-turbo. This number should be between 1 and 50 inclusive. Note that the file search tool may output fewer than max_num_results results. See the file search tool documentation for more information.
openai.assistants: AssistantToolsFileSearchTypeOnly
Fields
- 'type "file_search" - The type of tool being defined: file_search
openai.assistants: AssistantToolsFunction
Fields
- 'type "function" - The type of tool being defined: function
- 'function FunctionObject -
openai.assistants: ChatCompletionNamedToolChoice_function
Fields
- name string - The name of the function to call.
openai.assistants: ClientHttp1Settings
Provides settings related to HTTP/1.x protocol.
Fields
- keepAlive KeepAlive(default http:KEEPALIVE_AUTO) - Specifies whether to reuse a connection for multiple requests
- chunking Chunking(default http:CHUNKING_AUTO) - The chunking behaviour of the request
- proxy ProxyConfig? - Proxy server related options
openai.assistants: ConnectionConfig
Provides a set of configurations for controlling the behaviours when communicating with a remote HTTP endpoint.
Fields
- auth BearerTokenConfig - Configurations related to client authentication
- httpVersion HttpVersion(default http:HTTP_2_0) - The HTTP version understood by the client
- http1Settings ClientHttp1Settings? - Configurations related to HTTP/1.x protocol
- http2Settings ClientHttp2Settings? - Configurations related to HTTP/2 protocol
- timeout decimal(default 60) - The maximum time to wait (in seconds) for a response before closing the connection
- forwarded string(default "disable") - The choice of setting forwarded/x-forwarded header
- poolConfig PoolConfiguration? - Configurations associated with request pooling
- cache CacheConfig? - HTTP caching related configurations
- compression Compression(default http:COMPRESSION_AUTO) - Specifies the way of handling compression (accept-encoding) header
- circuitBreaker CircuitBreakerConfig? - Configurations associated with the behaviour of the Circuit Breaker
- retryConfig RetryConfig? - Configurations associated with retrying
- responseLimits ResponseLimitConfigs? - Configurations associated with inbound response size limits
- secureSocket ClientSecureSocket? - SSL/TLS-related options
- proxy ProxyConfig? - Proxy server related options
- validation boolean(default true) - Enables the inbound payload validation functionality provided by the constraint package. Enabled by default
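As an illustrative sketch of tuning these fields (the retryConfig values shown are assumptions based on the standard Ballerina http:RetryConfig record; adjust to your needs), a client with a longer timeout and automatic retries could be configured as:

```ballerina
import ballerinax/openai.assistants;

configurable string token = ?;

// Configure a client with a 120-second timeout and up to 3 retries, 2 seconds apart
assistants:ConnectionConfig config = {
    auth: {
        token
    },
    timeout: 120,
    retryConfig: {
        count: 3,
        interval: 2
    }
};
final assistants:Client openAIAssistant = check new (config);
```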
openai.assistants: CreateAssistantRequest
Fields
- model string|"gpt-4o"|"gpt-4o-2024-05-13"|"gpt-4o-mini"|"gpt-4o-mini-2024-07-18"|"gpt-4-turbo"|"gpt-4-turbo-2024-04-09"|"gpt-4-0125-preview"|"gpt-4-turbo-preview"|"gpt-4-1106-preview"|"gpt-4-vision-preview"|"gpt-4"|"gpt-4-0314"|"gpt-4-0613"|"gpt-4-32k"|"gpt-4-32k-0314"|"gpt-4-32k-0613"|"gpt-3.5-turbo"|"gpt-3.5-turbo-16k"|"gpt-3.5-turbo-0613"|"gpt-3.5-turbo-1106"|"gpt-3.5-turbo-0125"|"gpt-3.5-turbo-16k-0613" - ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
- name string? - The name of the assistant. The maximum length is 256 characters.
- description string? - The description of the assistant. The maximum length is 512 characters.
- instructions string? - The system instructions that the assistant uses. The maximum length is 256,000 characters.
- tools (AssistantToolsCode|AssistantToolsFileSearch|AssistantToolsFunction)[](default []) - A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.
- tool_resources CreateAssistantRequest_tool_resources? -
- metadata record {}? - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- temperature decimal?(default 1) - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
- top_p decimal?(default 1) - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
- response_format AssistantsApiResponseFormatOption? -
openai.assistants: CreateAssistantRequest_tool_resources
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
Fields
- code_interpreter CreateAssistantRequest_tool_resources_code_interpreter? -
- file_search CreateAssistantRequest_tool_resources_file_search? -
openai.assistants: CreateAssistantRequest_tool_resources_code_interpreter
Fields
openai.assistants: CreateAssistantRequest_tool_resources_file_search
Fields
- vector_store_ids string[]? - The vector store attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.
- vector_stores CreateAssistantRequest_tool_resources_file_search_vector_stores[]? - A helper to create a vector store with file_ids and attach it to this assistant. There can be a maximum of 1 vector store attached to the assistant.
openai.assistants: CreateAssistantRequest_tool_resources_file_search_vector_stores
Fields
- metadata record {}? - Set of 16 key-value pairs that can be attached to a vector store. This can be useful for storing additional information about the vector store in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
openai.assistants: CreateMessageRequest
Fields
- role "user"|"assistant" - The role of the entity that is creating the message. Allowed values include: user: Indicates the message is sent by an actual user and should be used in most cases to represent user-generated messages. assistant: Indicates the message is generated by the assistant. Use this value to insert messages from the assistant into the conversation.
- attachments MessageObject_attachments[]? - A list of files attached to the message, and the tools they should be added to.
- metadata record {}? - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
openai.assistants: CreateRunRequest
Fields
- model string|"gpt-4o"|"gpt-4o-2024-05-13"|"gpt-4o-mini"|"gpt-4o-mini-2024-07-18"|"gpt-4-turbo"|"gpt-4-turbo-2024-04-09"|"gpt-4-0125-preview"|"gpt-4-turbo-preview"|"gpt-4-1106-preview"|"gpt-4-vision-preview"|"gpt-4"|"gpt-4-0314"|"gpt-4-0613"|"gpt-4-32k"|"gpt-4-32k-0314"|"gpt-4-32k-0613"|"gpt-3.5-turbo"|"gpt-3.5-turbo-16k"|"gpt-3.5-turbo-0613"|"gpt-3.5-turbo-1106"|"gpt-3.5-turbo-0125"|"gpt-3.5-turbo-16k-0613"? - The ID of the model to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
- instructions string? - Overrides the instructions of the assistant. This is useful for modifying the behavior on a per-run basis.
- additional_instructions string? - Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions.
- additional_messages CreateMessageRequest[]? - Adds additional messages to the thread before creating the run.
- tools (AssistantToolsCode|AssistantToolsFileSearch|AssistantToolsFunction)[]? - Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.
- metadata record {}? - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- temperature decimal?(default 1) - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
- top_p decimal?(default 1) - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
- max_prompt_tokens int? - The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info.
- max_completion_tokens int? - The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. See incomplete_details for more info.
- truncation_strategy TruncationObject? -
- tool_choice AssistantsApiToolChoiceOption? -
- parallel_tool_calls ParallelToolCalls? -
- response_format AssistantsApiResponseFormatOption? -
openai.assistants: CreateThreadAndRunRequest
Fields
- thread CreateThreadRequest? -
- model string|"gpt-4o"|"gpt-4o-2024-05-13"|"gpt-4o-mini"|"gpt-4o-mini-2024-07-18"|"gpt-4-turbo"|"gpt-4-turbo-2024-04-09"|"gpt-4-0125-preview"|"gpt-4-turbo-preview"|"gpt-4-1106-preview"|"gpt-4-vision-preview"|"gpt-4"|"gpt-4-0314"|"gpt-4-0613"|"gpt-4-32k"|"gpt-4-32k-0314"|"gpt-4-32k-0613"|"gpt-3.5-turbo"|"gpt-3.5-turbo-16k"|"gpt-3.5-turbo-0613"|"gpt-3.5-turbo-1106"|"gpt-3.5-turbo-0125"|"gpt-3.5-turbo-16k-0613"? - The ID of the model to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
- instructions string? - Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis.
- tools (AssistantToolsCode|AssistantToolsFileSearch|AssistantToolsFunction)[]? - Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.
- tool_resources CreateThreadAndRunRequest_tool_resources? -
- metadata record {}? - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- temperature decimal?(default 1) - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
- top_p decimal?(default 1) - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
- max_prompt_tokens int? - The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info.
- max_completion_tokens int? - The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. See incomplete_details for more info.
- truncation_strategy TruncationObject? -
- tool_choice AssistantsApiToolChoiceOption? -
- parallel_tool_calls ParallelToolCalls? -
- response_format AssistantsApiResponseFormatOption? -
openai.assistants: CreateThreadAndRunRequest_tool_resources
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
Fields
- code_interpreter CreateAssistantRequest_tool_resources_code_interpreter? -
- file_search AssistantObject_tool_resources_file_search? -
openai.assistants: CreateThreadRequest
Fields
- messages CreateMessageRequest[]? - A list of messages to start the thread with.
- tool_resources CreateThreadRequest_tool_resources? -
- metadata record {}? - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
openai.assistants: CreateThreadRequest_tool_resources
A set of resources that are made available to the assistant's tools in this thread. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
Fields
- code_interpreter CreateAssistantRequest_tool_resources_code_interpreter? -
- file_search CreateThreadRequest_tool_resources_file_search? -
openai.assistants: CreateThreadRequest_tool_resources_file_search
Fields
- vector_store_ids string[]? - The vector store attached to this thread. There can be a maximum of 1 vector store attached to the thread.
- vector_stores CreateAssistantRequest_tool_resources_file_search_vector_stores[]? - A helper to create a vector store with file_ids and attach it to this thread. There can be a maximum of 1 vector store attached to the thread.
openai.assistants: DeleteAssistantResponse
Fields
- id string -
- deleted boolean -
- 'object "assistant.deleted" -
openai.assistants: DeleteMessageResponse
Fields
- id string -
- deleted boolean -
- 'object "thread.message.deleted" -
openai.assistants: DeleteThreadResponse
Fields
- id string -
- deleted boolean -
- 'object "thread.deleted" -
openai.assistants: FunctionObject
Fields
- description string? - A description of what the function does, used by the model to choose when and how to call the function.
- name string - The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
- parameters FunctionParameters? -
openai.assistants: FunctionParameters
The parameters the functions accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format.
Omitting parameters defines a function with an empty parameter list.
openai.assistants: ListAssistantsQueries
Represents the Queries record for the operation: listAssistants
Fields
- before string? - A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.
- 'limit int(default 20) - A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
- after string? - A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.
- 'order "asc"|"desc" (default "desc") - Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.
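As a sketch of using these queries (this assumes the `openAIAssistant` client and `headers` map from the quickstart; the named-argument call style follows the generated client's query record above):

```ballerina
import ballerinax/openai.assistants;

public function listAllAssistants(assistants:Client openAIAssistant,
        map<string|string[]> headers) returns error? {
    // List the first 5 assistants, oldest first
    assistants:ListAssistantsResponse page =
        check openAIAssistant->/assistants.get(headers, 'limit = 5, 'order = "asc");

    // If more results exist, fetch the next page using the last ID as a cursor
    if page.has_more {
        assistants:ListAssistantsResponse nextPage =
            check openAIAssistant->/assistants.get(headers, 'limit = 5, 'order = "asc", after = page.last_id);
    }
}
```

The same before/after cursor pattern applies to the listMessages, listRuns, and listRunSteps operations below.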
openai.assistants: ListAssistantsResponse
Fields
- 'object string -
- data AssistantObject[] -
- first_id string -
- last_id string -
- has_more boolean -
openai.assistants: ListMessagesQueries
Represents the Queries record for the operation: listMessages
Fields
- run_id string? - Filter messages by the run ID that generated them.
- before string? - A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.
- 'limit int(default 20) - A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
- after string? - A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.
- 'order "asc"|"desc" (default "desc") - Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.
openai.assistants: ListMessagesResponse
Fields
- 'object string -
- data MessageObject[] -
- first_id string -
- last_id string -
- has_more boolean -
openai.assistants: ListRunsQueries
Represents the Queries record for the operation: listRuns
Fields
- before string? - A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.
- 'limit int(default 20) - A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
- after string? - A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.
- 'order "asc"|"desc" (default "desc") - Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.
openai.assistants: ListRunsResponse
Fields
- 'object string -
- data RunObject[] -
- first_id string -
- last_id string -
- has_more boolean -
openai.assistants: ListRunStepsQueries
Represents the Queries record for the operation: listRunSteps
Fields
- before string? - A cursor for use in pagination. before is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include before=obj_foo in order to fetch the previous page of the list.
- 'limit int (default 20) - A limit on the number of objects to be returned. Limit can range between 1 and 100, and the default is 20.
- after string? - A cursor for use in pagination. after is an object ID that defines your place in the list. For instance, if you make a list request and receive 100 objects, ending with obj_foo, your subsequent call can include after=obj_foo in order to fetch the next page of the list.
- 'order "asc"|"desc" (default "desc") - Sort order by the created_at timestamp of the objects. asc for ascending order and desc for descending order.
openai.assistants: ListRunStepsResponse
Fields
- 'object string -
- data RunStepObject[] -
- first_id string -
- last_id string -
- has_more boolean -
openai.assistants: MessageContentImageFileObject
References an image File in the content of a message.
Fields
- 'type "image_file" - Always image_file.
- image_file MessageContentImageFileObject_image_file -
openai.assistants: MessageContentImageFileObject_image_file
Fields
- detail "auto"|"low"|"high" (default "auto") - Specifies the detail level of the image if specified by the user. low uses fewer tokens; you can opt in to high resolution using high.
openai.assistants: MessageContentImageUrlObject
References an image URL in the content of a message.
Fields
- 'type "image_url" - The type of the content part.
- image_url MessageContentImageUrlObject_image_url -
openai.assistants: MessageContentImageUrlObject_image_url
Fields
- url string - The external URL of the image. Must be one of the supported image types: jpeg, jpg, png, gif, webp.
- detail "auto"|"low"|"high" (default "auto") - Specifies the detail level of the image. low uses fewer tokens; you can opt in to high resolution using high. Default value is auto.
openai.assistants: MessageContentTextAnnotationsFileCitationObject
A citation within the message that points to a specific quote from a specific File associated with the assistant or the message. Generated when the assistant uses the "file_search" tool to search files.
Fields
- 'type "file_citation" - Always file_citation.
- text string - The text in the message content that needs to be replaced.
- file_citation MessageContentTextAnnotationsFileCitationObject_file_citation -
- start_index int -
- end_index int -
openai.assistants: MessageContentTextAnnotationsFileCitationObject_file_citation
Fields
- file_id string - The ID of the specific File the citation is from.
openai.assistants: MessageContentTextAnnotationsFilePathObject
A URL for the file that's generated when the assistant used the code_interpreter tool to generate a file.
Fields
- 'type "file_path" - Always file_path.
- text string - The text in the message content that needs to be replaced.
- file_path MessageContentTextAnnotationsFilePathObject_file_path -
- start_index int -
- end_index int -
openai.assistants: MessageContentTextAnnotationsFilePathObject_file_path
Fields
- file_id string - The ID of the file that was generated.
openai.assistants: MessageContentTextObject
The text content that is part of a message.
Fields
- 'type "text" - Always text.
- text MessageContentTextObject_text -
openai.assistants: MessageContentTextObject_text
Fields
- value string - The data that makes up the text.
openai.assistants: MessageObject
Represents a message within a thread.
Fields
- id string - The identifier, which can be referenced in API endpoints.
- 'object "thread.message" - The object type, which is always thread.message.
- created_at int - The Unix timestamp (in seconds) for when the message was created.
- status "in_progress"|"incomplete"|"completed"? - The status of the message, which can be either in_progress, incomplete, or completed.
- incomplete_details MessageObject_incomplete_details? -
- completed_at int? - The Unix timestamp (in seconds) for when the message was completed.
- incomplete_at int? - The Unix timestamp (in seconds) for when the message was marked as incomplete.
- role "user"|"assistant" - The entity that produced the message. One of user or assistant.
- content (MessageContentImageFileObject|MessageContentImageUrlObject|MessageContentTextObject)[] - The content of the message as an array of text and/or images.
- attachments MessageObject_attachments[]? - A list of files attached to the message, and the tools they were added to.
- metadata record {} - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
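Because content is a union-typed array, Ballerina type tests can branch on each part. A sketch:

```ballerina
import ballerina/io;
import ballerinax/openai.assistants;

// Print a short description of each content part of a message.
function printContent(assistants:MessageObject message) {
    foreach var part in message.content {
        if part is assistants:MessageContentImageFileObject {
            io:println("image file part");
        } else if part is assistants:MessageContentImageUrlObject {
            io:println("image URL: ", part.image_url.url);
        } else {
            // Remaining variant: MessageContentTextObject.
            io:println("text part");
        }
    }
}
```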
openai.assistants: MessageObject_attachments
Fields
- file_id string? - The ID of the file to attach to the message.
- tools (AssistantToolsCode|AssistantToolsFileSearchTypeOnly)[]? - The tools to add this file to.
openai.assistants: MessageObject_incomplete_details
On an incomplete message, details about why the message is incomplete.
Fields
- reason "content_filter"|"max_tokens"|"run_cancelled"|"run_expired"|"run_failed" - The reason the message is incomplete.
openai.assistants: MessageRequestContentTextObject
The text content that is part of a message.
Fields
- 'type "text" - Always text.
- text string - Text content to be sent to the model
openai.assistants: ModifyAssistantRequest
Fields
- model string? - ID of the model to use. You can use the List models API to see all of your available models, or see our Model overview for descriptions of them.
- name string? - The name of the assistant. The maximum length is 256 characters.
- description string? - The description of the assistant. The maximum length is 512 characters.
- instructions string? - The system instructions that the assistant uses. The maximum length is 256,000 characters.
- tools (AssistantToolsCode|AssistantToolsFileSearch|AssistantToolsFunction)[] (default []) - A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types code_interpreter, file_search, or function.
- tool_resources ModifyAssistantRequest_tool_resources? -
- metadata record {}? - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- temperature decimal? (default 1) - What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
- top_p decimal? (default 1) - An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.
- response_format AssistantsApiResponseFormatOption? -
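All fields of ModifyAssistantRequest are optional, so an update payload only names the fields to change. A sketch with hypothetical values:

```ballerina
import ballerinax/openai.assistants;

// Update only the name, instructions, and temperature of an assistant.
assistants:ModifyAssistantRequest update = {
    name: "Math Tutor",
    instructions: "You are a personal math tutor. Keep answers brief.",
    temperature: 0.2
};
```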
openai.assistants: ModifyAssistantRequest_tool_resources
A set of resources that are used by the assistant's tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
Fields
- code_interpreter ModifyAssistantRequest_tool_resources_code_interpreter? -
- file_search ModifyAssistantRequest_tool_resources_file_search? -
openai.assistants: ModifyAssistantRequest_tool_resources_code_interpreter
Fields
openai.assistants: ModifyAssistantRequest_tool_resources_file_search
Fields
- vector_store_ids string[]? - Overrides the vector store attached to this assistant. There can be a maximum of 1 vector store attached to the assistant.
openai.assistants: ModifyMessageRequest
Fields
- metadata record {}? - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
openai.assistants: ModifyRunRequest
Fields
- metadata record {}? - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
openai.assistants: ModifyThreadRequest
Fields
- tool_resources ThreadObject_tool_resources? -
- metadata record {}? - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
openai.assistants: ProxyConfig
Proxy server configurations to be used with the HTTP client endpoint.
Fields
- host string(default "") - Host name of the proxy server
- port int(default 0) - Proxy server port
- userName string(default "") - Proxy server username
- password string(default "") - Proxy server password
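A ProxyConfig value can be supplied when initializing the HTTP client, for deployments that sit behind a corporate proxy. A sketch with hypothetical host details:

```ballerina
import ballerinax/openai.assistants;

assistants:ProxyConfig proxy = {
    host: "proxy.example.com",
    port: 8080,
    userName: "svc-user",
    password: "changeit"
};
```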
openai.assistants: RunCompletionUsage
Usage statistics related to the run. This value will be null if the run is not in a terminal state (i.e. in_progress, queued, etc.).
Fields
- completion_tokens int - Number of completion tokens used over the course of the run.
- prompt_tokens int - Number of prompt tokens used over the course of the run.
- total_tokens int - Total number of tokens used (prompt + completion).
openai.assistants: RunObject
Represents an execution run on a thread.
Fields
- id string - The identifier, which can be referenced in API endpoints.
- 'object "thread.run" - The object type, which is always thread.run.
- created_at int - The Unix timestamp (in seconds) for when the run was created.
- status "queued"|"in_progress"|"requires_action"|"cancelling"|"cancelled"|"failed"|"completed"|"incomplete"|"expired" - The status of the run, which can be either queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired.
- required_action RunObject_required_action -
- last_error RunObject_last_error -
- expires_at int - The Unix timestamp (in seconds) for when the run will expire.
- started_at int - The Unix timestamp (in seconds) for when the run was started.
- cancelled_at int - The Unix timestamp (in seconds) for when the run was cancelled.
- failed_at int - The Unix timestamp (in seconds) for when the run failed.
- completed_at int - The Unix timestamp (in seconds) for when the run was completed.
- incomplete_details RunObject_incomplete_details -
- tools (AssistantToolsCode|AssistantToolsFileSearch|AssistantToolsFunction)[] - The list of tools that the assistant used for this run.
- metadata record {} - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- usage RunCompletionUsage -
- temperature decimal? - The sampling temperature used for this run. If not set, defaults to 1.
- top_p decimal? - The nucleus sampling value used for this run. If not set, defaults to 1.
- max_prompt_tokens int - The maximum number of prompt tokens specified to have been used over the course of the run.
- max_completion_tokens int - The maximum number of completion tokens specified to have been used over the course of the run.
- truncation_strategy TruncationObject -
- tool_choice AssistantsApiToolChoiceOption -
- parallel_tool_calls ParallelToolCalls -
- response_format AssistantsApiResponseFormatOption -
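Since usage is only populated once a run reaches a terminal state, callers typically check status first. A sketch of a terminal-state check using a match over the documented status values:

```ballerina
import ballerinax/openai.assistants;

// True once the run can no longer change state.
function isTerminal(assistants:RunObject run) returns boolean {
    match run.status {
        "cancelled"|"failed"|"completed"|"incomplete"|"expired" => {
            return true;
        }
        _ => {
            return false;
        }
    }
}
```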
openai.assistants: RunObject_incomplete_details
Details on why the run is incomplete. Will be null if the run is not incomplete.
Fields
- reason "max_completion_tokens"|"max_prompt_tokens"? - The reason why the run is incomplete. This will point to which specific token limit was reached over the course of the run.
openai.assistants: RunObject_last_error
The last error associated with this run. Will be null if there are no errors.
Fields
- code "server_error"|"rate_limit_exceeded"|"invalid_prompt" - One of server_error, rate_limit_exceeded, or invalid_prompt.
- message string - A human-readable description of the error.
openai.assistants: RunObject_required_action
Details on the action required to continue the run. Will be null if no action is required.
Fields
- 'type "submit_tool_outputs" - For now, this is always submit_tool_outputs.
- submit_tool_outputs RunObject_required_action_submit_tool_outputs -
openai.assistants: RunObject_required_action_submit_tool_outputs
Details on the tool outputs needed for this run to continue.
Fields
- tool_calls RunToolCallObject[] - A list of the relevant tool calls.
openai.assistants: RunStepCompletionUsage
Usage statistics related to the run step. This value will be null while the run step's status is in_progress.
Fields
- completion_tokens int - Number of completion tokens used over the course of the run step.
- prompt_tokens int - Number of prompt tokens used over the course of the run step.
- total_tokens int - Total number of tokens used (prompt + completion).
openai.assistants: RunStepDetailsMessageCreationObject
Details of the message creation by the run step.
Fields
- 'type "message_creation" - Always message_creation.
- message_creation RunStepDetailsMessageCreationObject_message_creation -
openai.assistants: RunStepDetailsMessageCreationObject_message_creation
Fields
- message_id string - The ID of the message that was created by this run step.
openai.assistants: RunStepDetailsToolCallsCodeObject
Details of the Code Interpreter tool call the run step was involved in.
Fields
- id string - The ID of the tool call.
- 'type "code_interpreter" - The type of tool call. This is always going to be code_interpreter for this type of tool call.
- code_interpreter RunStepDetailsToolCallsCodeObject_code_interpreter -
openai.assistants: RunStepDetailsToolCallsCodeObject_code_interpreter
The Code Interpreter tool call definition.
Fields
- input string - The input to the Code Interpreter tool call.
- outputs (RunStepDetailsToolCallsCodeOutputLogsObject|RunStepDetailsToolCallsCodeOutputImageObject)[] - The outputs from the Code Interpreter tool call. Code Interpreter can output one or more items, including text (logs) or images (image). Each of these is represented by a different object type.
openai.assistants: RunStepDetailsToolCallsCodeOutputImageObject
Fields
- 'type "image" - Always image.
- image RunStepDetailsToolCallsCodeOutputImageObject_image -
openai.assistants: RunStepDetailsToolCallsCodeOutputImageObject_image
Fields
openai.assistants: RunStepDetailsToolCallsCodeOutputLogsObject
Text output from the Code Interpreter tool call as part of a run step.
Fields
- 'type "logs" - Always logs.
- logs string - The text output from the Code Interpreter tool call.
openai.assistants: RunStepDetailsToolCallsFileSearchObject
Fields
- id string - The ID of the tool call object.
- 'type "file_search" - The type of tool call. This is always going to be file_search for this type of tool call.
- file_search record {} - For now, this is always going to be an empty object.
openai.assistants: RunStepDetailsToolCallsFunctionObject
Fields
- id string - The ID of the tool call object.
- 'type "function" - The type of tool call. This is always going to be function for this type of tool call.
- 'function RunStepDetailsToolCallsFunctionObject_function -
openai.assistants: RunStepDetailsToolCallsFunctionObject_function
The definition of the function that was called.
Fields
- name string - The name of the function.
- arguments string - The arguments passed to the function.
openai.assistants: RunStepDetailsToolCallsObject
Details of the tool call.
Fields
- 'type "tool_calls" - Always tool_calls.
- tool_calls (RunStepDetailsToolCallsCodeObject|RunStepDetailsToolCallsFileSearchObject|RunStepDetailsToolCallsFunctionObject)[] - An array of tool calls the run step was involved in. These can be associated with one of three types of tools: code_interpreter, file_search, or function.
openai.assistants: RunStepObject
Represents a step in execution of a run.
Fields
- id string - The identifier of the run step, which can be referenced in API endpoints.
- 'object "thread.run.step" - The object type, which is always thread.run.step.
- created_at int - The Unix timestamp (in seconds) for when the run step was created.
- 'type "message_creation"|"tool_calls" - The type of run step, which can be either message_creation or tool_calls.
- status "in_progress"|"cancelled"|"failed"|"completed"|"expired" - The status of the run step, which can be either in_progress, cancelled, failed, completed, or expired.
- step_details RunStepDetailsMessageCreationObject|RunStepDetailsToolCallsObject - The details of the run step.
- last_error RunStepObject_last_error -
- expired_at int? - The Unix timestamp (in seconds) for when the run step expired. A step is considered expired if the parent run is expired.
- cancelled_at int - The Unix timestamp (in seconds) for when the run step was cancelled.
- failed_at int - The Unix timestamp (in seconds) for when the run step failed.
- completed_at int - The Unix timestamp (in seconds) for when the run step completed.
- metadata record {}? - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
- usage RunStepCompletionUsage -
openai.assistants: RunStepObject_last_error
The last error associated with this run step. Will be null if there are no errors.
Fields
- code "server_error"|"rate_limit_exceeded" - One of server_error or rate_limit_exceeded.
- message string - A human-readable description of the error.
openai.assistants: RunToolCallObject
Tool call objects
Fields
- id string - The ID of the tool call. This ID must be referenced when you submit the tool outputs using the Submit tool outputs to run endpoint.
- 'type "function" - The type of tool call the output is required for. For now, this is always function.
- 'function RunToolCallObject_function -
openai.assistants: RunToolCallObject_function
The function definition.
Fields
- name string - The name of the function.
- arguments string - The arguments that the model expects you to pass to the function.
openai.assistants: SubmitToolOutputsRunRequest
Fields
- tool_outputs SubmitToolOutputsRunRequest_tool_outputs[] - A list of tools for which the outputs are being submitted.
openai.assistants: SubmitToolOutputsRunRequest_tool_outputs
Fields
- tool_call_id string? - The ID of the tool call in the required_action object within the run object the output is being submitted for.
- output string? - The output of the tool call to be submitted to continue the run.
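A submission pairs each tool_call_id from the run's required_action with its output. A sketch with a hypothetical call ID:

```ballerina
import ballerinax/openai.assistants;

assistants:SubmitToolOutputsRunRequest request = {
    tool_outputs: [
        {
            tool_call_id: "call_abc123", // taken from required_action in practice
            output: "22.4"
        }
    ]
};
```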
openai.assistants: ThreadObject
Represents a thread that contains messages.
Fields
- id string - The identifier, which can be referenced in API endpoints.
- 'object "thread" - The object type, which is always thread.
- created_at int - The Unix timestamp (in seconds) for when the thread was created.
- tool_resources ThreadObject_tool_resources -
- metadata record {} - Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format. Keys can be a maximum of 64 characters long and values can be a maximum of 512 characters long.
openai.assistants: ThreadObject_tool_resources
A set of resources that are made available to the assistant's tools in this thread. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
Fields
- code_interpreter CreateAssistantRequest_tool_resources_code_interpreter? -
- file_search ThreadObject_tool_resources_file_search? -
openai.assistants: ThreadObject_tool_resources_file_search
Fields
- vector_store_ids string[]? - The vector store attached to this thread. There can be a maximum of 1 vector store attached to the thread.
openai.assistants: TruncationObject
Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.
Fields
- 'type "auto"|"last_messages" - The truncation strategy to use for the thread. The default is auto. If set to last_messages, the thread will be truncated to the n most recent messages in the thread. When set to auto, messages in the middle of the thread will be dropped to fit the context length of the model, max_prompt_tokens.
- last_messages int? - The number of most recent messages from the thread when constructing the context for the run.
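For example, to keep only the ten most recent messages when constructing the run's context:

```ballerina
import ballerinax/openai.assistants;

assistants:TruncationObject strategy = {
    'type: "last_messages",
    last_messages: 10
};
```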
Union types
openai.assistants: AssistantsApiResponseFormatOption
Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
openai.assistants: AssistantsApiToolChoiceOption
Controls which (if any) tool is called by the model.
none means the model will not call any tools and instead generates a message. auto is the default value and means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Specifying a particular tool like {"type": "file_search"} or {"type": "function", "function": {"name": "my_function"}} forces the model to call that tool.
Import
import ballerinax/openai.assistants;
Metadata
Released date: about 1 month ago
Version: 1.0.0
License: Apache-2.0
Compatibility
Platform: any
Ballerina version: 2201.9.3
GraalVM compatible: Yes
Pull count
Total: 0
Current version: 0
Keywords
AI/assistants
vendor/OpenAI
cost/paid
custom-bot
run
threads
Contributors
Other versions
1.0.0