pub struct CreateThreadAndRunRequest {
    pub assistant_id: String,
    pub thread: Option<CreateThreadRequest>,
    pub model: Option<String>,
    pub instructions: Option<String>,
    pub tools: Option<Vec<AssistantTools>>,
    pub tool_resources: Option<AssistantToolResources>,
    pub metadata: Option<HashMap<String, Value>>,
    pub temperature: Option<f32>,
    pub top_p: Option<f32>,
    pub stream: Option<bool>,
    pub max_prompt_tokens: Option<u32>,
    pub max_completion_tokens: Option<u32>,
    pub truncation_strategy: Option<TruncationObject>,
    pub tool_choice: Option<AssistantsApiToolChoiceOption>,
    pub parallel_tool_calls: Option<bool>,
    pub response_format: Option<AssistantsApiResponseFormatOption>,
}
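As an illustrative sketch (not taken from the crate's own examples), a minimal value can be built directly from the struct literal: only assistant_id is required, and every other field may be left as None. The import path and the assistant ID below are assumptions.

// Minimal construction sketch: only assistant_id is required.
// The import path is assumed; adjust it to wherever this type lives in the crate.
use async_openai::types::CreateThreadAndRunRequest;

let request = CreateThreadAndRunRequest {
    assistant_id: "asst_abc123".to_string(), // placeholder assistant ID
    thread: None,        // no thread given, so an empty thread will be created
    model: None,         // fall back to the model configured on the assistant
    instructions: None,  // keep the assistant's default system message
    tools: None,
    tool_resources: None,
    metadata: None,
    temperature: None,
    top_p: None,
    stream: None,
    max_prompt_tokens: None,
    max_completion_tokens: None,
    truncation_strategy: None,
    tool_choice: None,
    parallel_tool_calls: None,
    response_format: None,
};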
Fields
assistant_id: String
The ID of the assistant to use to execute this run.
thread: Option<CreateThreadRequest>
If no thread is provided, an empty thread will be created.
model: Option<String>
The ID of the Model to be used to execute this run. If a value is provided here, it will override the model associated with the assistant. If not, the model associated with the assistant will be used.
instructions: Option<String>
Override the default system message of the assistant. This is useful for modifying the behavior on a per-run basis.
tools: Option<Vec<AssistantTools>>
Override the tools the assistant can use for this run. This is useful for modifying the behavior on a per-run basis.
tool_resources: Option<AssistantToolResources>
A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the code_interpreter tool requires a list of file IDs, while the file_search tool requires a list of vector store IDs.
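As a rough sketch of the wire format this field corresponds to (the exact field names of AssistantToolResources in this crate may differ; all IDs are placeholders):

use serde_json::json;

// Approximate JSON shape of tool_resources in the request body:
// code_interpreter takes file IDs, file_search takes vector store IDs.
let tool_resources_json = json!({
    "code_interpreter": { "file_ids": ["file-abc123"] },
    "file_search": { "vector_store_ids": ["vs_abc123"] }
});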
metadata: Option<HashMap<String, Value>>
temperature: Option<f32>
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
top_p: Option<f32>
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
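Continuing the construction sketch above, lowering temperature while leaving top_p unset nudges the run toward more deterministic output:

// Sketch: tune either temperature or top_p, not both.
let request = CreateThreadAndRunRequest {
    temperature: Some(0.2), // lower temperature -> more focused, deterministic output
    top_p: None,            // leave nucleus sampling unset
    ..request
};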
stream: Option<bool>
If true, returns a stream of events that happen during the Run as server-sent events, terminating when the Run enters a terminal state with a data: [DONE] message.
max_prompt_tokens: Option<u32>
The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info.
max_completion_tokens: Option<u32>
The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. See incomplete_details for more info.
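For example, continuing the sketch above, both caps can be set on the same request (the numbers are arbitrary):

// Bound token usage across all turns of the run; exceeding either cap
// ends the run with status incomplete.
let request = CreateThreadAndRunRequest {
    max_prompt_tokens: Some(4_000),
    max_completion_tokens: Some(1_000),
    ..request
};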
truncation_strategy: Option<TruncationObject>
Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.
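As a rough sketch of the JSON this field corresponds to in the request body (the concrete fields of TruncationObject may be named differently):

use serde_json::json;

// Keep only the last 10 messages of the thread in the run's initial context window.
let truncation_json = json!({
    "type": "last_messages",
    "last_messages": 10
});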
tool_choice: Option<AssistantsApiToolChoiceOption>
parallel_tool_calls: Option<bool>
Whether to enable parallel function calling during tool use.
response_format: Option<AssistantsApiResponseFormatOption>
Trait Implementations
impl Clone for CreateThreadAndRunRequest

fn clone(&self) -> CreateThreadAndRunRequest

fn clone_from(&mut self, source: &Self)
Performs copy-assignment from source.