Crate async_openai_wasm


Rust library for OpenAI on WASM

§Creating client

use async_openai_wasm::{Client, config::OpenAIConfig};

// Create an OpenAI client with the API key from the env var OPENAI_API_KEY and the default base URL.
let client = Client::new();

// Above is shortcut for
let config = OpenAIConfig::default();
let client = Client::with_config(config);

// OR use an API key from a different source and a non-default organization
let api_key = "sk-..."; // This secret could be from a file, or environment variable.
let config = OpenAIConfig::new()
    .with_api_key(api_key)
    .with_org_id("the-continental");

let client = Client::with_config(config);

// Use custom reqwest client
let http_client = reqwest::ClientBuilder::new().user_agent("async-openai-wasm").build().unwrap();
let client = Client::new().with_http_client(http_client);

§Microsoft Azure Endpoints

use async_openai_wasm::{Client, config::AzureConfig};

let config = AzureConfig::new()
    .with_api_base("https://my-resource-name.openai.azure.com")
    .with_api_version("2023-03-15-preview")
    .with_deployment_id("deployment-id")
    .with_api_key("...");

let client = Client::with_config(config);

// Note that `async-openai-wasm` only implements OpenAI spec
// and doesn't maintain parity with the spec of Azure OpenAI service.

§Making requests

use async_openai_wasm::{Client, types::CreateCompletionRequestArgs};

// Create client
let client = Client::new();

// Create request using builder pattern
// Every request struct has a companion builder struct with the same name plus an `Args` suffix
let request = CreateCompletionRequestArgs::default()
    .model("gpt-3.5-turbo-instruct")
    .prompt("Tell me the recipe of alfredo pasta")
    .max_tokens(40_u32)
    .build()
    .unwrap();

// Call API
let response = client
    .completions()      // Get the API "group" (completions, images, etc.) from the client
    .create(request)    // Make the API call in that "group"
    .await
    .unwrap();

println!("{}", response.choices.first().unwrap().text);
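OpenAI recommends the Chat Completions API over the legacy completions endpoint. A minimal sketch of a chat request follows the same builder pattern; it assumes this crate mirrors `async-openai`'s chat types and `client.chat()` group (`CreateChatCompletionRequestArgs` and `ChatCompletionRequestUserMessageArgs` are assumptions based on that parity, and `"gpt-4o-mini"` is just a placeholder model name):

```rust
// Sketch only: type and method names assume parity with `async-openai`.
use async_openai_wasm::{
    Client,
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
};

let client = Client::new();

// Build a single-turn conversation with one user message
let request = CreateChatCompletionRequestArgs::default()
    .model("gpt-4o-mini") // placeholder; use any chat-capable model
    .messages([ChatCompletionRequestUserMessageArgs::default()
        .content("Tell me the recipe of alfredo pasta")
        .build()
        .unwrap()
        .into()])
    .max_tokens(40_u32)
    .build()
    .unwrap();

// Call the chat "group" instead of completions
let response = client.chat().create(request).await.unwrap();

// Chat responses carry the text inside a message, not a bare `text` field
if let Some(choice) = response.choices.first() {
    println!("{}", choice.message.content.as_deref().unwrap_or(""));
}
```

As with the completions example, this must run inside an async context and requires a valid API key; the `.unwrap()` calls are for brevity and real code would propagate the crate's error type instead.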

§Examples

For full working examples of all supported features, see the examples directory in the original async-openai repository. Also see the wasm examples.

Modules§

  • Client configurations: OpenAIConfig for OpenAI, AzureConfig for Azure OpenAI Service.
  • Errors originating from API calls, parsing responses, and reading or writing to the file system.
  • Types used in OpenAI API requests and responses. These types are created from component schemas in the OpenAPI spec.

Structs§

  • Files attached to an assistant.
  • Build assistants that can call models and use tools to perform tasks.
  • Turn audio into text or text into audio. Related guide: Speech to text
  • Logs of user actions and configuration changes within this organization. To log events, you must activate logging in the Organization Settings. Once activated, for security reasons, logging cannot be deactivated.
  • Create large batches of API requests for asynchronous processing. The Batch API returns completions within 24 hours for a 50% discount.
  • Given a list of messages comprising a conversation, the model will return a response.
  • Client is a container for config and http_client used to make API calls.
  • Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position. We recommend most users use our Chat completions API. Learn more
  • Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms.
  • Files are used to upload documents that can be used with features like Assistants and Fine-tuning.
  • Manage fine-tuning jobs to tailor a model to your specific training data.
  • Given a prompt and/or an input image, the model will generate a new image.
  • Invite and manage invitations for an organization. Invited users are automatically added to the Default project.
  • Files attached to a message.
  • Represents a message within a thread.
  • List and describe the various models available in the API. You can refer to the Models documentation to understand what models are available and the differences between them.
  • Given text and/or image inputs, classifies if those inputs are potentially harmful across several categories.
  • Manage API keys for a given project. Supports listing and deleting keys for users. This API does not allow issuing keys for users, as users need to authorize themselves to generate keys.
  • Manage service accounts within a project. A service account is a bot user that is not associated with a user. If a user leaves an organization, their keys and membership in projects will no longer work. Service accounts do not have this limitation. However, service accounts can also be deleted from a project.
  • Manage users within a project, including adding, updating roles, and removing users. Users cannot be removed from the Default project, unless they are being removed from the organization.
  • Manage the projects within an organization, including creation, updating, and archiving of projects. The Default project cannot be modified or archived.
  • Represents an execution run on a thread.
  • Represents a step in execution of a run.
  • Create threads that assistants can interact with.
  • Allows you to upload large files in multiple parts.
  • Manage users and their role in an organization. Users will be automatically added to the Default project.
  • Vector store file batches represent operations to add multiple files to a vector store.
  • Vector store files represent files inside a vector store.