Client implementation for interacting with Ollama LLMs.
This module provides functionality to interact with locally running Ollama LLMs, with a particular focus on the Llama model family. It supports the chat, generation, and embeddings API endpoints, with chat-based interactions as the primary use case.
The client handles URL management, request building, and response parsing while providing sensible defaults and helpful warnings when using fallback configurations.
§Examples
use learner::llm::{LlamaRequest, Model, OllamaEndpoint};

// The request is sent asynchronously, so it must run inside an async context.
// The error type here is illustrative; substitute the crate's actual error type.
async fn ask() -> Result<(), Box<dyn std::error::Error>> {
    let request = LlamaRequest::new()
        .with_host("http://localhost:11434")
        .with_endpoint(OllamaEndpoint::Chat)
        .with_model(Model::Llama3p2c3b)
        .with_message("What is quantum computing?");

    let response = request.send().await?;
    println!("Response: {}", response.message.content);
    Ok(())
}
Structs§
- Request builder for Ollama LLM interactions.
- Response structure from Ollama LLM requests.
- Message structure for LLM interactions.
- Configuration options for LLM inference.
Enums§
- Available models for use with the Ollama service.
- Available API endpoints for the Ollama service.
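As a rough illustration of the URL management mentioned in the description, the sketch below shows how endpoint variants could resolve to Ollama's REST paths. The enum here is a stand-in, not the crate's actual OllamaEndpoint definition: only a chat variant is confirmed by the example above, while the generate and embeddings variants are assumed from the endpoint list, and the paths follow the public Ollama REST API.

// Hypothetical stand-in for the crate's endpoint enum.
enum Endpoint {
    Chat,
    Generate,
    Embeddings,
}

// Map an endpoint variant to the corresponding Ollama API path.
fn path(endpoint: &Endpoint) -> &'static str {
    match endpoint {
        Endpoint::Chat => "/api/chat",
        Endpoint::Generate => "/api/generate",
        Endpoint::Embeddings => "/api/embeddings",
    }
}

// Join a host such as "http://localhost:11434" with the endpoint path.
fn full_url(host: &str, endpoint: &Endpoint) -> String {
    format!("{}{}", host.trim_end_matches('/'), path(endpoint))
}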