ML framework for Rust
```rust
use candle_core::{Device, Tensor};

let a = Tensor::arange(0f32, 6f32, &Device::Cpu)?.reshape((2, 3))?;
let b = Tensor::arange(0f32, 12f32, &Device::Cpu)?.reshape((3, 4))?;
let c = a.matmul(&b)?;
```
§Features
- Simple syntax (looks and feels like PyTorch)
- CPU and CUDA backends (and M1 support); see the device-selection sketch after this list
- Serverless (CPU-only), small and fast deployments
- Model training
- Distributed computing (NCCL)
- Models out of the box (Llama, Whisper, Falcon, …)
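As a hedged illustration of the CPU/CUDA backend point above, the sketch below picks GPU 0 when a CUDA device is available and falls back to the CPU otherwise; `Device::cuda_if_available` and `Tensor::randn` are the only APIs it relies on.

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    // Use GPU 0 when the crate is built with CUDA support and a device is present,
    // otherwise fall back to the CPU backend.
    let device = Device::cuda_if_available(0)?;
    let x = Tensor::randn(0f32, 1.0, (2, 3), &device)?;
    println!("{x}");
    Ok(())
}
```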
§FAQ
- Why Candle?
Candle stems from the need to reduce binary size in order to make serverless inference possible, by keeping the whole engine much smaller than PyTorch's very large library footprint.
It also removes Python from production workloads: Python can add real overhead in more complex workflows, and the GIL is a notorious source of headaches.
Finally, Rust is cool, and a lot of the HF ecosystem already has Rust crates, such as safetensors and tokenizers.
§Other Crates
Candle consists of a number of crates. This crate holds the core common data structures, but you may wish to look at the docs for the other crates, which can be found here:
- candle-core. Core data structures and data types.
- candle-nn. Building blocks for Neural Nets.
- candle-datasets. Rust access to commonly used Datasets like MNIST.
- candle-examples. Examples of Candle in Use.
- candle-onnx. Loading and using ONNX models.
- candle-pyo3. Access to Candle from Python.
- candle-transformers. Candle implementation of many published transformer models.
§Re-exports
pub use cpu_backend::CpuStorage;
pub use cpu_backend::CpuStorageRef;
pub use error::Error;
pub use error::Result;
pub use layout::Layout;
pub use shape::Shape;
pub use shape::D;
pub use streaming::StreamTensor;
pub use streaming::StreamingBinOp;
pub use streaming::StreamingModule;
pub use dummy_cuda_backend as cuda;
pub use cuda::CudaDevice;
pub use cuda::CudaStorage;
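For instance, the re-exported `Result` alias and the `D` dimension helper combine naturally when writing small tensor utilities. A minimal sketch, assuming `sum` accepts a single dimension specifier such as `D::Minus1` (the last dimension):

```rust
use candle_core::{D, Device, Result, Tensor};

// Sum over the last dimension, whatever rank the tensor has.
fn sum_last_dim(t: &Tensor) -> Result<Tensor> {
    t.sum(D::Minus1)
}

fn main() -> Result<()> {
    let t = Tensor::arange(0f32, 6f32, &Device::Cpu)?.reshape((2, 3))?;
    println!("{}", sum_last_dim(&t)?); // expected shape: (2,)
    Ok(())
}
```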
§Modules
- Numpy support for tensors.
- The shape of a tensor is a tuple with the size of each of its dimensions.
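Following the NumPy-support summary above, a hedged sketch of a round-trip through a `.npy` file; `Tensor::write_npy` and `Tensor::read_npy` are assumed to be the helpers that module provides:

```rust
use candle_core::{Device, Result, Tensor};

fn main() -> Result<()> {
    let t = Tensor::arange(0f32, 6f32, &Device::Cpu)?.reshape((2, 3))?;
    // Assumed npy helpers: write the tensor to a .npy file and read it back.
    t.write_npy("t.npy")?;
    let back = Tensor::read_npy("t.npy")?;
    println!("{back}");
    Ok(())
}
```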
§Macros
§Structs
- An iterator over offset positions for items of an N-dimensional array stored in a flat buffer using some potential strides.
- The core struct for manipulating tensors.
- Unique identifier for tensors.
- A variable is a wrapper around a tensor, however variables can have their content modified whereas tensors are immutable.
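To make the tensor/variable distinction above concrete, here is a minimal sketch, assuming the `Var::from_tensor` and `Var::set` style of API for overwriting a variable's contents in place:

```rust
use candle_core::{DType, Device, Result, Tensor, Var};

fn main() -> Result<()> {
    let t = Tensor::zeros((2, 3), DType::F32, &Device::Cpu)?;
    // A plain Tensor is immutable; wrapping it in a Var allows its contents
    // to be replaced, which is what training/optimizer code relies on.
    let v = Var::from_tensor(&t)?;
    v.set(&Tensor::ones((2, 3), DType::F32, &Device::Cpu)?)?;
    println!("{}", v.as_tensor());
    Ok(())
}
```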
§Enums
- The different types of elements allowed in tensors.
- A DeviceLocation represents a physical device whereas multiple Device can live on the same location (typically for cuda devices).
- Generic structure used to index a slice of the tensor.
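A small sketch tying the two enums together: a tensor carries a `DType` for its element type and lives on a `Device`, whose `location()` is assumed here to report the corresponding `DeviceLocation`:

```rust
use candle_core::{DType, Device, Result, Tensor};

fn main() -> Result<()> {
    let t = Tensor::arange(0u32, 6u32, &Device::Cpu)?;
    // Convert the integer tensor to f32; DType enumerates the supported element types.
    let f = t.to_dtype(DType::F32)?;
    println!("{:?} on {:?}", f.dtype(), f.device().location());
    Ok(())
}
```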
§Traits
- Unary ops that can be defined in user-land.
- Trait used to implement multiple signatures for ease of use of the slicing of a tensor
- Unary ops that can be defined in user-land. These ops work in place and as such back-prop is unsupported.
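The slicing trait described above is presumably the one behind the `.i(...)` indexing syntax (`IndexOp` in candle_core); a minimal sketch of the multiple index signatures it accepts:

```rust
use candle_core::{Device, IndexOp, Result, Tensor};

fn main() -> Result<()> {
    let t = Tensor::arange(0f32, 12f32, &Device::Cpu)?.reshape((3, 4))?;
    // A single usize selects a row...
    let row = t.i(1)?;
    // ...while a tuple of indexers slices several dimensions at once.
    let block = t.i((0..2, 3))?;
    println!("{row}\n{block}");
    Ok(())
}
```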