pub struct LlamaContext<'a> {
pub model: &'a LlamaModel,
/* private fields */
}
Safe wrapper around llama_context.
Fields§
§model: &'a LlamaModel
A reference to the context's model.
Implementations§
impl LlamaContext<'_>
pub fn copy_cache(&mut self, src: i32, dest: i32, size: i32)
Copy the cache from one sequence to another.
§Parameters
- src: The sequence id to copy the cache from.
- dest: The sequence id to copy the cache to.
- size: The size of the cache to copy.
pub fn copy_kv_cache_seq(
    &mut self,
    src: i32,
    dest: i32,
    p0: Option<u32>,
    p1: Option<u32>,
) -> Result<(), KvCacheConversionError>
Copy the cache from one sequence to another.
§Returns
A Result indicating whether the operation was successful.
§Parameters
- src: The sequence id to copy the cache from.
- dest: The sequence id to copy the cache to.
- p0: The start position of the cache to copy. If None, the entire cache is copied up to p1.
- p1: The end position of the cache to copy. If None, the entire cache is copied starting from p0.
§Errors
If either position exceeds i32::MAX.
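The Option<u32> positions have to be converted into llama.cpp's signed position type, which is why positions above i32::MAX are an error. A plausible, self-contained sketch of that conversion, assuming None maps to the -1 "unbounded" sentinel that llama.cpp uses for range endpoints (the error type here is a stand-in, not the crate's real KvCacheConversionError):

```rust
// Sketch of converting an Option<u32> range endpoint into llama.cpp's
// signed position. -1 conventionally means "no bound" in llama.cpp.
// `ConversionError` is a stand-in for illustration only.
#[derive(Debug, PartialEq)]
struct ConversionError;

fn to_llama_pos(p: Option<u32>) -> Result<i32, ConversionError> {
    match p {
        None => Ok(-1), // unbounded endpoint
        Some(v) => i32::try_from(v).map_err(|_| ConversionError),
    }
}

fn main() {
    assert_eq!(to_llama_pos(None), Ok(-1));
    assert_eq!(to_llama_pos(Some(42)), Ok(42));
    // Values above i32::MAX cannot be represented and must error.
    assert_eq!(to_llama_pos(Some(u32::MAX)), Err(ConversionError));
    println!("ok");
}
```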
pub fn clear_kv_cache_seq(
    &mut self,
    src: Option<u32>,
    p0: Option<u32>,
    p1: Option<u32>,
) -> Result<bool, KvCacheConversionError>
Clear the KV cache for the given sequence within the specified range [p0, p1).
Returns false only when partial sequence removals fail. Full sequence removals always succeed.
§Returns
A Result indicating whether the operation was successful. If the sequence id or either position exceeds the maximum i32 value, no removal is attempted and an Err is returned.
§Parameters
- src: The sequence id to clear the cache for. If None, matches all sequences.
- p0: The start position of the cache to clear. If None, the entire cache is cleared up to p1.
- p1: The end position of the cache to clear. If None, the entire cache is cleared from p0.
§Errors
If the sequence id or either position exceeds i32::MAX.
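The [p0, p1) range is half-open: p0 is included, p1 is not, and a None endpoint is unbounded on that side. A minimal, self-contained sketch of these semantics on a plain vector of positions (not the crate's implementation):

```rust
// Half-open [p0, p1) range semantics used by the KV cache range
// operations. `None` endpoints mean "unbounded" on that side.
fn in_range(p: u32, p0: Option<u32>, p1: Option<u32>) -> bool {
    p0.map_or(true, |start| p >= start) && p1.map_or(true, |end| p < end)
}

// Removes every position falling inside [p0, p1).
fn clear_range(positions: &mut Vec<u32>, p0: Option<u32>, p1: Option<u32>) {
    positions.retain(|&p| !in_range(p, p0, p1));
}

fn main() {
    let mut pos: Vec<u32> = (0..8).collect();
    clear_range(&mut pos, Some(2), Some(5)); // removes 2, 3, 4; keeps 5
    assert_eq!(pos, vec![0, 1, 5, 6, 7]);

    let mut pos2: Vec<u32> = (0..4).collect();
    clear_range(&mut pos2, None, None); // both unbounded: clears everything
    assert!(pos2.is_empty());
    println!("ok");
}
```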
pub fn get_kv_cache_used_cells(&self) -> i32
Returns the number of used KV cells (i.e. those with at least one sequence assigned to them).
pub fn clear_kv_cache(&mut self)
Clear the KV cache.
pub fn llama_kv_cache_seq_keep(&mut self, seq_id: i32)
Removes all tokens that do not belong to the specified sequence.
§Parameters
- seq_id: The sequence id to keep.
pub fn kv_cache_seq_add(
    &mut self,
    seq_id: i32,
    p0: Option<u32>,
    p1: Option<u32>,
    delta: i32,
) -> Result<(), KvCacheConversionError>
Adds relative position "delta" to all tokens that belong to the specified sequence and have positions in [p0, p1).
If the KV cache is RoPEd, the KV data is updated accordingly:
- lazily on the next LlamaContext::decode
- explicitly with Self::kv_cache_update
§Returns
A Result indicating whether the operation was successful.
§Parameters
- seq_id: The sequence id to update.
- p0: The start position of the cache to update. If None, the entire cache is updated up to p1.
- p1: The end position of the cache to update. If None, the entire cache is updated starting from p0.
- delta: The relative position to add to the tokens.
§Errors
If either position exceeds i32::MAX.
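A common use of kv_cache_seq_add is "context shifting": after evicting the oldest tokens of a sequence, the remaining positions are shifted down by the same amount so generation can continue past the context limit. A self-contained sketch of the position arithmetic only (the real call also updates the RoPE'd KV data):

```rust
// Position bookkeeping behind a typical context shift: evict the first
// `n_evict` positions, then add delta = -n_evict to the survivors.
// This models the arithmetic only; the real kv_cache_seq_add also
// re-ropes the cached keys.
fn shift_positions(positions: &mut Vec<i32>, n_evict: i32) {
    positions.retain(|&p| p >= n_evict); // clear [0, n_evict)
    for p in positions.iter_mut() {
        *p -= n_evict; // shift [n_evict, end) down to start at 0
    }
}

fn main() {
    let mut pos: Vec<i32> = (0..6).collect();
    shift_positions(&mut pos, 2);
    // Positions 2..6 survive and are renumbered 0..4.
    assert_eq!(pos, vec![0, 1, 2, 3]);
    println!("ok");
}
```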
pub fn kv_cache_seq_div(
    &mut self,
    seq_id: i32,
    p0: Option<u32>,
    p1: Option<u32>,
    d: NonZeroU8,
) -> Result<(), KvCacheConversionError>
Integer division of the positions by factor of d > 1.
If the KV cache is RoPEd, the KV data is updated accordingly:
- lazily on the next LlamaContext::decode
- explicitly with Self::kv_cache_update
§Returns
A Result indicating whether the operation was successful.
§Parameters
- seq_id: The sequence id to update.
- p0: The start position of the cache to update. If None, the entire cache is updated up to p1.
- p1: The end position of the cache to update. If None, the entire cache is updated starting from p0.
- d: The factor to divide the positions by.
§Errors
If either position exceeds i32::MAX.
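Dividing positions by an integer factor compresses a block of the cache into fewer distinct positions, which is the primitive behind grouped-position context extension tricks. The position arithmetic alone, sketched on a plain vector (not the crate's implementation):

```rust
use std::num::NonZeroU8;

// Integer-divides the positions falling in [p0, p1) by `d`, mirroring
// the documented semantics (position bookkeeping only; the real call
// also updates RoPE'd KV data).
fn div_positions(positions: &mut [i32], p0: i32, p1: i32, d: NonZeroU8) {
    let d = i32::from(d.get());
    for p in positions.iter_mut() {
        if *p >= p0 && *p < p1 {
            *p /= d;
        }
    }
}

fn main() {
    let mut pos: Vec<i32> = vec![0, 1, 2, 3, 4, 5, 6, 7];
    // Halve positions 4..8: several tokens now share a position.
    div_positions(&mut pos, 4, 8, NonZeroU8::new(2).unwrap());
    assert_eq!(pos, vec![0, 1, 2, 3, 2, 2, 3, 3]);
    println!("ok");
}
```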
pub fn kv_cache_seq_pos_max(&self, seq_id: i32) -> i32
Returns the largest position present in the KV cache for the specified sequence.
§Parameters
- seq_id: The sequence id to get the max position for.
pub fn kv_cache_defrag(&mut self)
Defragment the KV cache. This will be applied:
- lazily on the next LlamaContext::decode
- explicitly with Self::kv_cache_update
pub fn kv_cache_update(&mut self)
Apply the KV cache updates (such as K-shifts, defragmentation, etc.).
pub fn get_kv_cache_token_count(&self) -> i32
Returns the number of tokens in the KV cache (slow, use only for debug). If a KV cell has multiple sequences assigned to it, it will be counted multiple times.
pub fn new_kv_cache_view(&self, n_max_seq: i32) -> KVCacheView<'_>
Create an empty KV cache view (use only for debugging purposes).
§Parameters
- n_max_seq: Maximum number of sequences that can exist in a cell. It's not an error if there are more sequences in a cell than this value; however, they will not be visible in the view's cells_sequences.
impl LlamaContext<'_>
pub fn save_session_file(
    &self,
    path_session: impl AsRef<Path>,
    tokens: &[LlamaToken],
) -> Result<(), SaveSessionError>
Save the current session to a file.
§Parameters
- path_session: The file to save to.
- tokens: The tokens to associate the session with. This should be a prefix of a sequence of tokens that the context has processed, so that the relevant KV caches are already filled.
§Errors
Fails if the path is not valid UTF-8, is not a valid C string, or llama.cpp fails to save the session file.
pub fn load_session_file(
    &mut self,
    path_session: impl AsRef<Path>,
    max_tokens: usize,
) -> Result<Vec<LlamaToken>, LoadSessionError>
Load a session file into the current context.
You still need to pass the returned tokens to the context for inference to work. What this function buys you is that the KV caches are already filled with the relevant data.
§Parameters
- path_session: The file to load from. It must be a session file from a compatible context, otherwise the function will error.
- max_tokens: The maximum token length of the loaded session. If the session was saved with a longer length, the function will error.
§Errors
Fails if the path is not valid UTF-8, is not a valid C string, or llama.cpp fails to load the session file (e.g. the file does not exist or is not a session file).
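Since the restored KV cache only helps where the session tokens match the new prompt, a typical caller computes the common prefix between the returned session tokens and the new token sequence, then decodes only the tail. A hypothetical helper sketching that logic, with the token type simplified to i32 (reusable_prefix_len is not part of the crate's API):

```rust
// Hypothetical helper: how many leading tokens of `prompt` are already
// covered by the restored session and therefore need no re-decoding.
// LlamaToken is simplified to i32 for this sketch.
fn reusable_prefix_len(session: &[i32], prompt: &[i32]) -> usize {
    session
        .iter()
        .zip(prompt.iter())
        .take_while(|(a, b)| a == b)
        .count()
}

fn main() {
    let session = [1, 2, 3, 4];
    let prompt = [1, 2, 3, 9, 9];
    // Only prompt[3..] (the two 9s) still needs to be decoded.
    assert_eq!(reusable_prefix_len(&session, &prompt), 3);
    println!("ok");
}
```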
pub fn get_state_size(&self) -> usize
Returns the maximum size in bytes of the state (rng, logits, embedding and kv_cache). The actual size will often be smaller after compacting tokens.
pub unsafe fn copy_state_data(&self, dest: *mut u8) -> usize
Copies the state to the specified destination address.
Returns the number of bytes copied.
§Safety
The destination must have enough memory allocated.
pub unsafe fn set_state_data(&mut self, src: &[u8]) -> usize
Set the state reading from the specified address. Returns the number of bytes read.
§Safety
help wanted: not entirely sure what the safety requirements are here.
impl<'model> LlamaContext<'model>
pub fn n_batch(&self) -> u32
Gets the max number of logical tokens that can be submitted to decode. Must be greater than or equal to Self::n_ubatch.
pub fn n_ubatch(&self) -> u32
Gets the max number of physical tokens (hardware level) to decode in a batch. Must be less than or equal to Self::n_batch.
pub fn decode(&mut self, batch: &mut LlamaBatch) -> Result<(), DecodeError>
Decodes the batch.
§Errors
DecodeError if the decoding failed.
§Panics
- the returned std::ffi::c_int from llama-cpp does not fit into an i32 (this should never happen on most systems)
pub fn encode(&mut self, batch: &mut LlamaBatch) -> Result<(), EncodeError>
Encodes the batch.
§Errors
EncodeError if the encoding failed.
§Panics
- the returned std::ffi::c_int from llama-cpp does not fit into an i32 (this should never happen on most systems)
pub fn embeddings_seq_ith(&self, i: i32) -> Result<&[f32], EmbeddingsError>
Get the embeddings for the i-th sequence in the current context.
§Returns
A slice containing the embeddings for the last decoded batch. The size corresponds to the n_embd parameter of the context's model.
§Errors
- When the current context was constructed without enabling embeddings.
- If the current model had a pooling type of llama_cpp_sys_2::LLAMA_POOLING_TYPE_NONE.
- If the given sequence index exceeds the max sequence id.
§Panics
- n_embd does not fit into a usize
pub fn embeddings_ith(&self, i: i32) -> Result<&[f32], EmbeddingsError>
Get the embeddings for the i-th token in the current context.
§Returns
A slice containing the embeddings for the last decoded batch of the given token. The size corresponds to the n_embd parameter of the context's model.
§Errors
- When the current context was constructed without enabling embeddings.
- When the given token didn't have logits enabled when it was passed.
- If the given token index exceeds the max token id.
§Panics
- n_embd does not fit into a usize
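The returned embeddings are raw model output; callers comparing them with cosine similarity typically L2-normalize them first. A self-contained sketch of that common post-processing step (l2_normalize is not part of the crate's API):

```rust
// L2-normalizes an embedding slice in place so that plain dot products
// become cosine similarities. Not part of the crate's API; a common
// post-processing step on the &[f32] returned by embeddings_seq_ith.
fn l2_normalize(embedding: &mut [f32]) {
    let norm: f32 = embedding.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        for x in embedding.iter_mut() {
            *x /= norm;
        }
    }
}

fn main() {
    let mut e = vec![3.0_f32, 4.0];
    l2_normalize(&mut e);
    // A 3-4-5 triangle normalizes to (0.6, 0.8).
    assert!((e[0] - 0.6).abs() < 1e-6);
    assert!((e[1] - 0.8).abs() < 1e-6);
    println!("ok");
}
```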
pub fn candidates(&self) -> impl Iterator<Item = LlamaTokenData> + '_
pub fn token_data_array(&self) -> LlamaTokenDataArray
pub fn get_logits(&self) -> &[f32]
Token logits obtained from the last call to decode().
The logits for which batch.logits[i] != 0 are stored contiguously in the order they appeared in the batch.
Rows: number of tokens for which batch.logits[i] != 0
Cols: n_vocab
§Returns
A slice containing the logits for the last decoded token. The size corresponds to the n_vocab parameter of the context's model.
§Panics
- n_vocab does not fit into a usize
- token data returned is null
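The rows/columns description above implies a row-major layout: the flattened buffer holds one n_vocab-sized row per token that had logits enabled, in batch order. A sketch of the indexing arithmetic (buffer contents and sizes are made up for illustration):

```rust
// Row-major indexing into a flattened logits buffer: row r is the r-th
// token with logits enabled, column v is the vocabulary id.
fn logit_index(row: usize, vocab_id: usize, n_vocab: usize) -> usize {
    row * n_vocab + vocab_id
}

fn main() {
    let n_vocab = 4;
    // Two tokens had logits enabled: rows 0 and 1, stored contiguously.
    let logits: Vec<f32> = vec![
        0.0, 0.1, 0.2, 0.3, // row 0
        1.0, 1.1, 1.2, 1.3, // row 1
    ];
    assert_eq!(logits[logit_index(1, 2, n_vocab)], 1.2);
    assert_eq!(logits.len(), 2 * n_vocab);
    println!("ok");
}
```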
pub fn candidates_ith(&self, i: i32) -> impl Iterator<Item = LlamaTokenData> + '_
pub fn token_data_array_ith(&self, i: i32) -> LlamaTokenDataArray
pub fn get_logits_ith(&self, i: i32) -> &[f32]
Get the logits for the i-th token in the context.
§Panics
- i is greater than n_ctx
- n_vocab does not fit into a usize
- logit i is not initialized
pub fn reset_timings(&mut self)
Reset the timings for the context.
pub fn timings(&mut self) -> LlamaTimings
Returns the timings for the context.