pub struct LRUCache<K, V> { /* private fields */ }
LRU Cache implementation using intrusive collections.

This cache uses an intrusive_collections::LinkedList to maintain the LRU order, and an intrusive_collections::RBTree to allow for efficient lookups by key, while maintaining only one allocation per key-value pair. Unfortunately, this is a linked structure, so cache locality is likely poor, but memory usage and flexibility are improved.
The cache is unbounded by default, but can be limited to a maximum capacity.
§Example
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
lru.insert("a", "1");
lru.insert("b", "2");
lru.insert("c", "3");
let _ = lru.get("b"); // updates LRU order
assert_eq!(lru.pop(), Some(("a", "1")));
assert_eq!(lru.pop(), Some(("c", "3")));
assert_eq!(lru.pop(), Some(("b", "2")));
assert_eq!(lru.pop(), None);
§Notes
- The cache is not thread-safe, and requires external synchronization.
- Cloning the cache will preserve the LRU order.
Implementations§
impl<K, V> LRUCache<K, V>
pub fn new() -> Self
Creates a new unbounded LRU cache.
This cache has no limit on the number of entries it can hold, so entries must be manually removed via pop, or you can use set_max_capacity to set a limit.
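A minimal sketch (not from the crate's own docs) of the unbounded default, using only the methods documented on this page:

use intrusive_lru_cache::LRUCache;

// Unbounded by default: nothing is evicted automatically.
let mut lru: LRUCache<u32, &'static str> = LRUCache::new();
lru.insert(1, "one");
lru.insert(2, "two");

// Entries only leave when removed explicitly, e.g. via `pop` ...
assert_eq!(lru.pop(), Some((1, "one")));

// ... or after an upper limit is set with `set_max_capacity`.
lru.set_max_capacity(1);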
pub fn new_with_max_capacity(max_capacity: usize) -> Self
Creates a new LRU cache with a maximum capacity, after which old entries will be evicted to make room for new ones.
This does not preallocate any memory, only sets an upper limit.
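A small sketch of capacity-based eviction, assuming eviction is triggered by the insert that would exceed the limit:

use intrusive_lru_cache::LRUCache;

// Hold at most two entries; the least recently used entry is
// evicted when a new insert would exceed the limit.
let mut lru: LRUCache<&'static str, u32> = LRUCache::new_with_max_capacity(2);
lru.insert("a", 1);
lru.insert("b", 2);
lru.insert("c", 3); // evicts "a", the least recently used entry

assert_eq!(lru.peek("a"), None);
assert_eq!(lru.peek("c"), Some(&3));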
impl<K, V> LRUCache<K, V> where K: Ord + 'static
pub fn get<'a, Q>(&'a mut self, key: &Q) -> Option<&'a mut V>
Returns a reference to the value corresponding to the key, and bumps the key to the front of the LRU list.
This returns a mutable reference to the value because we already have a mutable reference to the cache.
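A short sketch of the mutable access and the LRU bump, reusing the pattern from the example above:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, String> = LRUCache::default();
lru.insert("a", String::from("1"));
lru.insert("b", String::from("2"));

// `get` hands back `&mut V` and marks "a" as most recently used.
if let Some(value) = lru.get("a") {
    value.push_str("!");
}

// "b" is now the least recently used entry.
assert_eq!(lru.pop(), Some(("b", String::from("2"))));
assert_eq!(lru.pop(), Some(("a", String::from("1!"))));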
pub fn peek<'a, Q>(&'a self, key: &Q) -> Option<&'a V>
Returns a reference to the value corresponding to the key, without updating the LRU list.
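In contrast to get, a sketch showing that peek leaves the LRU order untouched:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
lru.insert("a", "1");
lru.insert("b", "2");

// `peek` does not bump "a", so it stays the least recently used entry.
assert_eq!(lru.peek("a"), Some(&"1"));
assert_eq!(lru.pop(), Some(("a", "1")));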
pub fn insert(&mut self, key: K, value: V) -> Option<V>
Inserts a key-value pair into the cache, returning the old value if the key was already present.
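A minimal sketch of the return value on replacement:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, u32> = LRUCache::default();

assert_eq!(lru.insert("a", 1), None);    // fresh key: nothing returned
assert_eq!(lru.insert("a", 2), Some(1)); // same key: old value handed back
assert_eq!(lru.peek("a"), Some(&2));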
pub fn remove<Q>(&mut self, key: &Q) -> Option<V>
Removes the value corresponding to the key from the cache, returning it if it was present.
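A minimal sketch of removal:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, u32> = LRUCache::default();
lru.insert("a", 1);

assert_eq!(lru.remove("a"), Some(1)); // the entry is gone afterwards
assert_eq!(lru.remove("a"), None);
assert_eq!(lru.peek("a"), None);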
pub fn insert_or_get(&mut self, key: K, value: V) -> InsertOrGetResult<'_, K, V>
Inserts a key-value pair into the cache only if it wasn't already present; otherwise, updates the LRU order for the existing entry and returns a reference to its value.
The returned value contains a mutable reference to the value, and if the key already existed, it also contains the key and value that were passed in.
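A minimal sketch that checks only the end state and ignores the returned InsertOrGetResult:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, u32> = LRUCache::default();
lru.insert("a", 1);

// "a" already exists: its LRU order is bumped, the stored value stays 1,
// and the rejected ("a", 2) pair is handed back inside the result.
let _ = lru.insert_or_get("a", 2);
assert_eq!(lru.peek("a"), Some(&1));

// "b" was absent, so this behaves like a plain insert.
let _ = lru.insert_or_get("b", 3);
assert_eq!(lru.peek("b"), Some(&3));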
impl<K, V> LRUCache<K, V>
pub fn set_max_capacity(&mut self, max_capacity: usize)
Sets the maximum capacity of the cache.
This does not remove any entries, but will cause the cache to evict entries when inserting new ones if the length exceeds the new capacity.
Use shrink to manually trigger removal of entries to meet the new capacity.
pub fn shrink(&mut self)
Removes the oldest entries from the cache until the length is less than or equal to the maximum capacity.
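A sketch combining set_max_capacity and shrink, mirroring the shrink_with example below:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, u32> = LRUCache::default();
lru.insert("a", 1);
lru.insert("b", 2);
lru.insert("c", 3);

// Lowering the capacity does not evict anything by itself.
lru.set_max_capacity(1);
assert_eq!(lru.peek("a"), Some(&1));

// `shrink` drops the oldest entries until the limit is met.
lru.shrink();
assert_eq!(lru.peek("a"), None);
assert_eq!(lru.peek("b"), None);
assert_eq!(lru.peek("c"), Some(&3));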
pub fn shrink_with<F>(&mut self, cb: F) where F: FnMut(K, V)
Removes the oldest entries from the cache until the length is less than or equal to the maximum capacity, and calls the provided closure with the removed key-value pairs.
§Example
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
lru.insert("a", "1");
lru.insert("b", "2");
lru.insert("c", "3");
lru.set_max_capacity(1);
let mut removed = Vec::new();
lru.shrink_with(|key, value| {
removed.push((key, value));
});
assert_eq!(removed, vec![("a", "1"), ("b", "2")]);
pub fn pop(&mut self) -> Option<(K, V)>
Removes and returns the least recently used key-value pair.
This is an O(1) operation.
pub fn iter_lru(&self) -> impl DoubleEndedIterator<Item = (&K, &V)>
Returns an iterator over the key-value pairs in the cache, in order of most recently used to least recently used.
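A short sketch of the iteration order, assuming `get` bumps the accessed key to the front as documented above:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, u32> = LRUCache::default();
lru.insert("a", 1);
lru.insert("b", 2);
let _ = lru.get("a"); // "a" becomes the most recently used entry

let order: Vec<_> = lru.iter_lru().map(|(k, _)| *k).collect();
assert_eq!(order, ["a", "b"]); // most recently used first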
pub fn iter_ord(&self) -> impl DoubleEndedIterator<Item = (&K, &V)>
Returns an iterator over the key-value pairs in the cache, in key order (as defined by Ord).
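A short sketch contrasting this with iter_lru:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, u32> = LRUCache::default();
lru.insert("c", 3);
lru.insert("a", 1);
lru.insert("b", 2);

// Iteration follows the key ordering, regardless of insertion or usage order.
let keys: Vec<_> = lru.iter_ord().map(|(k, _)| *k).collect();
assert_eq!(keys, ["a", "b", "c"]);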
Trait Implementations§
impl<K, V> Extend<(K, V)> for LRUCache<K, V> where K: Ord + 'static
fn extend<T>(&mut self, iter: T) where T: IntoIterator<Item = (K, V)>
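A sketch of bulk insertion through Extend, assuming the pairs are inserted in iteration order (so the first pair is the oldest):

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, u32> = LRUCache::default();
lru.extend([("a", 1), ("b", 2), ("c", 3)]);

assert_eq!(lru.peek("b"), Some(&2));
assert_eq!(lru.pop(), Some(("a", 1))); // first extended pair is the oldest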
fn extend_one(&mut self, item: A)
Nightly-only experimental API (extend_one).
fn extend_reserve(&mut self, additional: usize)
Nightly-only experimental API (extend_one).