pub struct LRUCache<K, V> { /* private fields */ }
LRU Cache implementation using intrusive collections.
This cache uses an intrusive_collections::LinkedList to maintain the LRU order, and an intrusive_collections::RBTree to allow for efficient lookups by key, while maintaining only one allocation per key-value pair. Unfortunately, this is a linked structure, so cache locality is likely poor, but memory usage and flexibility are improved.
The cache is unbounded by default, but can be limited to a maximum capacity.
The smart_* methods allow for reading or updating the LRU order at the same time, based on how the value is accessed. The get method always updates the LRU order, and the peek_* methods allow for reading without updating the LRU order.
An overarching principle here is that the ‘Used’ in Least-Recently-Used is defined by mutable access to the value. This allows for a more flexible API, where the LRU order is updated only when necessary, and not on every read access.
§Example
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
lru.insert("a", "1");
lru.insert("b", "2");
lru.insert("c", "3");
let _ = lru.get("b"); // updates LRU order
assert_eq!(lru.pop(), Some(("a", "1")));
assert_eq!(lru.pop(), Some(("c", "3")));
assert_eq!(lru.pop(), Some(("b", "2")));
assert_eq!(lru.pop(), None);
§Notes
- Cloning preserves LRU order.
- If the atomic crate feature is enabled, the cache is thread-safe if K and V are Send/Sync.
Implementations§
impl<K, V> LRUCache<K, V>
pub const NODE_SIZE: usize = _
Defines the size of the internal node structure in bytes.
The memory footprint of the entire cache can then be calculated as:
LRUCache::<K, V>::NODE_SIZE * cache.len()
    + size_of::<LRUCache<K, V>>() // size of the cache itself
or via memory_footprint.
This is a nice benefit of intrusive collections, as it allows for a single allocation per key-value pair.
pub const fn memory_footprint(&self) -> usize
Returns the total bytes consumed by the cache, including all allocations, internal structures, and the cache itself.
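For illustration only (not taken from the crate docs), a minimal sketch of how NODE_SIZE and memory_footprint relate, mirroring the formula above; exact byte counts depend on the platform and on K and V:
use std::mem::size_of;
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<u64, u64> = LRUCache::unbounded();
lru.insert(1, 10);
lru.insert(2, 20);
// One allocation per key-value pair, plus the cache struct itself.
let expected = LRUCache::<u64, u64>::NODE_SIZE * lru.len() + size_of::<LRUCache<u64, u64>>();
assert_eq!(lru.memory_footprint(), expected);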
pub fn unbounded() -> Self
Creates a new unbounded LRU cache.
This cache has no limit on the number of entries it can hold, so entries must be manually removed via pop, or you can use set_max_capacity to set a limit.
impl<K, V> LRUCache<K, V>
where
    K: Ord + 'static,
pub fn peek<'a, Q>(&'a self, key: &Q) -> Option<&'a V>
Returns a reference to the value corresponding to the key, without updating the LRU list.
This is an O(log n) operation.
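An illustrative sketch (not from the crate docs) contrasting peek with get: peek leaves the eviction order alone, while get bumps the entry:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
lru.insert("a", "1");
lru.insert("b", "2");
// peek does not touch the LRU order, so "a" is still the oldest...
assert_eq!(lru.peek("a"), Some(&"1"));
assert_eq!(lru.peek_oldest(), Some((&"a", &"1")));
// ...whereas get makes "a" the most recently used.
let _ = lru.get("a");
assert_eq!(lru.peek_oldest(), Some((&"b", &"2")));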
pub fn peek_newest(&self) -> Option<(&K, &V)>
Returns a reference to the most recently used key-value pair, without updating the LRU order further.
This is an O(1) operation.
pub fn peek_oldest(&self) -> Option<(&K, &V)>
Returns a reference to the least recently used key-value pair, without updating the LRU order further.
This is an O(1) operation.
pub fn get_newest(&mut self) -> Option<(&K, &mut V)>
Returns a reference to the most recently used key-value pair.
Because this is already the most recently used, it’s free to be accessed without any additional cost or additional modification of the LRU order.
This is an O(1) operation.
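A minimal illustrative sketch (not from the crate docs): the newest entry can be mutated in place without disturbing the LRU order:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, u32> = LRUCache::unbounded();
lru.insert("a", 1);
lru.insert("b", 2);
// "b" was inserted last, so it is the most recently used entry.
if let Some((key, value)) = lru.get_newest() {
    assert_eq!(*key, "b");
    *value += 10;
}
assert_eq!(lru.peek_newest(), Some((&"b", &12)));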
pub fn get<'a, Q>(&'a mut self, key: &Q) -> Option<&'a mut V>
Returns a reference to the value corresponding to the key, and bumps the key to the front of the LRU list.
This is an O(log n) operation.
pub fn promote<Q>(&mut self, key: &Q)
If the key is present in the cache, it is bumped to the front of the LRU list as the most recently used. This will cause it to be the last to be evicted.
Has no effect if the key is not present.
This is an O(log n) operation.
pub fn demote<Q>(&mut self, key: &Q)
If the key is present in the cache, it is demoted to the back of the LRU list as the least recently used. This will cause it to be evicted first if the cache is full.
Has no effect if the key is not present.
This is an O(log n) operation.
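For illustration (a sketch based on the descriptions above, not from the crate docs): promote and demote only reorder entries, and pop then drains from least to most recently used:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
lru.insert("a", "1");
lru.insert("b", "2");
lru.insert("c", "3");
lru.promote("a"); // "a" becomes the most recently used
lru.demote("c");  // "c" becomes the least recently used
assert_eq!(lru.pop(), Some(("c", "3")));
assert_eq!(lru.pop(), Some(("b", "2")));
assert_eq!(lru.pop(), Some(("a", "1")));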
pub fn smart_get<'a, Q>(&'a mut self, key: &Q) -> Option<SmartEntry<'a, K, V>>
Returns a smart reference to the value corresponding to the key, allowing for reading and updating at the same time, only updating the LRU on mutable access.
This does not immediately update the LRU order; only when the value is accessed via SmartEntry::get or SmartEntry::deref_mut.
Immutable access via SmartEntry::peek or SmartEntry::deref does not update the LRU order.
This is an O(log n) operation.
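An illustrative sketch (not from the crate docs), assuming SmartEntry dereferences to the value; the LRU-bumping rules are as described above:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, u32> = LRUCache::unbounded();
lru.insert("a", 1);
lru.insert("b", 2);
if let Some(mut entry) = lru.smart_get("a") {
    // Immutable access via Deref: the LRU order is untouched.
    let _current: u32 = *entry;
    // Mutable access via DerefMut: "a" is bumped to most recently used.
    *entry += 100;
}
assert_eq!(lru.pop(), Some(("b", 2)));
assert_eq!(lru.pop(), Some(("a", 101)));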
pub fn smart_get_oldest(&mut self) -> Option<SmartEntry<'_, K, V>>
Returns a smart reference to the least recently used key-value pair, allowing for reading and updating at the same time, only updating the LRU on mutable access.
This does not immediately update the LRU order; only when the value is accessed via SmartEntry::get or SmartEntry::deref_mut.
This is an O(1) operation.
pub fn peek_range<'a, MIN, MAX>(
    &'a self,
    min: &MIN,
    max: &MAX,
) -> impl DoubleEndedIterator<Item = (&'a K, &'a V)>
Returns an iterator over the key-value pairs in the cache described by the range, without updating the LRU order. The order of iteration is dependent on the order of the keys in the tree via Ord.
The range follows [min, max), where min is inclusive and max is exclusive.
This is an O(log n) operation, though the iterator itself is O(1) to increment.
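An illustrative sketch (not from the crate docs), assuming the usual borrowed-key bounds (so &str can be passed for a &'static str key) and ascending key order for forward iteration; the half-open [min, max) range is as described above:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, u32> = LRUCache::unbounded();
lru.insert("a", 1);
lru.insert("b", 2);
lru.insert("c", 3);
// [min, max): "a" and "b" are yielded in key order, "c" is excluded.
let pairs: Vec<(&&str, &u32)> = lru.peek_range("a", "c").collect();
assert_eq!(pairs, vec![(&"a", &1), (&"b", &2)]);
// The LRU order is untouched, so "a" is still the oldest entry.
assert_eq!(lru.peek_oldest(), Some((&"a", &1)));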
pub fn range<'a, MIN, MAX>(
    &'a mut self,
    min: &MIN,
    max: &MAX,
) -> impl DoubleEndedIterator<Item = (&'a K, &'a mut V)>
Returns an iterator over the key-value pairs in the cache described by the range, and updates the LRU order as they are yielded. The order of iteration is dependent on the order of the keys in the tree via Ord.
The range follows [min, max), where min is inclusive and max is exclusive.
This is an O(log n) operation, though the iterator itself is O(1) to increment.
pub fn smart_range<'a, MIN, MAX>(
    &'a mut self,
    min: &MIN,
    max: &MAX,
) -> impl DoubleEndedIterator<Item = SmartEntry<'a, K, V>>
Returns an iterator over the key-value pairs in the cache described by the range, which allows for reading and updating the LRU order at the same time, only bumping the LRU order when the value is accessed mutably via either SmartEntry::get or SmartEntry::deref_mut.
This is an O(log n) operation, though the iterator itself is O(1) to increment.
pub fn insert(&mut self, key: K, value: V) -> Option<V>
Inserts a key-value pair into the cache, replacing and returning the existing value if the key was already present. In both cases, the entry is moved to the front of the LRU list.
This is an O(log n) operation.
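A brief illustrative sketch (not from the crate docs) of the replacement behavior:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
assert_eq!(lru.insert("a", "1"), None);       // fresh key: nothing returned
assert_eq!(lru.insert("a", "2"), Some("1"));  // existing key: old value returned
assert_eq!(lru.peek("a"), Some(&"2"));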
pub fn contains<Q>(&self, key: &Q) -> bool
Returns true if the cache contains the key.
This does not update the LRU order.
This is an O(log n) operation.
pub fn remove<Q>(&mut self, key: &Q) -> Option<V>
Removes the value corresponding to the key from the cache, returning it if it was present. This has no effect on the order of other entries in the LRU list.
This is an O(log n) operation.
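An illustrative sketch (not from the crate docs), assuming the same borrowed-key lookups as get:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
lru.insert("a", "1");
assert!(lru.contains("a"));
assert_eq!(lru.remove("a"), Some("1")); // removing returns the value
assert!(!lru.contains("a"));
assert_eq!(lru.remove("a"), None);      // already gone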
pub fn get_or_insert<F>(&mut self, key: K, f: F) -> GetOrInsertResult<'_, K, V>
where
    F: FnOnce() -> V,
Attempts to get a mutable reference to the value corresponding to the key, and if it’s not present, inserts a new key-value pair with the value returned by the provided closure. The entry is then moved to the front of the LRU list.
This is similar to insert_or_get, but for cases where the value is likely to be found in the cache already.
See also get_or_insert2 for a version that allows for a borrowed key type.
This is an O(log n) operation.
pub fn get_or_insert2<Q, F>(&mut self, key: &Q, f: F) -> &mut V
Like get_or_insert, but allows for a different borrowed key type, so long as it can be made into an owned key of type K.
Because the key is borrowed initially, we can return &mut V instead of GetOrInsertResult<K, V>, as the key is not consumed.
This is an O(log n) operation.
§Example
use intrusive_lru_cache::LRUCache;
let mut lru = LRUCache::<String, String>::unbounded();
// note that the key is just an `&str`
let v = lru.get_or_insert2("a", || "Hello".to_owned());
v.push_str(", World!");
assert_eq!(lru.pop().unwrap(), ("a".to_owned(), "Hello, World!".to_owned()));
pub fn insert_or_get(&mut self, key: K, value: V) -> InsertOrGetResult<'_, K, V>
Inserts a key-value pair into the cache only if it wasn’t already present; otherwise it updates the LRU order for this element and returns a reference to the value.
The returned value contains a mutable reference to the value, and if the key already existed, it also contains the key and value that were passed in.
This is similar to get_or_insert, but for cases where insertion is expected.
This is an O(log n) operation.
impl<K, V> LRUCache<K, V>
pub fn pop(&mut self) -> Option<(K, V)>
Removes and returns the least recently used key-value pair.
This is an O(1) operation.
pub fn pop_highest(&mut self) -> Option<(K, V)>
Removes and returns the key-value pair with the highest key, according to Ord.
This is an O(1) operation.
pub fn pop_lowest(&mut self) -> Option<(K, V)>
Removes and returns the key-value pair with the lowest key, according to Ord.
This is an O(1) operation.
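A small illustrative sketch (not from the crate docs) contrasting the three pop flavors, LRU order versus key order:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
lru.insert("b", "2");
lru.insert("c", "3");
lru.insert("a", "1");
assert_eq!(lru.pop_highest(), Some(("c", "3"))); // highest key by Ord
assert_eq!(lru.pop_lowest(), Some(("a", "1")));  // lowest key by Ord
assert_eq!(lru.pop(), Some(("b", "2")));         // least recently used of what remains
assert_eq!(lru.pop(), None);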
impl<K, V> LRUCache<K, V>
pub fn set_max_capacity(&mut self, max_capacity: usize)
Sets the maximum capacity of the cache.
This does not remove any entries, but will cause the cache to evict entries when inserting new ones if the length exceeds the new capacity.
Use shrink to manually trigger removal of entries to meet the new capacity.
pub const fn capacity(&self) -> usize
Returns the maximum capacity of the cache, which is the point at which the cache will start evicting older entries.
pub fn resize(&mut self, max_capacity: usize)
Sets the maximum capacity of the cache, and removes the oldest entries until the length is less than or equal to the new capacity.
If the new capacity is greater than the current length, no entries are removed.
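An illustrative sketch (not from the crate docs) of the difference between set_max_capacity (lazy) and resize (eager eviction), using len() as referenced in the NODE_SIZE formula above:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
lru.insert("a", "1");
lru.insert("b", "2");
lru.insert("c", "3");
lru.set_max_capacity(2); // nothing is removed yet
assert_eq!(lru.len(), 3);
lru.resize(1); // evicts the oldest entries down to the new capacity
assert_eq!(lru.len(), 1);
assert_eq!(lru.pop(), Some(("c", "3"))); // only the newest entry survives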
pub fn shrink(&mut self)
Removes the oldest entries from the cache until the length is less than or equal to the maximum capacity.
pub fn shrink_by(&mut self, amount: usize)
Removes up to amount of the oldest entries from the cache.
pub fn shrink_with<F>(&mut self, cb: F)
where
    F: FnMut(K, V),
Removes the oldest entries from the cache until the length is less than or equal to the maximum capacity, and calls the provided closure with the removed key-value pairs.
§Example
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();
lru.insert("a", "1");
lru.insert("b", "2");
lru.insert("c", "3");
lru.set_max_capacity(1);
let mut removed = Vec::new();
lru.shrink_with(|key, value| {
    removed.push((key, value));
});
assert_eq!(removed, vec![("a", "1"), ("b", "2")]);
pub fn shrink_by_with<F>(&mut self, amount: usize, cb: F)
where
    F: FnMut(K, V),
Removes up to amount of the oldest entries from the cache, and calls the provided closure with the removed key-value pairs.
pub fn is_full(&self) -> bool
Returns true if the cache is full.
When the cache is full, inserting a new key-value pair will cause the cache to evict at least one entry to make room for the new one.
pub fn keys(&self) -> impl DoubleEndedIterator<Item = &K>
Returns an iterator over the keys in the cache, in key order as determined by Ord.
NOTE: This does not update the LRU order.
pub fn iter_peek_lru(&self) -> impl DoubleEndedIterator<Item = (&K, &V)>
Returns an iterator over immutable key-value pairs in the cache, in order of most recently used to least recently used.
NOTE: This does not update the LRU order.
pub fn iter_peek_ord(&self) -> impl DoubleEndedIterator<Item = (&K, &V)>
Returns an iterator over immutable key-value pairs in the cache, in key order as determined by Ord.
NOTE: This does not update the LRU order.
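An illustrative sketch (not from the crate docs) contrasting the two iteration orders, assuming ascending key order for forward iteration of iter_peek_ord:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, u32> = LRUCache::unbounded();
lru.insert("b", 2);
lru.insert("a", 1);
lru.insert("c", 3);
// Most recently used first: the insertion order, reversed.
let by_lru: Vec<&&str> = lru.iter_peek_lru().map(|(k, _)| k).collect();
assert_eq!(by_lru, vec![&"c", &"a", &"b"]);
// Key order via Ord.
let by_key: Vec<&&str> = lru.iter_peek_ord().map(|(k, _)| k).collect();
assert_eq!(by_key, vec![&"a", &"b", &"c"]);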
pub fn smart_iter(&mut self) -> impl DoubleEndedIterator<Item = SmartEntry<'_, K, V>>
Returns an iterator yielding key-value pairs in the cache as SmartEntry, in the order determined by the Ord implementation of the keys. This allows for reading and updating the LRU order at the same time, only updating the LRU order when the value is accessed mutably.
Entries yielded do not immediately update the LRU order; only when the value is accessed via SmartEntry::get or SmartEntry::deref_mut.
pub fn retain<F>(&mut self, predicate: F)
Iterates over all key-value pairs in the cache, and calls the provided closure to determine if the key-value pair should be retained. If the closure returns false, the key-value pair is removed from the cache.
LRU order is unchanged.
This is an O(n) operation.
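A sketch of typical usage (not from the crate docs); the closure shape FnMut(&K, &mut V) -> bool is an assumption here, since the trait bound on F is elided above:
use intrusive_lru_cache::LRUCache;
let mut lru: LRUCache<&'static str, u32> = LRUCache::unbounded();
lru.insert("a", 1);
lru.insert("b", 2);
lru.insert("c", 3);
// Keep only the entries with even values (closure signature assumed, see above).
lru.retain(|_key, value| *value % 2 == 0);
assert!(!lru.contains("a"));
assert!(lru.contains("b"));
assert!(!lru.contains("c"));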
Trait Implementations§
impl<K, V> Extend<(K, V)> for LRUCache<K, V>
where
    K: Ord + 'static,
fn extend<T>(&mut self, iter: T)
where
    T: IntoIterator<Item = (K, V)>,
fn extend_one(&mut self, item: A)
Nightly-only experimental API (extend_one).
fn extend_reserve(&mut self, additional: usize)
Nightly-only experimental API (extend_one).