intrusive_lru_cache

Struct LRUCache

Source
pub struct LRUCache<K, V> { /* private fields */ }

LRU Cache implementation using intrusive collections.

This cache uses an intrusive_collections::LinkedList to maintain the LRU order, and an intrusive_collections::RBTree to allow for efficient lookups by key, while maintaining only one allocation per key-value pair. Unfortunately, this is a linked structure, so cache locality is likely poor, but memory usage and flexibility are improved.

The cache is unbounded by default, but can be limited to a maximum capacity.

The smart_* methods allow for reading and updating at the same time, bumping the LRU order only when the value is accessed mutably. The get method always updates the LRU order, and the peek_* methods allow for reading without updating it.

An overarching principle here is that the ‘Used’ in Least-Recently-Used is defined by mutable access to the value. This allows for a more flexible API, where the LRU order is updated only when necessary, rather than on every read access.

§Example

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();

lru.insert("a", "1");
lru.insert("b", "2");
lru.insert("c", "3");

let _ = lru.get("b"); // updates LRU order

assert_eq!(lru.pop(), Some(("a", "1")));
assert_eq!(lru.pop(), Some(("c", "3")));
assert_eq!(lru.pop(), Some(("b", "2")));
assert_eq!(lru.pop(), None);
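
For contrast, a minimal sketch (using only the methods documented below) of how peek leaves the LRU order untouched while get bumps the entry:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();

lru.insert("a", "1");
lru.insert("b", "2");

let _ = lru.peek("a"); // read-only: "a" remains the oldest entry
let _ = lru.get("a");  // mutable access: "a" becomes the most recently used

assert_eq!(lru.pop(), Some(("b", "2")));
assert_eq!(lru.pop(), Some(("a", "1")));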

§Notes

  • Cloning preserves LRU order.
  • If the atomic crate feature is enabled, the cache is thread-safe if K and V are Send/Sync.

Implementations§

Source§

impl<K, V> LRUCache<K, V>

Source

pub const NODE_SIZE: usize = _

Defines the size of the internal node structure in bytes.

The memory footprint of the entire cache can then be calculated as:

LRUCache::<K, V>::NODE_SIZE * cache.len()
    + size_of::<LRUCache<K, V>>() // size of the cache itself

or via memory_footprint.

This is a nice benefit of intrusive collections, as it allows for a single allocation per key-value pair.
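
As a sketch of the relationship above (assuming memory_footprint is exactly the formula shown):

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<u64, u64> = LRUCache::unbounded();
lru.insert(1, 10);
lru.insert(2, 20);

// one intrusive node allocation per entry, plus the cache structure itself
let estimated = LRUCache::<u64, u64>::NODE_SIZE * lru.len()
    + core::mem::size_of::<LRUCache<u64, u64>>();

assert_eq!(lru.memory_footprint(), estimated);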

Source

pub const fn memory_footprint(&self) -> usize

Returns the total bytes consumed by the cache, including all allocations, internal structures, and the cache itself.

Source

pub fn unbounded() -> Self

Creates a new unbounded LRU cache.

This cache has no limit on the number of entries it can hold, so entries must be manually removed via pop, or you can use set_max_capacity to set a limit.

Source

pub fn new(max_capacity: usize) -> Self

Creates a new LRU cache with a maximum capacity, after which old entries will be evicted to make room for new ones.

This does not preallocate any memory, only sets an upper limit.
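
A short sketch, assuming insertion beyond the maximum capacity evicts the least recently used entry as described:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<u32, u32> = LRUCache::new(2);

lru.insert(1, 10);
lru.insert(2, 20);
lru.insert(3, 30); // exceeds the capacity of 2; the oldest entry (key 1) is evicted

assert!(!lru.contains(&1));
assert_eq!(lru.len(), 2);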

Source§

impl<K, V> LRUCache<K, V>
where K: Ord + 'static,

Source

pub fn peek<'a, Q>(&'a self, key: &Q) -> Option<&'a V>
where K: Borrow<Q>, Q: Ord + ?Sized,

Returns a reference to the value corresponding to the key, without updating the LRU list.

This is an O(log n) operation.

Source

pub fn peek_newest(&self) -> Option<(&K, &V)>

Returns a reference to the most recently used key-value pair, without updating the LRU order further.

This is an O(1) operation.

Source

pub fn peek_oldest(&self) -> Option<(&K, &V)>

Returns a reference to the least recently used key-value pair, without updating the LRU order.

This is an O(1) operation.

Source

pub fn get_newest(&mut self) -> Option<(&K, &mut V)>

Returns a reference to the most recently used key-value pair.

Because this entry is already the most recently used, it can be accessed without any additional cost or further modification of the LRU order.

This is an O(1) operation.

Source

pub fn get<'a, Q>(&'a mut self, key: &Q) -> Option<&'a mut V>
where K: Borrow<Q>, Q: Ord + ?Sized,

Returns a reference to the value corresponding to the key, and bumps the key to the front of the LRU list.

This is an O(log n) operation.

Source

pub fn promote<Q>(&mut self, key: &Q)
where K: Borrow<Q>, Q: Ord + ?Sized,

If the key is present in the cache, it is bumped to the front of the LRU list as the most recently used. This will cause it to be the last to be evicted.

Has no effect if the key is not present.

This is an O(log n) operation.

Source

pub fn demote<Q>(&mut self, key: &Q)
where K: Borrow<Q>, Q: Ord + ?Sized,

If the key is present in the cache, it is demoted to the back of the LRU list as the least recently used. This will cause it to be evicted first if the cache is full.

Has no effect if the key is not present.

This is an O(log n) operation.
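
A sketch of promote and demote affecting eviction order:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, u32> = LRUCache::unbounded();

lru.insert("a", 1);
lru.insert("b", 2);
lru.insert("c", 3);

lru.promote("a"); // "a" becomes the most recently used
lru.demote("c");  // "c" becomes the least recently used

assert_eq!(lru.pop(), Some(("c", 3)));
assert_eq!(lru.pop(), Some(("b", 2)));
assert_eq!(lru.pop(), Some(("a", 1)));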

Source

pub fn smart_get<'a, Q>(&'a mut self, key: &Q) -> Option<SmartEntry<'a, K, V>>
where K: Borrow<Q>, Q: Ord + ?Sized,

Returns a smart reference to the value corresponding to the key, allowing for reading and updating at the same time, only updating the LRU on mutable access.

This does not immediately update the LRU order; only when the value is accessed via SmartEntry::get or SmartEntry::deref_mut.

Immutable access via SmartEntry::peek or SmartEntry::deref does not update the LRU order.

This is an O(log n) operation.
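
A sketch of the read/write distinction, assuming SmartEntry dereferences to the value V:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, String> = LRUCache::unbounded();

lru.insert("a", String::from("alpha"));
lru.insert("b", String::from("beta"));

if let Some(mut entry) = lru.smart_get("a") {
    let _read: &String = &entry; // read via Deref: "a" stays the oldest entry
    entry.push_str("!");         // write via DerefMut: "a" is bumped to most recently used
}

assert_eq!(lru.pop(), Some(("b", String::from("beta"))));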

Source

pub fn smart_get_oldest(&mut self) -> Option<SmartEntry<'_, K, V>>

Returns a smart reference to the least recently used key-value pair, allowing for reading and updating at the same time, only updating the LRU on mutable access.

This does not immediately update the LRU order; only when the value is accessed via SmartEntry::get or SmartEntry::deref_mut.

This is an O(1) operation.

Source

pub fn peek_range<'a, MIN, MAX>( &'a self, min: &MIN, max: &MAX, ) -> impl DoubleEndedIterator<Item = (&'a K, &'a V)>
where K: Borrow<MIN> + Borrow<MAX>, MIN: Ord + ?Sized, MAX: Ord + ?Sized,

Returns an iterator over the key-value pairs in the cache described by the range, without updating the LRU order. The order of iteration is dependent on the order of the keys in the tree via Ord.

The range follows [min, max), where min is inclusive and max is exclusive.

This is an O(log n) operation, though the iterator itself is O(1) to increment.
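
A sketch of the half-open range, assuming forward iteration yields keys in ascending Ord order:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<u32, &'static str> = LRUCache::unbounded();

lru.insert(1, "one");
lru.insert(2, "two");
lru.insert(3, "three");

// [min, max): keys 1 and 2 are yielded, 3 is excluded
let pairs: Vec<_> = lru.peek_range(&1, &3).map(|(k, v)| (*k, *v)).collect();
assert_eq!(pairs, vec![(1, "one"), (2, "two")]);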

Source

pub fn range<'a, MIN, MAX>( &'a mut self, min: &MIN, max: &MAX, ) -> impl DoubleEndedIterator<Item = (&'a K, &'a mut V)>
where K: Borrow<MIN> + Borrow<MAX>, MIN: Ord + ?Sized, MAX: Ord + ?Sized,

Returns an iterator over the key-value pairs in the cache described by the range, and updates the LRU order as they are yielded. The order of iteration is dependent on the order of the keys in the tree via Ord.

The range follows [min, max), where min is inclusive and max is exclusive.

This is an O(log n) operation, though the iterator itself is O(1) to increment.

Source

pub fn smart_range<'a, MIN, MAX>( &'a mut self, min: &MIN, max: &MAX, ) -> impl DoubleEndedIterator<Item = SmartEntry<'a, K, V>>
where K: Borrow<MIN> + Borrow<MAX>, MIN: Ord + ?Sized, MAX: Ord + ?Sized,

Returns an iterator over the key-value pairs in the cache described by the range, which allows for reading and updating the LRU order at the same time, only bumping the LRU order when the value is accessed mutably via either SmartEntry::get or SmartEntry::deref_mut.

This is an O(log n) operation, though the iterator itself is O(1) to increment.

Source

pub fn insert(&mut self, key: K, value: V) -> Option<V>

Inserts a key-value pair into the cache, replacing and returning the existing value if the key was already present. In either case, the entry is moved to the front of the LRU list.

This is an O(log n) operation.

Source

pub fn contains<Q>(&self, key: &Q) -> bool
where K: Borrow<Q>, Q: Ord + ?Sized,

Returns true if the cache contains the key.

This does not update the LRU order.

This is an O(log n) operation.

Source

pub fn remove<Q>(&mut self, key: &Q) -> Option<V>
where K: Borrow<Q>, Q: Ord + ?Sized,

Removes the value corresponding to the key from the cache, returning it if it was present. This has no effect on the order of the other entries in the LRU list.

This is an O(log n) operation.

Source

pub fn get_or_insert<F>(&mut self, key: K, f: F) -> GetOrInsertResult<'_, K, V>
where F: FnOnce() -> V,

Attempts to get a mutable reference to the value corresponding to the key, and if it’s not present, inserts a new key-value pair with the value returned by the provided closure. The entry is then moved to the front of the LRU list.

This is similar to insert_or_get, but for cases where the value is likely to be found in the cache already.

See also get_or_insert2 for a version that allows for a borrowed key type.

This is an O(log n) operation.

Source

pub fn get_or_insert2<Q, F>(&mut self, key: &Q, f: F) -> &mut V
where K: Borrow<Q>, Q: ToOwned<Owned = K> + Ord + ?Sized, F: FnOnce() -> V,

Like get_or_insert, but allows for a different borrowed key type, so long as it can be made into an owned key of type K.

Because the key is borrowed initially, we can return &mut V instead of GetOrInsertResult<K, V>, as the key is not consumed.

This is an O(log n) operation.

§Example
let mut lru = LRUCache::<String, String>::unbounded();

// note that the key is just an `&str`
let v = lru.get_or_insert2("a", || "Hello".to_owned());
v.push_str(", World!");

assert_eq!(lru.pop().unwrap(), ("a".to_owned(), "Hello, World!".to_owned()));
Source

pub fn insert_or_get(&mut self, key: K, value: V) -> InsertOrGetResult<'_, K, V>

Inserts a key-value pair into the cache only if the key wasn't already present; otherwise, it updates the LRU order for this entry and returns a reference to the existing value.

The returned value contains a mutable reference to the value, and if the key already existed, it also contains the key and value that were passed in.

This is similar to get_or_insert, but for cases where insertion is expected.

This is an O(log n) operation.

Source§

impl<K, V> LRUCache<K, V>

Source

pub fn pop(&mut self) -> Option<(K, V)>

Removes and returns the least recently used key-value pair.

This is an O(1) operation.

Source

pub fn pop_highest(&mut self) -> Option<(K, V)>

Removes and returns the key-value pair with the highest key, according to Ord.

This is an O(1) operation.

Source

pub fn pop_lowest(&mut self) -> Option<(K, V)>

Removes and returns the key-value pair with the lowest key, according to Ord.

This is an O(1) operation.
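
A sketch contrasting the three pop variants:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<u32, &'static str> = LRUCache::unbounded();

lru.insert(2, "two");
lru.insert(1, "one");
lru.insert(3, "three");

assert_eq!(lru.pop_lowest(), Some((1, "one")));    // smallest key by Ord
assert_eq!(lru.pop_highest(), Some((3, "three"))); // largest key by Ord
assert_eq!(lru.pop(), Some((2, "two")));           // least recently used of what remains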

Source§

impl<K, V> LRUCache<K, V>

Source

pub fn set_max_capacity(&mut self, max_capacity: usize)

Sets the maximum capacity of the cache.

This does not remove any entries, but will cause the cache to evict entries when inserting new ones if the length exceeds the new capacity.

Use shrink to manually trigger removal of entries to meet the new capacity.

Source

pub const fn capacity(&self) -> usize

Returns the maximum capacity of the cache, which is the point at which the cache will start evicting older entries.

Source

pub fn resize(&mut self, max_capacity: usize)

Sets the maximum capacity of the cache, and removes the oldest entries until the length is less than or equal to the new capacity.

If the new capacity is greater than the current length, no entries are removed.
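
A sketch of resize evicting the oldest entries down to the new capacity:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<u32, u32> = LRUCache::unbounded();

for i in 0..4 {
    lru.insert(i, i * 10);
}

lru.resize(2); // evicts the two oldest entries (keys 0 and 1)

assert_eq!(lru.len(), 2);
assert!(lru.contains(&2) && lru.contains(&3));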

Source

pub fn clear(&mut self)

Clears the cache, removing all key-value pairs.

Source

pub fn shrink(&mut self)

Removes the oldest entries from the cache until the length is less than or equal to the maximum capacity.

Source

pub fn shrink_by(&mut self, amount: usize)

Removes up to amount of the oldest entries from the cache.

Source

pub fn shrink_with<F>(&mut self, cb: F)
where F: FnMut(K, V),

Removes the oldest entries from the cache until the length is less than or equal to the maximum capacity, and calls the provided closure with the removed key-value pairs.

§Example
let mut lru: LRUCache<&'static str, &'static str> = LRUCache::default();

lru.insert("a", "1");
lru.insert("b", "2");
lru.insert("c", "3");

lru.set_max_capacity(1);

let mut removed = Vec::new();

lru.shrink_with(|key, value| {
   removed.push((key, value));
});

assert_eq!(removed, vec![("a", "1"), ("b", "2")]);
Source

pub fn shrink_by_with<F>(&mut self, amount: usize, cb: F)
where F: FnMut(K, V),

Removes up to amount of the oldest entries from the cache, and calls the provided closure with the removed key-value pairs.

Source

pub const fn len(&self) -> usize

Returns the number of key-value pairs in the cache.

Source

pub fn is_empty(&self) -> bool

Returns true if the cache is empty.

Source

pub fn is_full(&self) -> bool

Returns true if the cache is full.

When the cache is full, inserting a new key-value pair will cause the cache to evict at least one entry to make room for the new one.
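
A minimal sketch, assuming is_full is true once the length reaches the maximum capacity:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<u32, u32> = LRUCache::new(1);
assert!(!lru.is_full());

lru.insert(1, 10);
assert!(lru.is_full());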

Source

pub fn keys(&self) -> impl DoubleEndedIterator<Item = &K>

Returns an iterator over the keys in the cache, in the sort order defined by the keys' Ord implementation.

NOTE: This does not update the LRU order.

Source

pub fn iter_peek_lru(&self) -> impl DoubleEndedIterator<Item = (&K, &V)>

Returns an iterator over immutable key-value pairs in the cache, in order of most recently used to least recently used.

NOTE: This does not update the LRU order.

Source

pub fn iter_peek_ord(&self) -> impl DoubleEndedIterator<Item = (&K, &V)>

Returns an iterator over immutable key-value pairs in the cache, in the sort order defined by the keys' Ord implementation.

NOTE: This does not update the LRU order.
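
A sketch contrasting the two iteration orders (assuming iter_peek_ord iterates keys in ascending order):

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<u32, &'static str> = LRUCache::unbounded();

lru.insert(1, "one");
lru.insert(3, "three");
lru.insert(2, "two");

let by_key: Vec<u32> = lru.iter_peek_ord().map(|(k, _)| *k).collect();
assert_eq!(by_key, vec![1, 2, 3]); // ascending key order via Ord

let by_use: Vec<u32> = lru.iter_peek_lru().map(|(k, _)| *k).collect();
assert_eq!(by_use, vec![2, 3, 1]); // most recently used first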

Source

pub fn smart_iter( &mut self, ) -> impl DoubleEndedIterator<Item = SmartEntry<'_, K, V>>

Returns an iterator yielding key-value pairs in the cache as SmartEntry, in the order determined by the Ord implementation of the keys. This allows for reading and updating the LRU order at the same time, only updating the LRU order when the value is accessed mutably.

Entries yielded do not immediately update the LRU order; only when the value is accessed via SmartEntry::get or SmartEntry::deref_mut.

Source

pub fn retain<F>(&mut self, predicate: F)
where F: FnMut(&K, &V) -> bool,

Iterates over all key-value pairs in the cache, and calls the provided closure to determine if the key-value pair should be retained. If the closure returns false, the key-value pair is removed from the cache.

The LRU order is unchanged.

This is an O(n) operation.
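
A sketch of retain keeping only entries that match a predicate:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<u32, u32> = LRUCache::unbounded();

for i in 0..5 {
    lru.insert(i, i * 10);
}

// keep only entries with even keys
lru.retain(|k, _v| k % 2 == 0);

assert_eq!(lru.len(), 3); // keys 0, 2, and 4 remain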

Trait Implementations§

Source§

impl<K, V> Clone for LRUCache<K, V>
where K: Clone + Ord + 'static, V: Clone,

Source§

fn clone(&self) -> Self

Returns a copy of the value. Read more
1.0.0 · Source§

fn clone_from(&mut self, source: &Self)

Performs copy-assignment from source. Read more
Source§

impl<K, V> Debug for LRUCache<K, V>
where K: Debug, V: Debug,

Source§

fn fmt(&self, f: &mut Formatter<'_>) -> Result

Formats the value using the given formatter. Read more
Source§

impl<K, V> Default for LRUCache<K, V>

Source§

fn default() -> Self

Returns the “default value” for a type. Read more
Source§

impl<K, V> Drop for LRUCache<K, V>

Source§

fn drop(&mut self)

Executes the destructor for this type. Read more
Source§

impl<K, V> Extend<(K, V)> for LRUCache<K, V>
where K: Ord + 'static,

Source§

fn extend<T>(&mut self, iter: T)
where T: IntoIterator<Item = (K, V)>,

Extends a collection with the contents of an iterator. Read more
Source§

fn extend_one(&mut self, item: A)

🔬This is a nightly-only experimental API. (extend_one)
Extends a collection with exactly one element.
Source§

fn extend_reserve(&mut self, additional: usize)

🔬This is a nightly-only experimental API. (extend_one)
Reserves capacity in a collection for the given number of additional elements. Read more
Source§

impl<K, V> FromIterator<(K, V)> for LRUCache<K, V>
where K: Ord + 'static,

Source§

fn from_iter<T>(iter: T) -> Self
where T: IntoIterator<Item = (K, V)>,

Creates a value from an iterator. Read more
Source§

impl<K, V, Q> Index<&Q> for LRUCache<K, V>
where K: Borrow<Q> + Ord + 'static, Q: Ord + ?Sized,

Source§

fn index(&self, index: &Q) -> &Self::Output

Returns a reference to the value corresponding to the key, without updating the LRU order.

§Panics

Panics if the key is not present in the cache.
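
A minimal sketch of read-only indexing:

use intrusive_lru_cache::LRUCache;

let mut lru: LRUCache<&'static str, u32> = LRUCache::unbounded();
lru.insert("a", 1);

assert_eq!(lru["a"], 1); // does not update the LRU order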

Source§

type Output = V

The returned type after indexing.
Source§

impl<K, V> IntoIterator for LRUCache<K, V>
where K: Ord + 'static,

Source§

type Item = (K, V)

The type of the elements being iterated over.
Source§

type IntoIter = IntoIter<K, V>

Which kind of iterator are we turning this into?
Source§

fn into_iter(self) -> Self::IntoIter

Creates an iterator from a value. Read more

Auto Trait Implementations§

§

impl<K, V> Freeze for LRUCache<K, V>

§

impl<K, V> !RefUnwindSafe for LRUCache<K, V>

§

impl<K, V> Send for LRUCache<K, V>
where K: Send + Sync, V: Send + Sync,

§

impl<K, V> Sync for LRUCache<K, V>
where K: Sync, V: Sync,

§

impl<K, V> Unpin for LRUCache<K, V>

§

impl<K, V> !UnwindSafe for LRUCache<K, V>

Blanket Implementations§

Source§

impl<T> Any for T
where T: 'static + ?Sized,

Source§

fn type_id(&self) -> TypeId

Gets the TypeId of self. Read more
Source§

impl<T> Borrow<T> for T
where T: ?Sized,

Source§

fn borrow(&self) -> &T

Immutably borrows from an owned value. Read more
Source§

impl<T> BorrowMut<T> for T
where T: ?Sized,

Source§

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value. Read more
Source§

impl<T> CloneToUninit for T
where T: Clone,

Source§

unsafe fn clone_to_uninit(&self, dst: *mut u8)

🔬This is a nightly-only experimental API. (clone_to_uninit)
Performs copy-assignment from self to dst. Read more
Source§

impl<T> From<T> for T

Source§

fn from(t: T) -> T

Returns the argument unchanged.

Source§

impl<T, U> Into<U> for T
where U: From<T>,

Source§

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

Source§

impl<T> ToOwned for T
where T: Clone,

Source§

type Owned = T

The resulting type after obtaining ownership.
Source§

fn to_owned(&self) -> T

Creates owned data from borrowed data, usually by cloning. Read more
Source§

fn clone_into(&self, target: &mut T)

Uses borrowed data to replace owned data, usually by cloning. Read more
Source§

impl<T, U> TryFrom<U> for T
where U: Into<T>,

Source§

type Error = Infallible

The type returned in the event of a conversion error.
Source§

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.
Source§

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

Source§

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.
Source§

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.