Module wasmer_types::lib::std::sync

Available on crate feature std only.

Useful synchronization primitives.

§The need for synchronization

Conceptually, a Rust program is a series of operations which will be executed on a computer. The timeline of events happening in the program is consistent with the order of the operations in the code.

Consider the following code, operating on some global static variables:

// FIXME(static_mut_refs): Do not allow `static_mut_refs` lint
#![allow(static_mut_refs)]

static mut A: u32 = 0;
static mut B: u32 = 0;
static mut C: u32 = 0;

fn main() {
    unsafe {
        A = 3;
        B = 4;
        A = A + B;
        C = B;
        println!("{A} {B} {C}");
        C = A;
    }
}

It appears as if some variables stored in memory are changed, an addition is performed, the result is stored in A, and the variable C is modified twice.

When only a single thread is involved, the results are as expected: the line 7 4 4 gets printed.

As for what happens behind the scenes, when optimizations are enabled the final generated machine code might look very different from the code:

  • The first store to C might be moved before the store to A or B, as if we had written C = 4; A = 3; B = 4.

  • Assignment of A + B to A might be removed, since the sum can be stored in a temporary location until it gets printed, with the global variable never getting updated.

  • The final result could be determined just by looking at the code at compile time, so constant folding might turn the whole block into a simple println!("7 4 4").

The compiler is allowed to perform any combination of these optimizations, as long as the final optimized code, when executed, produces the same results as the one without optimizations.

Due to the concurrency involved in modern computers, assumptions about the program’s execution order are often wrong. Access to global variables can lead to nondeterministic results, even if compiler optimizations are disabled, and it is still possible to introduce synchronization bugs.

Note that thanks to Rust’s safety guarantees, accessing global (static) variables requires unsafe code, assuming we don’t use any of the synchronization primitives in this module.
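
For comparison, here is a minimal sketch of the same computation written with the atomic types from this module’s atomic submodule instead of static mut; no unsafe block is needed. (Using AtomicU32 with SeqCst ordering is an illustrative choice, not part of the example above.)

use std::sync::atomic::{AtomicU32, Ordering};

static A: AtomicU32 = AtomicU32::new(0);
static B: AtomicU32 = AtomicU32::new(0);
static C: AtomicU32 = AtomicU32::new(0);

fn main() {
    // Every access is a well-defined atomic operation, so no `unsafe` is required.
    A.store(3, Ordering::SeqCst);
    B.store(4, Ordering::SeqCst);
    A.store(A.load(Ordering::SeqCst) + B.load(Ordering::SeqCst), Ordering::SeqCst);
    C.store(B.load(Ordering::SeqCst), Ordering::SeqCst);
    println!(
        "{} {} {}",
        A.load(Ordering::SeqCst),
        B.load(Ordering::SeqCst),
        C.load(Ordering::SeqCst)
    );
    C.store(A.load(Ordering::SeqCst), Ordering::SeqCst);
}

In a single-threaded run this still prints 7 4 4, and the accesses remain well-defined even if other threads touch the same statics concurrently.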

§Out-of-order execution

Instructions can execute in a different order from the one we define, due to various reasons:

  • The compiler reordering instructions: If the compiler can issue an instruction at an earlier point, it will try to do so. For example, it might hoist memory loads to the top of a code block, so that the CPU can start prefetching the values from memory.

    In single-threaded scenarios, this can cause issues when writing signal handlers or certain kinds of low-level code. Use compiler fences to prevent this reordering.

  • A single processor executing instructions out-of-order: Modern CPUs are capable of superscalar execution, i.e., multiple instructions might be executing at the same time, even though the machine code describes a sequential process.

    This kind of reordering is handled transparently by the CPU.

  • A multiprocessor system executing multiple hardware threads at the same time: In multi-threaded scenarios, you can use two kinds of primitives to deal with synchronization:

    • memory fences to ensure memory accesses are made visible to other CPUs in the right order.
    • atomic operations to ensure simultaneous access to the same memory location doesn’t lead to undefined behavior (see the sketch after this list).
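
As an illustrative sketch of the last two bullets (assumed for this page, not taken from it), the following message-passing pattern publishes a value from one thread to another using relaxed atomic operations paired with release and acquire memory fences:

use std::sync::atomic::{fence, AtomicBool, AtomicU32, Ordering};
use std::thread;

static DATA: AtomicU32 = AtomicU32::new(0);
static READY: AtomicBool = AtomicBool::new(false);

fn main() {
    // Producer: write the data, then raise the flag. The release fence ensures
    // the store to DATA becomes visible to other CPUs before the flag store.
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed);
        fence(Ordering::Release);
        READY.store(true, Ordering::Relaxed);
    });

    // Consumer: wait for the flag, then read the data. The acquire fence pairs
    // with the release fence above.
    let consumer = thread::spawn(|| {
        while !READY.load(Ordering::Relaxed) {
            std::hint::spin_loop();
        }
        fence(Ordering::Acquire);
        assert_eq!(DATA.load(Ordering::Relaxed), 42);
    });

    producer.join().unwrap();
    consumer.join().unwrap();
}

Once the consumer observes READY as true, the acquire fence synchronizes with the producer’s release fence, so the earlier store to DATA is guaranteed to be visible.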

§Higher-level synchronization objects

Most of the low-level synchronization primitives are quite error-prone and inconvenient to use, which is why the standard library also exposes some higher-level synchronization objects.

These abstractions can be built out of lower-level primitives. For efficiency, the sync objects in the standard library are usually implemented with help from the operating system’s kernel, which is able to reschedule the threads while they are blocked on acquiring a lock.

The following is an overview of the available synchronization objects (a few short usage sketches follow the list):

  • Arc: Atomically Reference-Counted pointer, which can be used in multithreaded environments to prolong the lifetime of some data until all the threads have finished using it.

  • Barrier: Ensures multiple threads will wait for each other to reach a point in the program, before continuing execution all together.

  • Condvar: Condition Variable, providing the ability to block a thread while waiting for an event to occur.

  • mpsc: Multi-producer, single-consumer queues, used for message-based communication. Can provide a lightweight inter-thread synchronization mechanism, at the cost of some extra memory.

  • mpmc: Multi-producer, multi-consumer queues, used for message-based communication. Can provide a lightweight inter-thread synchronization mechanism, at the cost of some extra memory.

  • Mutex: Mutual Exclusion mechanism, which ensures that at most one thread at a time is able to access some data.

  • Once: Used for a thread-safe, one-time global initialization routine. Mostly useful for implementing other types like OnceLock.

  • OnceLock: Used for thread-safe, one-time initialization of a variable, with potentially different initializers based on the caller.

  • LazyLock: Used for thread-safe, one-time initialization of a variable, using one nullary initializer function provided at creation.

  • RwLock: Provides a mutual exclusion mechanism which allows multiple readers at the same time, while allowing only one writer at a time. In some cases, this can be more efficient than a mutex.
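
As a small sketch of how these objects compose, the example below shares a counter between several threads using Arc and Mutex; the thread count and the counter itself are arbitrary choices for illustration.

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Arc keeps the Mutex (and the counter inside it) alive until every
    // thread that cloned the Arc has finished with it.
    let counter = Arc::new(Mutex::new(0u32));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                // Locking the Mutex gives exclusive access to the counter;
                // the lock is released when the guard goes out of scope.
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(*counter.lock().unwrap(), 4);
}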
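
Similarly, a minimal sketch of message passing with mpsc, where several producer threads send values to a single consumer (the number of producers and the values sent are arbitrary):

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    // Each producer thread gets its own clone of the sending half.
    for id in 0..3u32 {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(id).unwrap();
        });
    }
    // Drop the original sender so the receiver sees the channel close
    // once all producer clones are gone.
    drop(tx);

    // The single consumer drains messages until every sender is dropped.
    let mut received: Vec<u32> = rx.iter().collect();
    received.sort();
    assert_eq!(received, vec![0, 1, 2]);
}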

Modules§

  • atomic: Atomic types.
  • mpsc: Multi-producer, single-consumer FIFO queue communication primitives.
  • mpmc (Experimental): Multi-producer, multi-consumer FIFO queue communication primitives.

Structs§

  • Arc: A thread-safe reference-counting pointer. ‘Arc’ stands for ‘Atomically Reference Counted’.
  • Barrier: A barrier enables multiple threads to synchronize the beginning of some computation.
  • BarrierWaitResult: A BarrierWaitResult is returned by Barrier::wait() when all threads in the Barrier have rendezvoused.
  • Condvar: A Condition Variable.
  • LazyLock: A value which is initialized on the first access.
  • Mutex: A mutual exclusion primitive useful for protecting shared data.
  • MutexGuard: An RAII implementation of a “scoped lock” of a mutex. When this structure is dropped (falls out of scope), the lock will be unlocked.
  • Once: A low-level synchronization primitive for one-time global execution.
  • OnceLock: A synchronization primitive which can nominally be written to only once.
  • OnceState: State yielded to Once::call_once_force()’s closure parameter. The state can be used to query the poison status of the Once.
  • PoisonError: A type of error which can be returned whenever a lock is acquired.
  • RwLock: A reader-writer lock.
  • RwLockReadGuard: RAII structure used to release the shared read access of a lock when dropped.
  • RwLockWriteGuard: RAII structure used to release the exclusive write access of a lock when dropped.
  • WaitTimeoutResult: A type indicating whether a timed wait on a condition variable returned due to a time out or not.
  • Weak: Weak is a version of Arc that holds a non-owning reference to the managed allocation. The allocation is accessed by calling upgrade on the Weak pointer, which returns an Option<Arc<T>>.
  • Exclusive (Experimental): Exclusive provides only mutable access, also referred to as exclusive access to the underlying value. It provides no immutable, or shared access to the underlying value.
  • MappedMutexGuard (Experimental): An RAII mutex guard returned by MutexGuard::map, which can point to a subfield of the protected data. When this structure is dropped (falls out of scope), the lock will be unlocked.
  • MappedRwLockReadGuard (Experimental): RAII structure used to release the shared read access of a lock when dropped, which can point to a subfield of the protected data.
  • MappedRwLockWriteGuard (Experimental): RAII structure used to release the exclusive write access of a lock when dropped, which can point to a subfield of the protected data.
  • ReentrantLock (Experimental): A re-entrant mutual exclusion lock.
  • ReentrantLockGuard (Experimental): An RAII implementation of a “scoped lock” of a re-entrant lock. When this structure is dropped (falls out of scope), the lock will be unlocked.

Enums§

Constants§

Type Aliases§

  • LockResult: A type alias for the result of a lock method which can be poisoned.
  • TryLockResult: A type alias for the result of a nonblocking locking method.