//! Synchronization primitives.
//!
//! This module is an async version of [`std::sync`].
//!
//! [`std::sync`]: https://doc.rust-lang.org/std/sync/index.html
//!
//! ## The need for synchronization
//!
//! async-std's sync primitives are scheduler-aware, making it possible to
//! `.await` their operations - for example the locking of a [`Mutex`].
//!
//! Conceptually, a Rust program is a series of operations which will
//! be executed on a computer. The timeline of events happening in the
//! program is consistent with the order of the operations in the code.
//!
//! Consider the following code, operating on some global static variables:
//!
//! ```
//! static mut A: u32 = 0;
//! static mut B: u32 = 0;
//! static mut C: u32 = 0;
//!
//! fn main() {
//!     unsafe {
//!         A = 3;
//!         B = 4;
//!         A = A + B;
//!         C = B;
//!         println!("{} {} {}", A, B, C);
//!         C = A;
//!     }
//! }
//! ```
//!
//! It appears as if some variables stored in memory are changed, an addition
//! is performed, the result is stored in `A`, and the variable `C` is
//! modified twice.
//!
//! When only a single thread is involved, the results are as expected:
//! the line `7 4 4` gets printed.
//!
//! As for what happens behind the scenes, when optimizations are enabled the
//! final generated machine code might look very different from the code:
//!
//! - The first store to `C` might be moved before the store to `A` or `B`,
//!   _as if_ we had written `C = 4; A = 3; B = 4`.
//!
//! - Assignment of `A + B` to `A` might be removed, since the sum can be stored
//!   in a temporary location until it gets printed, with the global variable
//!   never getting updated.
//!
//! - The final result could be determined just by looking at the code
//!   at compile time, so [constant folding] might turn the whole
//!   block into a simple `println!("7 4 4")`.
//!
//! The compiler is allowed to perform any combination of these
//! optimizations, as long as the final optimized code, when executed,
//! produces the same results as the one without optimizations.
//!
//! Due to the [concurrency] involved in modern computers, assumptions
//! about the program's execution order are often wrong. Access to
//! global variables can lead to nondeterministic results, **even if**
//! compiler optimizations are disabled, and it is **still possible**
//! to introduce synchronization bugs.
//!
//! Note that thanks to Rust's safety guarantees, accessing global (static)
//! variables requires `unsafe` code, assuming we don't use any of the
//! synchronization primitives in this module.
//!
//! [constant folding]: https://en.wikipedia.org/wiki/Constant_folding
//! [concurrency]: https://en.wikipedia.org/wiki/Concurrency_(computer_science)
//!
//! ## Out-of-order execution
//!
//! Instructions can execute in a different order from the one we define, due to
//! various reasons:
//!
//! - The **compiler** reordering instructions: If the compiler can issue an
//!   instruction at an earlier point, it will try to do so. For example, it
//!   might hoist memory loads at the top of a code block, so that the CPU can
//!   start [prefetching] the values from memory.
//!
//!   In single-threaded scenarios, this can cause issues when writing
//!   signal handlers or certain kinds of low-level code.
//!   Use [compiler fences] to prevent this reordering.
//!
//! - A **single processor** executing instructions [out-of-order]:
//!   Modern CPUs are capable of [superscalar] execution,
//!   i.e., multiple instructions might be executing at the same time,
//!   even though the machine code describes a sequential process.
//!
//!   This kind of reordering is handled transparently by the CPU.
//!
//! - A **multiprocessor** system executing multiple hardware threads
//!   at the same time: In multi-threaded scenarios, you can use two
//!   kinds of primitives to deal with synchronization:
//!   - [memory fences] to ensure memory accesses are made visible to
//!     other CPUs in the right order.
//!   - [atomic operations] to ensure simultaneous access to the same
//!     memory location doesn't lead to undefined behavior.
//!
//! [prefetching]: https://en.wikipedia.org/wiki/Cache_prefetching
//! [compiler fences]: https://doc.rust-lang.org/std/sync/atomic/fn.compiler_fence.html
//! [out-of-order]: https://en.wikipedia.org/wiki/Out-of-order_execution
//! [superscalar]: https://en.wikipedia.org/wiki/Superscalar_processor
//! [memory fences]: https://doc.rust-lang.org/std/sync/atomic/fn.fence.html
//! [atomic operations]: https://doc.rust-lang.org/std/sync/atomic/index.html
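//!
//! As a small illustration of the last point, incrementing a plain integer
//! from two threads would be a data race, while an atomic counter stays
//! consistent. This is a minimal sketch using only standard-library types,
//! not part of this module's API:
//!
//! ```
//! use std::sync::atomic::{AtomicU32, Ordering};
//! use std::sync::Arc;
//! use std::thread;
//!
//! let counter = Arc::new(AtomicU32::new(0));
//! let handles: Vec<_> = (0..2)
//!     .map(|_| {
//!         let counter = Arc::clone(&counter);
//!         thread::spawn(move || {
//!             for _ in 0..10 {
//!                 // `fetch_add` is a single indivisible read-modify-write,
//!                 // so concurrent increments cannot be lost.
//!                 counter.fetch_add(1, Ordering::SeqCst);
//!             }
//!         })
//!     })
//!     .collect();
//! for handle in handles {
//!     handle.join().unwrap();
//! }
//! assert_eq!(counter.load(Ordering::SeqCst), 20);
//! ```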
//!
//! ## Higher-level synchronization objects
//!
//! Most of the low-level synchronization primitives are quite error-prone and
//! inconvenient to use, which is why async-std also exposes some
//! higher-level synchronization objects.
//!
//! These abstractions can be built out of lower-level primitives.
//! For efficiency, the sync objects in async-std are usually
//! implemented with help from the scheduler, which is
//! able to reschedule the tasks while they are blocked on acquiring
//! a lock.
//!
//! The following is an overview of the available synchronization
//! objects:
//!
//! - [`Arc`]: Atomically Reference-Counted pointer, which can be used
//!   in multithreaded environments to prolong the lifetime of some
//!   data until all the threads have finished using it.
//!
//! - [`Barrier`]: Ensures multiple threads will wait for each other
//!   to reach a point in the program, before continuing execution all
//!   together.
//!
//! - [`Mutex`]: Mutual exclusion mechanism, which ensures that at
//!   most one task at a time is able to access some data.
//!
//! - [`RwLock`]: Provides a mutual exclusion mechanism which allows
//!   multiple readers at the same time, while allowing only one
//!   writer at a time. In some cases, this can be more efficient than
//!   a mutex.
//!
//! If you're looking for channels, check out
//! [`async_std::channel`][crate::channel].
//!
//! [`Arc`]: struct.Arc.html
//! [`Barrier`]: struct.Barrier.html
//! [`Mutex`]: struct.Mutex.html
//! [`RwLock`]: struct.RwLock.html
//!
//! # Examples
//!
//! Spawn a task that updates an integer protected by a mutex:
//!
//! ```
//! # async_std::task::block_on(async {
//! #
//! use async_std::sync::{Arc, Mutex};
//! use async_std::task;
//!
//! let m1 = Arc::new(Mutex::new(0));
//! let m2 = m1.clone();
//!
//! task::spawn(async move {
//!     *m2.lock().await = 1;
//! })
//! .await;
//!
//! assert_eq!(*m1.lock().await, 1);
//! #
//! # })
//! ```
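//!
//! A [`RwLock`] can be shared the same way; any number of readers may hold
//! the lock at once, while a writer gets exclusive access. A minimal sketch:
//!
//! ```
//! # async_std::task::block_on(async {
//! #
//! use async_std::sync::RwLock;
//!
//! let lock = RwLock::new(5);
//!
//! // Any number of read guards can be held at the same time.
//! let r1 = lock.read().await;
//! let r2 = lock.read().await;
//! assert_eq!(*r1 + *r2, 10);
//! // Drop the read guards before writing, or the writer would wait forever.
//! drop((r1, r2));
//!
//! // A write guard is exclusive.
//! *lock.write().await += 1;
//! assert_eq!(*lock.read().await, 6);
//! #
//! # })
//! ```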

#![allow(clippy::needless_doctest_main)]

#[doc(inline)]
pub use std::sync::{Arc, Weak};

#[doc(inline)]
pub use async_lock::{Mutex, MutexGuard, MutexGuardArc};

#[doc(inline)]
pub use async_lock::{RwLock, RwLockReadGuard, RwLockUpgradableReadGuard, RwLockWriteGuard};

cfg_unstable! {
    pub use async_lock::{Barrier, BarrierWaitResult};
    pub use condvar::Condvar;
    pub(crate) use waker_set::WakerSet;

    mod condvar;

    pub(crate) mod waker_set;
}