tokio_threadpool/lib.rs
#![doc(html_root_url = "https://docs.rs/tokio-threadpool/0.1.18")]
#![deny(missing_docs, missing_debug_implementations)]

//! A work-stealing based thread pool for executing futures.
//!
//! > **Note:** This crate is **deprecated in tokio 0.2.x** and has been moved
//! > and refactored into various places in the [`tokio::runtime`] module of the
//! > [`tokio`] crate. Note that there is no longer a `ThreadPool` type; you are
//! > instead encouraged to make use of the thread pool used by a `Runtime`
//! > configured to use the [threaded scheduler].
//!
//! [`tokio::runtime`]: https://docs.rs/tokio/latest/tokio/runtime/index.html
//! [`tokio`]: https://docs.rs/tokio/latest/tokio/index.html
//! [threaded scheduler]: https://docs.rs/tokio/latest/tokio/runtime/index.html#threaded-scheduler
//!
//! The Tokio thread pool supports scheduling futures and processing them on
//! multiple CPU cores. It is optimized for the primary Tokio use case of many
//! independent tasks with limited computation and with most tasks waiting on
//! I/O. Usually, users will not create a `ThreadPool` instance directly, but
//! will use one via a [`runtime`].
//!
//! The `ThreadPool` structure manages two sets of threads:
//!
//! * Worker threads.
//! * Backup threads.
//!
//! Worker threads are used to schedule futures using a work-stealing strategy.
//! Backup threads, on the other hand, are intended only to support the
//! `blocking` API. Threads will transition between the two sets.
//!
//! The advantage of the work-stealing strategy is minimal cross-thread
//! coordination. The thread pool attempts to make as much progress as possible
//! without communicating across threads.
//!
//! ## Worker overview
//!
//! Each worker has two queues: a deque and an mpsc channel. The deque is the
//! primary queue for tasks that are scheduled to run on the worker thread. Tasks
//! can only be pushed onto the deque by the worker, but other workers may
//! "steal" from that deque. The mpsc channel is used to submit futures from
//! outside of the pool.
//!
//! As long as the thread pool has not been shut down, a worker will run in a
//! loop. On each iteration, it consumes all tasks on its mpsc channel and pushes
//! them onto the deque. It then pops tasks off of the deque and executes them.
//!
//! If a worker has no work, i.e., both queues are empty, it attempts to steal.
//! To do this, it randomly scans other workers' deques and tries to pop a task.
//! If it finds no work to steal, the thread goes to sleep.
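//!
//! A rough sketch of one iteration of that loop, using `std` types as
//! stand-ins for the lock-free structures the pool actually uses (the type
//! and field names here are illustrative, not the crate's internals):
//!
//! ```
//! use std::collections::VecDeque;
//! use std::sync::mpsc::Receiver;
//!
//! struct Task;
//! impl Task {
//!     fn run(self) {}
//! }
//!
//! struct WorkerState {
//!     inbound: Receiver<Task>, // externally submitted tasks (the mpsc side)
//!     deque: VecDeque<Task>,   // local queue; stealable in the real pool
//! }
//!
//! impl WorkerState {
//!     fn tick(&mut self) {
//!         // 1. Move everything from the mpsc channel onto the deque.
//!         while let Ok(task) = self.inbound.try_recv() {
//!             self.deque.push_back(task);
//!         }
//!
//!         // 2. Pop tasks off of the deque and execute them.
//!         while let Some(task) = self.deque.pop_front() {
//!             task.run();
//!         }
//!
//!         // 3. In the real pool: if both queues were empty, randomly scan
//!         //    other workers' deques for a task to steal; sleep on failure.
//!     }
//! }
//! ```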
//!
//! When the worker detects that the pool has been shut down, it exits the loop,
//! cleans up its state, and shuts the thread down.
//!
//! ## Thread pool initialization
//!
//! Note: users will normally use the thread pool created by a [`runtime`].
//!
//! By default, no threads are spawned on creation. Instead, when new futures are
//! spawned, the pool first checks if there are enough active worker threads. If
//! not, a new worker thread is spawned.
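//!
//! A minimal sketch of that check (hypothetical names, not the crate's
//! internals):
//!
//! ```
//! use std::sync::atomic::{AtomicUsize, Ordering};
//!
//! struct PoolSizes {
//!     active_workers: AtomicUsize, // worker threads currently running
//!     pool_size: usize,            // configured maximum number of workers
//! }
//!
//! impl PoolSizes {
//!     // Called when a future is spawned: report whether another worker
//!     // thread should be started.
//!     fn needs_worker(&self) -> bool {
//!         self.active_workers.load(Ordering::Relaxed) < self.pool_size
//!     }
//! }
//! ```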
//!
//! ## Spawning futures
//!
//! The spawning behavior depends on whether a future was spawned from within a
//! worker thread or from an external handle.
//!
//! When spawning a future while external to the thread pool, the current
//! strategy is to randomly pick a worker to submit the task to. The task is then
//! pushed onto that worker's mpsc channel.
//!
//! When spawning a future while on a worker thread, the task is pushed onto the
//! back of the current worker's deque.
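//!
//! A sketch of that routing decision (again with hypothetical names; the
//! real pool tracks the current worker through its own thread-local state
//! and picks the target worker at random):
//!
//! ```
//! use std::cell::RefCell;
//! use std::collections::VecDeque;
//! use std::sync::mpsc::Sender;
//!
//! struct Task;
//!
//! thread_local! {
//!     // Holds the local deque while a worker thread is running;
//!     // `None` on all other threads.
//!     static LOCAL_DEQUE: RefCell<Option<VecDeque<Task>>> = RefCell::new(None);
//! }
//!
//! fn spawn(task: Task, inbound: &[Sender<Task>]) {
//!     LOCAL_DEQUE.with(|local| {
//!         if let Some(deque) = local.borrow_mut().as_mut() {
//!             // On a worker thread: push onto the back of the local deque.
//!             deque.push_back(task);
//!         } else {
//!             // External: submit over a worker's mpsc channel (index 0
//!             // stands in for a random pick).
//!             let _ = inbound[0].send(task);
//!         }
//!     });
//! }
//! ```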
//!
//! ## Blocking annotation strategy
//!
//! The [`blocking`] function is used to annotate a section of code that
//! performs a blocking operation, either by issuing a blocking syscall or by
//! performing a long-running CPU-bound computation.
//!
//! The strategy for handling blocking closures is to hand off the worker to a
//! new thread. This implies handing off the `deque` and `mpsc`. Once this is
//! done, the new thread continues to process the work queue and the original
//! thread is able to block. Once the blocking operation completes, the original
//! thread has no additional work and is inserted into the backup pool. This
//! makes it available to other workers that encounter a [`blocking`] call.
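//!
//! A minimal usage sketch (the closure body is a placeholder for real
//! blocking work):
//!
//! ```
//! extern crate futures;
//! extern crate tokio_threadpool;
//!
//! use futures::future::{lazy, poll_fn};
//! use futures::Future;
//! use tokio_threadpool::{blocking, ThreadPool};
//!
//! fn main() {
//!     let pool = ThreadPool::new();
//!
//!     pool.spawn(lazy(|| {
//!         poll_fn(|| {
//!             blocking(|| {
//!                 // A blocking syscall or long-running computation
//!                 // would go here.
//!                 println!("inside the blocking section");
//!             })
//!             .map_err(|_| panic!("the thread pool shut down"))
//!         })
//!     }));
//!
//!     pool.shutdown_on_idle().wait().unwrap();
//! }
//! ```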
//!
//! [`blocking`]: fn.blocking.html
//! [`runtime`]: https://docs.rs/tokio/0.1/tokio/runtime/

extern crate tokio_executor;

extern crate crossbeam_deque;
extern crate crossbeam_queue;
extern crate crossbeam_utils;
#[macro_use]
extern crate futures;
#[macro_use]
extern crate lazy_static;
extern crate num_cpus;
extern crate slab;

#[macro_use]
extern crate log;

// ## Crate layout
//
// The primary type, `Pool`, holds the majority of a thread pool's state,
// including the state for each worker. Each worker's state is maintained in an
// instance of `worker::Entry`.
//
// `Worker` contains the logic that runs on each worker thread. It holds an
// `Arc` to `Pool` and is able to access its state from `Pool`.
//
// `Task` is a harness around an individual future. It manages polling and
// scheduling that future.
//
// ## Sleeping workers
//
// Sleeping workers are tracked using a [Treiber stack]. This results in the
// thread that most recently went to sleep getting woken up first. When the pool
// is not under load, this helps threads shut down faster.
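//
// A sketch of the idea, with worker indices as stack entries (the real code
// packs extra state into `head` and guards against ABA, both omitted here):
//
// ```
// use std::sync::atomic::{AtomicUsize, Ordering};
//
// const EMPTY: usize = usize::max_value();
//
// struct SleepStack {
//     head: AtomicUsize,      // index of the most recently pushed sleeper
//     next: Vec<AtomicUsize>, // per-worker link to the next sleeper down
// }
//
// impl SleepStack {
//     fn push(&self, worker: usize) {
//         let mut head = self.head.load(Ordering::Acquire);
//         loop {
//             self.next[worker].store(head, Ordering::Relaxed);
//             match self.head.compare_exchange(
//                 head, worker, Ordering::AcqRel, Ordering::Acquire)
//             {
//                 Ok(_) => return,
//                 Err(actual) => head = actual,
//             }
//         }
//     }
//
//     // LIFO pop: the most recently slept worker is woken first.
//     fn pop(&self) -> Option<usize> {
//         let mut head = self.head.load(Ordering::Acquire);
//         loop {
//             if head == EMPTY {
//                 return None;
//             }
//             let next = self.next[head].load(Ordering::Relaxed);
//             match self.head.compare_exchange(
//                 head, next, Ordering::AcqRel, Ordering::Acquire)
//             {
//                 Ok(_) => return Some(head),
//                 Err(actual) => head = actual,
//             }
//         }
//     }
// }
// ```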
//
// Sleeping is done by using `tokio_executor::Park` implementations. This allows
// the user of the thread pool to customize the work that is performed to sleep.
// This is how injecting timers and other functionality into the thread pool is
// done.
//
// ## Notifying workers
//
// When there is work to be done, workers must be notified. However, notifying a
// worker requires cross-thread coordination. Ideally, a worker would only be
// notified when it is sleeping, but there is no way to know if a worker is
// sleeping without cross-thread communication.
//
// The two cases when a worker might need to be notified are:
//
// 1. A task is externally submitted to a worker via the mpsc channel.
// 2. A worker has a backlog of work and needs other workers to steal from it.
//
// In the first case, the worker will always be notified. However, it could be
// possible to avoid the notification if the mpsc channel contains two or more
// tasks *after* the task is submitted. In this case, we are able to assume
// that the worker has previously been notified.
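//
// A sketch of that proposed check, with a locked `VecDeque` standing in for
// the lock-free channel (this is the optimization described above, not what
// the code currently does):
//
// ```
// use std::collections::VecDeque;
// use std::sync::Mutex;
//
// struct Task;
//
// struct Inbound {
//     queue: Mutex<VecDeque<Task>>,
// }
//
// impl Inbound {
//     // Returns true when the submitter must signal the worker: only a
//     // task that finds the channel empty needs to notify.
//     fn submit(&self, task: Task) -> bool {
//         let mut queue = self.queue.lock().unwrap();
//         queue.push_back(task);
//         queue.len() == 1
//     }
// }
// ```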
//
// The second case is trickier. Currently, whenever a worker spawns a new future
// (pushing it onto its deque) or pops a future from its mpsc channel, it tries
// to notify a sleeping worker to wake up and start stealing. This is a lot of
// notification, and it **might** be possible to reduce it.
//
// Also, whenever a worker is woken up via a signal and it does find work, it,
// in turn, will try to wake up a new worker.
//
// [Treiber stack]: https://en.wikipedia.org/wiki/Treiber_Stack

#[doc(hidden)]
pub mod blocking;
mod builder;
mod callback;
mod config;
mod notifier;
pub mod park;
mod pool;
mod sender;
mod shutdown;
mod task;
mod thread_pool;
mod worker;

pub use blocking::{blocking, BlockingError};
pub use builder::Builder;
pub use sender::Sender;
pub use shutdown::Shutdown;
pub use thread_pool::{SpawnHandle, ThreadPool};
pub use worker::{Worker, WorkerId};