//! This module implements Fedimint's custom atomic broadcast abstraction. As
//! such, it is responsible for ordering serialized items in the form of byte
//! vectors. The broadcast is able to recover from a crash at any time via a
//! backup that it maintains in the server's [fedimint_core::db::Database]. In
//! addition, it stores the history of accepted items in the form of signed
//! session outcomes in the database in order to catch up fellow guardians
//! that have been offline for a prolonged period of time.
//!
//! Though the broadcast depends on [fedimint_core] for [fedimint_core::PeerId],
//! [fedimint_core::encoding::Encodable] and [fedimint_core::db::Database]
//! it implements no consensus logic specific to Fedimint, which we will refer
//! to as Fedimint Consensus going forward. To the broadcast, a consensus item
//! is merely a vector of bytes without any further structure.
//!
//! # The journey of a ConsensusItem
//!
//! Let us sketch the journey of a [fedimint_core::epoch::ConsensusItem] into a
//! signed session outcome.
//!
//! * The node that wants to order the item calls consensus_encode to
//! serialize it and sends the resulting serialization to its running atomic
//! broadcast instance via the mempool item sender (see the sketch after this
//! list).
//! * Every 250ms the broadcast's currently running session instance creates a
//! new batch from its mempool and attaches it to a unit in the form of a
//! UnitData::Batch. The size of a batch, and therefore of a serialization, is
//! limited to 10kB.
//! * The unit is then included in a [Message] and sent to the network layer via
//! the outgoing message sender.
//! * The network layer receives the message, serializes it via consensus_encode
//! and sends it to its peers, which in turn deserialize it via
//! consensus_decode and relay it to their broadcast instance via their
//! incoming message sender.
//! * The unit is added to the local subgraph of a common directed acyclic graph
//! of units generated cooperatively by all peers for every session.
//! * As the local subgraph grows, the units within it are ordered and so are
//! the attached batches. As soon as our batch is ordered, the broadcast
//! instance unpacks it and sends the serialization to Fedimint Consensus in
//! the form of an ordered item.
//! * Fedimint Consensus then deserializes the item and either accepts it
//! based on its current consensus state or discards it otherwise. Fedimint
//! Consensus transmits its decision to its broadcast instance via the
//! decision_sender and processes the next item.
//! * Assuming our item has been accepted, its deserialization is added to the
//! session outcome corresponding to the current session.
//! * Roughly every five minutes the session completes. Then the broadcast
//! creates a threshold signature for the session outcome's header and saves
//! both in the form of a signed session outcome in the local database.
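//!
//! The submission step might look roughly as follows. This is only a sketch:
//! `item` is assumed to implement [fedimint_core::encoding::Encodable], and
//! `mempool_item_sender` stands for the mempool item sender mentioned above.
//!
//! ```ignore
//! // Serialize the item into a byte vector via consensus_encode ...
//! let mut serialization = Vec::new();
//! item.consensus_encode(&mut serialization)
//!     .expect("writing to a byte vector cannot fail");
//!
//! // ... and hand it to the running broadcast instance for ordering.
//! mempool_item_sender.send(serialization).await?;
//! ```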
//!
//! # Interplay with Fedimint Consensus
//!
//! As an item is only recorded in a session outcome if it has been accepted,
//! the decision has to be consistent across all correct nodes in order for
//! them to create identical session outcomes for every session. We introduce
//! this complexity in order to prevent a critical DoS vector where a client
//! submits conflicting items, like a double spend of an ecash note for
//! example, to different peers. If Fedimint Consensus were not able to discard
//! the conflicting items in such a way that they do not become part of the
//! broadcast's history, all of those items would need to be maintained on disk
//! indefinitely.
//!
//! Therefore, it cannot be guaranteed that all broadcast instances return the
//! exact same stream of ordered items. However, if two correct peers obtain
//! two ordered items from their broadcast instances, they are guaranteed to
//! be in the same order. Furthermore, an ordered item is guaranteed to be
//! seen by all correct nodes if a correct peer accepts it. Those two
//! guarantees are sufficient to build consistent replicated state machines
//! like Fedimint Consensus on top of the broadcast. Such a state machine has
//! to accept an item if it changes the machine's state and should discard it
//! otherwise. Let us consider the case of an ecash note being double spent
//! via the items A and B while one peer is offline. First, item A is ordered
//! and all correct peers include the note as spent in their state. Therefore,
//! they also accept item A. Then, item B is ordered and all correct nodes
//! notice the double spend and make no changes to their state. Now they can
//! safely discard item B as it did not cause a state transition. When the
//! session completes, only item A is part of the corresponding session
//! outcome. When the offline peer comes back online, it downloads the session
//! outcome. Therefore, the recovering peer will only see item A but arrives
//! at the same state as its peers at the end of the session regardless.
//! However, it did so by processing one fewer ordered item and without
//! realizing that a double spend had occurred.
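//!
//! The accept-or-discard rule described above might be sketched as follows,
//! assuming a hypothetical `Decision` enum with `Accept` and `Discard`
//! variants, an `ordered_item_receiver` yielding ordered items, a `state`
//! with an `apply` helper that reports whether an item caused a state
//! transition, and the decision_sender mentioned above:
//!
//! ```ignore
//! while let Some(ordered_item) = ordered_item_receiver.recv().await {
//!     let decision = if state.apply(&ordered_item) {
//!         // The item caused a state transition, so it has to be accepted.
//!         Decision::Accept
//!     } else {
//!         // No state transition occurred, so the item can safely be
//!         // discarded without becoming part of the broadcast's history.
//!         Decision::Discard
//!     };
//!
//!     // Report the decision back to the broadcast instance.
//!     decision_sender.send(decision).await?;
//! }
//! ```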
pub mod backup;
pub mod data_provider;
pub mod finalization_handler;
pub mod keychain;
pub mod network;
pub mod spawner;
use aleph_bft::NodeIndex;
use fedimint_core::encoding::{Decodable, Encodable};
use fedimint_core::PeerId;
/// This keychain implements naive threshold schnorr signatures over secp256k1.
/// The broadcast uses this keychain to sign messages for peers and to create
/// the threshold signatures for the signed session outcome.
pub use keychain::Keychain;
use serde::{Deserialize, Serialize};
/// The majority of these messages need to be delivered to the intended
/// [Recipient] in order for the broadcast to make progress. However, the
/// broadcast does not assume a reliable network layer and implements all
/// necessary retry logic. Therefore, the caller can discard a message
/// immediately if its intended recipient is offline.
#[derive(Clone, Debug, Encodable, Decodable, Serialize, Deserialize)]
pub struct Message(Vec<u8>);
/// This enum defines the intended destination of a [Message].
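///
/// A minimal routing sketch, assuming a hypothetical `send_to(peer, message)`
/// network primitive and a list of all `peers`:
///
/// ```ignore
/// match recipient {
///     // Relay the message to every peer in the federation.
///     Recipient::Everyone => {
///         for peer in peers {
///             send_to(peer, message.clone());
///         }
///     }
///     // Deliver the message to a single peer only.
///     Recipient::Peer(peer) => send_to(peer, message),
/// }
/// ```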
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Recipient {
Everyone,
Peer(PeerId),
}
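/// Converts an AlephBFT [NodeIndex] into the corresponding Fedimint [PeerId].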
pub fn to_peer_id(node_index: NodeIndex) -> PeerId {
u16::try_from(usize::from(node_index))
.expect("The node index corresponds to a valid PeerId")
.into()
}
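/// Converts a Fedimint [PeerId] into the corresponding AlephBFT [NodeIndex].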
pub fn to_node_index(peer_id: PeerId) -> NodeIndex {
usize::from(u16::from(peer_id)).into()
}