Struct solana_runtime::accounts_db::AccountsDb
pub struct AccountsDb {
pub accounts_index: AccountsIndex<AccountInfo>,
pub accounts_hash_complete_one_epoch_old: RwLock<Slot>,
pub ancient_append_vec_offset: Option<i64>,
pub skip_initial_hash_calc: bool,
pub accounts_cache: AccountsCache,
pub next_id: AtomicAppendVecId,
pub shrink_candidate_slots: Mutex<HashMap<Slot, Arc<AccountStorageEntry>>>,
pub shrink_paths: RwLock<Option<Vec<PathBuf>>>,
pub thread_pool: ThreadPool,
pub thread_pool_clean: ThreadPool,
pub stats: AccountsStats,
pub cluster_type: Option<ClusterType>,
pub account_indexes: AccountSecondaryIndexes,
pub filler_account_suffix: Option<Pubkey>,
pub filler_accounts_per_slot: AtomicU64,
pub filler_account_slots_remaining: AtomicU64,
pub epoch_accounts_hash_manager: EpochAccountsHashManager,
/* private fields */
}
Fields
accounts_index: AccountsIndex<AccountInfo>
Keeps track of the index into AppendVec on a per-slot basis
accounts_hash_complete_one_epoch_old: RwLock<Slot>
slot that is one epoch older than the highest slot where accounts hash calculation has completed
ancient_append_vec_offset: Option<i64>
Some(offset) iff we want to squash old append vecs together into ‘ancient append vecs’. Some(offset) means that for slots up to (max_slot - (slots_per_epoch - ‘offset’)), put them in ancient append vecs.
skip_initial_hash_calc: bool
true iff we want to skip the initial hash calculation on startup
accounts_cache: AccountsCache
next_id: AtomicAppendVecId
distribute the accounts across storage lists
shrink_candidate_slots: Mutex<HashMap<Slot, Arc<AccountStorageEntry>>>
Set of shrinkable stores organized by map of slot to append_vec_id
shrink_paths: RwLock<Option<Vec<PathBuf>>>
thread_pool: ThreadPool
Thread pool used for par_iter
thread_pool_clean: ThreadPool
stats: AccountsStats
cluster_type: Option<ClusterType>
account_indexes: AccountSecondaryIndexes
filler_account_suffix: Option<Pubkey>
filler_accounts_per_slot: AtomicU64
number of filler accounts to add for each slot
filler_account_slots_remaining: AtomicU64
number of slots remaining where filler accounts should be added
epoch_accounts_hash_manager: EpochAccountsHashManager
the full accounts hash calculation as of a predetermined block height ‘N’, to be included in the bank hash at a predetermined block height ‘M’. The cadence is once per epoch: all nodes calculate a full accounts hash as of a known slot calculated using ‘N’. Some time later (to allow for slow calculation time), the bank hash at a slot calculated using ‘M’ includes the full accounts hash. Thus, the state of all accounts on a validator is known to be correct at least once per epoch.
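The cadence above can be pictured with a hypothetical helper that derives a calculation slot ‘N’ and an inclusion slot ‘M’ from the EpochSchedule. This is a sketch only: the 1/4 and 3/4 offsets are illustrative assumptions, not the constants EpochAccountsHashManager actually uses.

```rust
// Illustrative sketch only: deriving a per-epoch calculation slot ('N') and
// inclusion slot ('M') from the EpochSchedule. The real offsets are chosen
// internally by EpochAccountsHashManager; the fractions below are assumptions.
use solana_sdk::{clock::Slot, epoch_schedule::EpochSchedule};

fn illustrative_eah_slots(epoch_schedule: &EpochSchedule, epoch: u64) -> (Slot, Slot) {
    let first_slot = epoch_schedule.get_first_slot_in_epoch(epoch);
    let slots_in_epoch = epoch_schedule.get_slots_in_epoch(epoch);
    let calculation_slot_n = first_slot + slots_in_epoch / 4; // assumed offset
    let inclusion_slot_m = first_slot + slots_in_epoch * 3 / 4; // assumed offset
    (calculation_slot_n, inclusion_slot_m)
}
```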
Implementations
impl AccountsDb
pub fn notify_account_restore_from_snapshot(&self)
Notify the plugins of account data when AccountsDb is restored from a snapshot. The data is streamed in the reverse order of the slots so that an account is only streamed once. At a slot, if an account is updated multiple times, only the last write (with the highest write_version) is notified.
pub fn notify_account_at_accounts_update<P>( &self, slot: Slot, account: &AccountSharedData, txn_signature: &Option<&Signature>, pubkey: &Pubkey, write_version_producer: &mut P )where P: Iterator<Item = u64>,
impl AccountsDb
pub const ACCOUNTS_HASH_CACHE_DIR: &str = "accounts_hash_cache"
pub fn default_for_tests() -> Self
pub fn new_for_tests(paths: Vec<PathBuf>, cluster_type: &ClusterType) -> Self
pub fn new_for_tests_with_caching( paths: Vec<PathBuf>, cluster_type: &ClusterType ) -> Self
pub fn new_with_config( paths: Vec<PathBuf>, cluster_type: &ClusterType, account_indexes: AccountSecondaryIndexes, shrink_ratio: AccountShrinkThreshold, accounts_db_config: Option<AccountsDbConfig>, accounts_update_notifier: Option<AccountsUpdateNotifier>, exit: &Arc<AtomicBool> ) -> Self
pub fn set_shrink_paths(&self, paths: Vec<PathBuf>)
pub fn file_size(&self) -> u64
pub fn new_single_for_tests() -> Self
pub fn new_single_for_tests_with_caching() -> Self
pub fn new_single_for_tests_with_secondary_indexes( secondary_indexes: AccountSecondaryIndexes ) -> Self
pub fn expected_cluster_type(&self) -> ClusterType
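As a hedged sketch of how the test constructors above combine with the store/load methods documented further below, the hypothetical helper here creates a single-path AccountsDb, stores one account, roots the slot, and loads the account back. It uses only the dev/test helpers listed on this page; the account values are illustrative.

```rust
// Minimal sketch, test/bench usage only: create an AccountsDb, store one
// account, root the slot, and load it back.
use solana_runtime::{accounts_db::AccountsDb, ancestors::Ancestors};
use solana_sdk::{
    account::{AccountSharedData, ReadableAccount},
    pubkey::Pubkey,
};

fn store_and_load_roundtrip() {
    let db = AccountsDb::new_single_for_tests();

    let slot = 0;
    let pubkey = Pubkey::new_unique();
    let account = AccountSharedData::new(42, 0, &Pubkey::default());

    // `store_uncached` bypasses the write cache and is only meant for tests.
    db.store_uncached(slot, &[(&pubkey, &account)]);
    db.add_root(slot);

    // `load_with_fixed_root` returns None for zero-lamport accounts.
    let ancestors = Ancestors::from(vec![slot]);
    let loaded = db.load_with_fixed_root(&ancestors, &pubkey);
    assert_eq!(loaded.map(|(acct, _slot)| acct.lamports()), Some(42));
}
```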
pub fn notify_accounts_hash_calculated_complete( &self, completed_slot: Slot, epoch_schedule: &EpochSchedule )
Hash calculation is complete as of ‘completed_slot’, so any process that wants to take action on really old slots can now proceed up to ‘completed_slot’ minus slots-per-epoch.
pub fn clean_accounts_for_tests(&self)
Call clean_accounts() with the common parameters that tests/benches use.
pub fn clean_accounts( &self, max_clean_root_inclusive: Option<Slot>, is_startup: bool, last_full_snapshot_slot: Option<Slot> )
pub fn shrink_candidate_slots(&self) -> usize
pub fn shrink_all_slots( &self, is_startup: bool, last_full_snapshot_slot: Option<Slot> )
pub fn scan_accounts<F>( &self, ancestors: &Ancestors, bank_id: BankId, scan_func: F, config: &ScanConfig ) -> ScanResult<()>where F: FnMut(Option<(&Pubkey, AccountSharedData, Slot)>),
pub fn unchecked_scan_accounts<F>( &self, metric_name: &'static str, ancestors: &Ancestors, scan_func: F, config: &ScanConfig )where F: FnMut(&Pubkey, LoadedAccount<'_>, Slot),
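The scan methods drive a caller-supplied closure over every visible account. Below is a hedged sketch, a hypothetical helper not taken from the crate, that totals lamports across the accounts visible from a set of ancestors using scan_accounts.

```rust
// Hedged sketch: total the lamports of every account visible from `ancestors`.
// `bank_id` is the BankId (a u64 alias) of the scanning bank.
use solana_runtime::{
    accounts_db::AccountsDb,
    accounts_index::{ScanConfig, ScanResult},
    ancestors::Ancestors,
};
use solana_sdk::account::ReadableAccount;

fn total_visible_lamports(
    db: &AccountsDb,
    ancestors: &Ancestors,
    bank_id: u64,
) -> ScanResult<u64> {
    let mut total = 0u64;
    db.scan_accounts(
        ancestors,
        bank_id,
        |item| {
            // Entries that do not resolve to an account on this fork arrive as None.
            if let Some((_pubkey, account, _slot)) = item {
                total = total.saturating_add(account.lamports());
            }
        },
        &ScanConfig::default(),
    )?;
    Ok(total)
}
```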
pub fn range_scan_accounts<F, R>( &self, metric_name: &'static str, ancestors: &Ancestors, range: R, config: &ScanConfig, scan_func: F ) where F: FnMut(Option<(&Pubkey, AccountSharedData, Slot)>), R: RangeBounds<Pubkey> + Debug,
Only guaranteed to be safe when called from rent collection
pub fn index_scan_accounts<F>( &self, ancestors: &Ancestors, bank_id: BankId, index_key: IndexKey, scan_func: F, config: &ScanConfig ) -> ScanResult<bool>where F: FnMut(Option<(&Pubkey, AccountSharedData, Slot)>),
pub fn scan_account_storage<R, B>( &self, slot: Slot, cache_map_func: impl Fn(LoadedAccount<'_>) -> Option<R> + Sync, storage_scan_func: impl Fn(&B, LoadedAccount<'_>) + Sync ) -> ScanStorageResult<R, B> where R: Send, B: Send + Default + Sync,
Scan a specific slot through all the account storage
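The two closures cover the two places a slot's accounts can live: cache_map_func maps each account while the slot is still in the write cache, and storage_scan_func folds accounts into a shared, Default-constructed accumulator once the slot has been flushed to storage. A hedged sketch follows; the Mutex<Vec<_>> accumulator and the helper name are illustrative choices.

```rust
// Hedged sketch: collect the pubkeys stored in `slot`, whether the slot is
// still in the accounts write cache or already flushed to append-vec storage.
use std::sync::Mutex;
use solana_runtime::accounts_db::{AccountsDb, ScanStorageResult};
use solana_sdk::{clock::Slot, pubkey::Pubkey};

fn pubkeys_in_slot(db: &AccountsDb, slot: Slot) -> Vec<Pubkey> {
    let result = db.scan_account_storage(
        slot,
        // Cache path: map each cached account to its pubkey.
        |loaded| Some(*loaded.pubkey()),
        // Storage path: accumulate pubkeys into the shared accumulator.
        |accum: &Mutex<Vec<Pubkey>>, loaded| {
            accum.lock().unwrap().push(*loaded.pubkey());
        },
    );
    match result {
        ScanStorageResult::Cached(pubkeys) => pubkeys,
        ScanStorageResult::Stored(accum) => accum.into_inner().unwrap(),
    }
}
```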
pub fn insert_default_bank_hash(&self, slot: Slot, parent_slot: Slot)
Insert a new bank hash for slot
The new bank hash is empty/default except for the slot. This fn is called when creating a new bank from parent. The bank hash for this slot is updated with real values later.
pub fn load( &self, ancestors: &Ancestors, pubkey: &Pubkey, load_hint: LoadHint ) -> Option<(AccountSharedData, Slot)>
pub fn load_account_into_read_cache(&self, ancestors: &Ancestors, pubkey: &Pubkey)
pub fn load_with_fixed_root( &self, ancestors: &Ancestors, pubkey: &Pubkey ) -> Option<(AccountSharedData, Slot)>
note this returns None for accounts with zero lamports
pub fn flush_read_only_cache_for_tests(&self)
Remove all entries from the read-only accounts cache; useful for benches/tests.
pub fn load_account_hash( &self, ancestors: &Ancestors, pubkey: &Pubkey, max_root: Option<Slot>, load_hint: LoadHint ) -> Option<Hash>
pub fn create_drop_bank_callback( &self, pruned_banks_sender: DroppedSlotsSender ) -> SendDroppedBankCallback
pub fn purge_slot(&self, slot: Slot, bank_id: BankId, is_serialized_with_abs: bool)
This should only be called after Bank::drop() runs in bank.rs; see the BANK_DROP_SAFETY comment there for more explanation.
- is_serialized_with_abs - indicates whether this call runs sequentially with all other accounts_db-relevant calls, such as shrinking, purging, etc., in the account background service.
pub fn remove_unrooted_slots(&self, remove_slots: &[(Slot, BankId)])
pub fn hash_account<T: ReadableAccount>( slot: Slot, account: &T, pubkey: &Pubkey, include_slot: IncludeSlotInHash ) -> Hash
pub fn mark_slot_frozen(&self, slot: Slot)
pub fn expire_old_recycle_stores(&self)
pub fn flush_accounts_cache( &self, force_flush: bool, requested_flush_root: Option<Slot> )
pub fn find_unskipped_slot( &self, slot: Slot, ancestors: Option<&Ancestors> ) -> Option<Slot>
find slot >= ‘slot’ which is a root or in ‘ancestors’
pub fn checked_iterative_sum_for_capitalization( total_cap: u64, new_cap: u64 ) -> u64
pub fn checked_sum_for_capitalization<T: Iterator<Item = u64>>( balances: T ) -> u64
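A hedged illustration of the checked capitalization helper; the balances are arbitrary.

```rust
use solana_runtime::accounts_db::AccountsDb;

fn capitalization_example() {
    // Sum lamport balances for capitalization while guarding against u64 overflow.
    let balances = vec![1_000u64, 2_000, 3_000];
    let capitalization = AccountsDb::checked_sum_for_capitalization(balances.into_iter());
    assert_eq!(capitalization, 6_000);
}
```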
pub fn calculate_accounts_hash_from_index( &self, max_slot: Slot, config: &CalcAccountsHashConfig<'_> ) -> Result<(AccountsHash, u64), BankHashVerificationError>
pub fn get_accounts_hash(&self, slot: Slot) -> AccountsHash
pub fn update_accounts_hash_for_tests( &self, slot: Slot, ancestors: &Ancestors, debug_verify: bool, is_startup: bool ) -> (AccountsHash, u64)
pub fn update_accounts_hash( &self, data_source: CalcAccountsHashDataSource, debug_verify: bool, slot: Slot, ancestors: &Ancestors, expected_capitalization: Option<u64>, epoch_schedule: &EpochSchedule, rent_collector: &RentCollector, is_startup: bool ) -> (AccountsHash, u64)
pub fn calculate_accounts_hash_from_storages( &self, config: &CalcAccountsHashConfig<'_>, storages: &SortedStorages<'_>, stats: HashStats ) -> Result<(AccountsHash, u64), BankHashVerificationError>
pub fn calculate_incremental_accounts_hash( &self, config: &CalcAccountsHashConfig<'_>, storages: &SortedStorages<'_>, base_slot: Slot, stats: HashStats ) -> Result<(AccountsHash, u64), BankHashVerificationError>
Calculate the incremental accounts hash
This calculation is intended to be used by incremental snapshots, and thus differs from a “full” accounts hash in a few ways:
- Zero-lamport accounts are included in the hash because zero-lamport accounts are also included in the incremental snapshot. This ensures reconstructing the AccountsDb is still correct when using this incremental accounts hash.
- storages must be greater than base_slot. This follows the same requirements as incremental snapshots.
pub fn verify_bank_hash_and_lamports( &self, slot: Slot, ancestors: &Ancestors, total_lamports: u64, test_hash_calculation: bool, epoch_schedule: &EpochSchedule, rent_collector: &RentCollector, ignore_mismatch: bool, store_hash_raw_data_for_debug: bool, use_bg_thread_pool: bool ) -> Result<(), BankHashVerificationError>
Only called from startup or test code.
pub fn calculate_accounts_delta_hash(&self, slot: Slot) -> AccountsDeltaHash
Calculate accounts delta hash for slot
As part of calculating the accounts delta hash, get a list of accounts modified this slot (aka dirty pubkeys) and add them to self.uncleaned_pubkeys for future cleaning.
pub fn get_bank_hash_info(&self, slot: Slot) -> Option<BankHashInfo>
Get the bank hash info for slot
pub fn set_bank_hash_info_from_snapshot( &self, slot: Slot, bank_hash_info: BankHashInfo )
When reconstructing AccountsDb from a snapshot, insert the bank_hash_info into the internal bank hashes map.
This fn is only called when loading from a snapshot, which means AccountsDb is new and its bank hashes map is unpopulated. Therefore, a bank hash must not already exist at slot [1].
[1] Slot 0 is a special case, however. When a new AccountsDb is created, such as when loading from a snapshot, the bank hashes map is populated with a default entry at slot 0. It is valid to have a snapshot at slot 0, so it must be handled accordingly.
pub fn store_cached<'a, T: ReadableAccount + Sync + ZeroLamport + 'a>( &self, accounts: impl StorableAccounts<'a, T>, txn_signatures: Option<&'a [Option<&'a Signature>]> )
pub fn store_uncached( &self, slot: Slot, accounts: &[(&Pubkey, &AccountSharedData)] )
Store the account update. Only called by tests.
pub fn add_root(&self, slot: Slot) -> AccountsAddRootTiming
pub fn get_snapshot_storages( &self, requested_slots: impl RangeBounds<Slot> + Sync, ancestors: Option<&Ancestors> ) -> (Vec<Arc<AccountStorageEntry>>, Vec<Slot>)
Get storages to use for snapshots, for the requested slots
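A hedged sketch of the call shape, using a RangeBounds<Slot> to select slots at or below a snapshot slot; the helper name is hypothetical.

```rust
// Hedged sketch: gather the storages a snapshot-taking path might use for all
// slots at or below `snapshot_slot`.
use std::sync::Arc;
use solana_runtime::accounts_db::{AccountStorageEntry, AccountsDb};
use solana_sdk::clock::Slot;

fn storages_for_snapshot(
    db: &AccountsDb,
    snapshot_slot: Slot,
) -> (Vec<Arc<AccountStorageEntry>>, Vec<Slot>) {
    // `..=snapshot_slot` is a RangeBounds<Slot>. Passing `None` for ancestors
    // is assumed here to restrict the selection to rooted slots.
    db.get_snapshot_storages(..=snapshot_slot, None)
}
```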
pub fn is_filler_account_helper( pubkey: &Pubkey, filler_account_suffix: Option<&Pubkey> ) -> bool
pub fn is_filler_account(&self, pubkey: &Pubkey) -> bool
true if ‘pubkey’ is a filler account
pub fn filler_accounts_enabled(&self) -> bool
true if it is possible that there are filler accounts present
pub fn maybe_add_filler_accounts( &self, epoch_schedule: &EpochSchedule, slot: Slot )
filler accounts are space-holding accounts which are ignored by hash calculations and rent. They are designed to allow a validator to run against a network successfully while simulating having many more accounts present. All filler accounts share a common pubkey suffix. The suffix is randomly generated per validator on startup. The filler accounts are added to each slot in the snapshot after index generation. The accounts added in a slot are set up to have pubkeys such that rent will be collected from them before (or when?) their slot becomes an epoch old. Thus, the filler accounts are rewritten by rent and the old slot can be thrown away successfully.