# Aptos Indexer

Tails the blockchain's transactions and pushes them into a Postgres DB.

Tails the node using the REST interface/client, and maintains state for each registered `TransactionProcessor`. On startup, by default, it will retry any previously errored versions for each registered processor.

When developing your own, ensure each `TransactionProcessor` is idempotent, and that being called with the same input won't result in an error if some or all of the processing had previously been completed.
For an example invocation, try running the indexer with `--help` to get more details.
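A minimal sketch of an invocation, assuming a local Postgres database and a devnet fullnode. The `--pg-uri` and `--node-url` flag names here are illustrative assumptions (only `--batch-size` is mentioned elsewhere in this README), so confirm the real names against `--help`:

```shell
# Hypothetical flag names -- run `cargo run -- --help` for the authoritative list.
cargo run -- \
  --pg-uri "postgresql://localhost/postgres" \
  --node-url "https://fullnode.devnet.aptoslabs.com" \
  --batch-size 100
```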
## Requirements

## Local Development
### Installation Guide (for Apple silicon)

- `brew install libpq` (this is a Postgres C API library). Also perform all `export` commands post-installation.
- `brew install postgres`
- Start Postgres: `pg_ctl -D /opt/homebrew/var/postgres start` or `brew services start postgresql`
- `/opt/homebrew/bin/createuser -s postgres`
- Ensure you're able to run: `psql postgres`
- `cargo install diesel_cli --no-default-features --features postgres`
- `diesel migration run --database-url postgresql://localhost/postgres`
- Start the indexer (run `cargo run -- --help` to see the available options)
### Optional PgAdmin4

- Complete the Installation Guide above
- `brew install --cask pgadmin4`
- Open PgAdmin4
- Create a master password
- Right click Servers > `Register` > `Server`
- Enter the information in the registration modal:
    - General:
        - Name: Indexer
    - Connection:
        - Hostname / Address: 127.0.0.1
        - Port: 5432
        - Maintenance Database: postgres
        - Username: postgres
- Save
### Notes

- Diesel uses the `DATABASE_URL` env var to connect to the database.
- Diesel CLI can be installed via cargo, e.g., `cargo install diesel_cli --no-default-features --features postgres`.
- `diesel migration run` sets up the database and runs all available migrations.
- Aptos tests use the `INDEXER_DATABASE_URL` env var. It needs to be set for the relevant tests to run.
- Postgres can be installed and run via brew.
## Adding new tables / Updating tables with Diesel

- `diesel migration generate <your_migration_name>` generates a new folder containing `up.sql` + `down.sql` for your migration.
- `diesel migration run` applies the missing migrations. This will re-generate `schema.rs` as required.
- `diesel migration redo` rolls back and re-applies the last migration.
- `diesel database reset` drops the existing database and reruns all the migrations.
- You can find more information in the Diesel documentation.
## General Flow

The `Tailer` is the central glue that holds all the other components together. It's responsible for the following:

- Maintaining processor state. The `Tailer` keeps a record of the `Result` of each `TransactionProcessor`'s output for each transaction version (i.e., each transaction). If a `TransactionProcessor` returns a `Result::Err()` for a transaction, the `Tailer` will mark that version as failed in the database (along with the stringified error text) and continue on.
- Retrying failed versions for each `TransactionProcessor`. By default, when a `Tailer` is started, it will re-fetch the versions for all `TransactionProcessor`s which have failed, and attempt to re-process them. The `Result::Ok`/`Result::Err` returned from `TransactionProcessor::process_version` replaces the state in the DB for the given `TransactionProcessor`/version combination.
- Piping new transactions from the `Fetcher` into each `TransactionProcessor` that was registered with it. Each `TransactionProcessor` gets its own copy, in its own `tokio::Task`, for each version. These are done in batches, the size of which is specifiable via `--batch-size`. For other tunable parameters, try `cargo run -- --help`.
The `Fetcher` is responsible for fetching transactions from a node in one of two ways:

- One at a time (used by the `Tailer` when retrying previously errored transactions).
- In bulk, with an internal buffer. Although the `Tailer` only fetches one transaction at a time from the `Fetcher`, internally the `Fetcher` will fetch from the `/transactions` endpoint, which returns potentially hundreds of transactions at a time. This is much more efficient than making hundreds of individual HTTP calls. In the future, when there is a streaming Node API, that would be the optimal source of transactions.
All the above comes free 'out of the box'. The `TransactionProcessor` is where everything becomes useful for those writing their own indexers. The trait has only one main method that needs to be implemented: `process_transaction`. You can do anything you want in a `TransactionProcessor`: write data to Postgres tables like the `DefaultProcessor` does, make RESTful HTTP calls to some other service, submit transactions of its own to the chain, anything at all. There is just one caveat: transaction processing is guaranteed at least once. It's possible for a given `TransactionProcessor` to receive the same transaction more than once, so your implementation must be idempotent.
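To illustrate the idempotency requirement, here is a minimal, self-contained sketch. The `Transaction` struct and trait signature below are simplified stand-ins for the real types (see `./src/indexer/transaction_processor.rs` for the actual trait); the point is the upsert-by-version pattern, which makes re-delivery of the same transaction harmless:

```rust
use std::collections::HashMap;

// Simplified stand-ins for the real indexer types.
struct Transaction {
    version: u64,
    payload: String,
}

trait TransactionProcessor {
    fn process_transaction(&mut self, txn: &Transaction) -> Result<(), String>;
}

// An idempotent processor: it upserts keyed by version, so processing the
// same transaction twice (at-least-once delivery) never errors or duplicates.
struct UpsertProcessor {
    // Stand-in for a Postgres table with `version` as the primary key.
    store: HashMap<u64, String>,
}

impl TransactionProcessor for UpsertProcessor {
    fn process_transaction(&mut self, txn: &Transaction) -> Result<(), String> {
        // The Postgres equivalent would be INSERT ... ON CONFLICT (version)
        // DO UPDATE; here, a plain HashMap upsert.
        self.store.insert(txn.version, txn.payload.clone());
        Ok(())
    }
}

fn main() {
    let mut p = UpsertProcessor { store: HashMap::new() };
    let txn = Transaction { version: 7, payload: "coin_transfer".to_string() };
    // Delivering the same version twice must succeed both times...
    assert!(p.process_transaction(&txn).is_ok());
    assert!(p.process_transaction(&txn).is_ok());
    // ...and leave exactly one row behind.
    assert_eq!(p.store.len(), 1);
    println!("version {} stored exactly once", txn.version);
}
```

A non-idempotent processor (for example, one doing a bare `INSERT` that errors on a duplicate key) would fail when the `Tailer` retries a version, so prefer upserts or existence checks.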
To implement your own `TransactionProcessor`, check out the documentation and source code here: `./src/indexer/transaction_processor.rs`.
## Miscellaneous

1. If you run into a build error about `libpq`, first make sure you have `postgres` and `libpq` installed via `homebrew`; see the installation guide above for more details. The build is complaining about the `libpq` library, a Postgres C API library which Diesel needs to run; more on this issue here.
2. Postgresql Mac M1 installation guide
3. Stop postgresql: `brew services stop postgresql`
4. Since homebrew installs packages in `/opt/homebrew/bin/postgres`, your `pg_hba.conf` should be in `/opt/homebrew/var/postgres/` for Apple Silicon users.
5. Likewise, your `postmaster.pid` should be retrievable via `cat /opt/homebrew/var/postgres/postmaster.pid`. Sometimes you may have to remove this file if you are unable to start your server; if you hit such an error, run `brew services restart postgresql`.
6. Alias for starting testnet (put this in `~/.zshrc`).
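The original alias command was not preserved here, so treat the following as a hypothetical example of what such a `~/.zshrc` entry might look like; adjust the working directory and flags to match however you start your local testnet:

```shell
# Hypothetical alias -- replace the command with your actual local-testnet invocation.
alias testnet='cargo run -p aptos -- node run-local-testnet --with-faucet'
```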
Then run `source ~/.zshrc`, and start testnet by running `testnet` in your terminal.