DIY Eclipse Jump Starter Kit - Pure and Undiluted

Not too many words, but a lot of useful stuff.
Intended for Hive API node operators, witnesses, and developers.

Except for the video, which is for everyone:

[video]

Cool, huh?

Services

API node

https://api.openhive.network
It currently runs hived v1.24.6 and hivemind at the latest develop.
A lot of fixes and optimizations have been introduced recently, so I try to keep it up to date.
During maintenance it falls back to https://api.hive.blog
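
To check that the node responds, a quick JSON-RPC call will do; for example:

  curl -s --data '{"jsonrpc":"2.0","method":"condenser_api.get_version","params":[],"id":1}' https://api.openhive.network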

Seed node

hived v1.24.6 listens on gtg.openhive.network:2001
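
To use it as a seed, point your node at it in config.ini:

  p2p-seed-node = gtg.openhive.network:2001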

Stuff for download

TL;DR https://gtg.openhive.network/get

Binaries

./get/bin contains hived and cli_wallet binaries built on Ubuntu 18.04 LTS
Right now it’s: v1.24.6
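
A quick way to grab them (exact file names under ./get/bin may differ, check the listing first):

  wget https://gtg.openhive.network/get/bin/hived
  chmod +x hived
  ./hived --help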

Blocks

./get/blockchain
As usual, the block_log file, roughly 300GB and counting.
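
It's big, so a resumable download helps; assuming the usual file name:

  wget -c https://gtg.openhive.network/get/blockchain/block_log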

Snapshots

./get/snapshot/api/ contains a relatively recent snapshot of the API node with all the fancy plugins.
There’s also an example-api-config.ini file there.
The snapshot is compressed with lbzip2: 280GB

lbzip2 is a free, multi-threaded compression utility that supports the bzip2 file format.

To decompress, you can simply run it through something like: lbzip2 -dc | tar xv
The uncompressed snapshot: 413GB
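
Put together, you can stream it straight into place (the archive name here is a guess, check the listing for the actual one):

  cd /path/to/datadir
  wget -qO- https://gtg.openhive.network/get/snapshot/api/snapshot.tar.bz2 | lbzip2 -dc | tar xv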

To use snapshots:

  • Get block_log; it can be bigger, but not smaller, than the one used when the snapshot was made.
  • Get a reference config file and adjust it to your needs; make sure your changes don’t alter the state (for example, by enabling different plugins).
  • Get a binary compatible with the snapshot (a newer one is OK only when no replay is required).
  • Run hived with --load-snapshot name, assuming the snapshot is stored in snapshot/name (sketch below).
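
A minimal sketch of the last step, with placeholder paths:

  # snapshot unpacked to /path/to/datadir/snapshot/name
  hived --data-dir=/path/to/datadir --load-snapshot name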

hived API node runtime: 722GB (incl. shm 18GB, excl. snapshot)

Hivemind database dump

./get/hivemind/ contains a relatively recent dump of the Hivemind database: 50GB
I use self-describing file names such as:
hivemind-20201105-294e2411-Fc.dump
Take your best guess.
The custom format is compressed by default and you can use pg_restore with -j 6 (recommended) to restore the database in parallel.
After restoring, make sure you run the db_upgrade script.
When restored, the database can take 600-700GB
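
A minimal restore sketch, assuming a fresh database named hive (the name is up to you):

  createdb hive
  pg_restore -j 6 -d hive hivemind-20201105-294e2411-Fc.dump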

All resources are offered AS IS. Obviously.

From zero to hero

By using the above resources, you can get a fully featured API node running in one working day.

More to come

Stay tuned.
