Proposal
The proposal to fund this project is now funded. Many thanks to everyone who voted for it.
You can vote for it here:
I missed last week's update as I was busy setting up the dev environment for the new server and documenting how to set up HAF Plug & Play from scratch.
This update's pull request: https://github.com/imwatsi/haf-plug-play/pull/2
I have WIP code for the following:
- `follow` plug endpoint expansion
- `community` plug functions and endpoints
- subscription logic for apps using HAF Plug & Play
I will push code when it is stable, and include it in the next update.
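To give a rough idea of what this will look like from an app's point of view, here is a purely illustrative sketch of calling a `follow` plug endpoint over JSON-RPC with Python. The method name, parameters, host and port are placeholders made up for the example; the actual interface will be defined by the code once it is pushed.

```
import requests

# Hypothetical JSON-RPC call to a follow plug endpoint.
# The method name, parameters and server address below are
# placeholders, not the final HAF Plug & Play API.
payload = {
    "jsonrpc": "2.0",
    "method": "follow.get_followers",  # placeholder method name
    "params": {"account": "imwatsi"},
    "id": 1,
}

resp = requests.post("http://localhost:8080", json=payload, timeout=10)
print(resp.json())
```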
I also added detailed setup instructions to the README, as seen below:
Install dependencies
```
apt-get install -y \
    libboost-chrono-dev \
    libboost-context-dev \
    libboost-coroutine-dev \
    libboost-date-time-dev \
    libboost-filesystem-dev \
    libboost-iostreams-dev \
    libboost-locale-dev \
    libboost-program-options-dev \
    libboost-serialization-dev \
    libboost-signals-dev \
    libboost-system-dev \
    libboost-test-dev \
    libboost-thread-dev

apt install python3 python3-pip postgresql libpq-dev
apt install postgresql-server-dev-all
```
# Setup PostgreSQL
Once you have your PostgreSQL installation, set up username and password access. For example:
```
sudo -u postgres psql template1
ALTER USER postgres with encrypted password 'your_password';
\q #quit psql
sudo systemctl restart postgresql.service
```

Edit the file `/etc/postgresql/12/main/pg_hba.conf` so that the postgres user uses MD5 authentication:

```
local   all   postgres   md5
```

After that, create the `haf` database:

```
psql -U postgres
CREATE DATABASE haf;
```
Give PostgreSQL permissions
```
CREATE ROLE hived LOGIN PASSWORD 'hivedpass' INHERIT IN ROLE hived_group;
CREATE ROLE application LOGIN PASSWORD 'applicationpass' INHERIT IN ROLE hive_applications_group;
```
Build hived and configure the node
```
git clone https://gitlab.syncad.com/hive/hive.git
pushd hive
git checkout develop
git submodule update --init --recursive
popd

mkdir build_haf_sql_serializer
pushd build_haf_sql_serializer
cmake -DCMAKE_BUILD_TYPE=Release ../hive
make -j$(nproc)

pushd programs/hived
./hived -d data --dump-config
```
Install the fork_manager
```
git clone https://gitlab.syncad.com/hive/psql_tools.git
cd psql_tools
cmake .
make extension.hive_fork_manager
make install
```
Add sql_serializer params to the hived config
```
nano data/config.ini

# ADD THE FOLLOWING:
plugin = sql_serializer
psql-url = dbname=haf user=postgres password=your_password hostaddr=127.0.0.1 port=5432
```
Install HAF Plug & Play
```
git clone https://github.com/imwatsi/haf-plug-play.git
cd haf-plug-play
pip3 install -e .
```
Run the HAF Plug & Play setup script
```
cd haf_plug_play/database
python3 haf_sync.py
```
Download block log for replay
```
cd build_haf_sql_serializer/programs/hived/data
mkdir -p blockchain
wget -O blockchain/block_log https://gtg.openhive.network/get/blockchain/block_log
```
Run hived and sync to a specific block height
```
cd build_haf_sql_serializer/programs/hived
./hived -d data --replay-blockchain --stop-replay-at-block 5000000 --exit-after-replay
```
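Once the replay finishes, a quick sanity check confirms that the sql_serializer actually wrote data into the database. The sketch below assumes `psycopg2` is installed and that your HAF version exposes a `hive.blocks` table; adjust the password and table name to match your setup.

```
import psycopg2

# Connect to the HAF database created earlier (use the password
# you set for the postgres user).
conn = psycopg2.connect(
    dbname="haf",
    user="postgres",
    password="your_password",
    host="127.0.0.1",
    port=5432,
)

with conn, conn.cursor() as cur:
    # hive.blocks is populated by the sql_serializer during replay;
    # a non-zero count means blocks were written.
    cur.execute("SELECT count(*) FROM hive.blocks;")
    print("blocks stored:", cur.fetchone()[0])

conn.close()
```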
Next week's scope
In the coming week, I plan to work on expanding the endpoints for the `follow` plug, finishing up the functions for the `community` plug, as well as the subscription logic for apps using HAF Plug & Play.
I run a Hive witness node:
- Witness name: imwatsi
- HiveSigner