1st update of 2024: Change notices for HAF app developers

Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last report. Before beginning this one, I looked back at that report, written more than a month ago, to see what’s changed since then. My first thought was “where did all the time go?”. We did have a holiday period, but I worked through most of it. Sometimes the best time to get work done is when no one else is around, because the computers run faster and there are fewer people to say “no” :-)

Anyways, as I reviewed the merge request history in the repos, it turned out quite a lot was done, to the point where a simple progress report would just run too long. So instead, I’ve decided to focus this report on the issues that will most directly affect Hive app developers: we are close to releasing new versions of everything, so developers need to prepare their apps for various changes. That means I won’t be discussing any ongoing work in this particular post except for API-related work.

HAF (Hive Application Framework)

We did a major review and overhaul of the scripts used to install and uninstall HAF apps, creating a common methodology for the process. This was actually quite a lot of work, and even now I’m not sure if we got everything “perfect”, but it is much better than before.

Operation ids are no longer monotonic

Operation ids don’t exist as such in the blockchain itself. They are arbitrary “handles” we assign to operations to uniquely identify them.

Previously, operation ids were created sequentially by HAF as new operations were processed, but they could potentially vary across HAF nodes that replayed at different times and hence saw different forks (the operation id counter wasn’t reverted back to earlier values after a fork). Having the ids vary could be slightly troublesome if an app switched from one node to another and assumed the operation ids were the same on both, so we decided this needed to be fixed.

One option would have been to fix the sql_serializer to revert the counter after forks, but for potential future efficiency reasons we decided instead to build operation ids from the block_number, the operation_position_in_block, and the operation_type. The resulting ids keep operations in the same “total order” as the previous implementation, but the ids no longer increase sequentially. However, they are now assigned deterministically, allowing for full inter-node compatibility of operation ids.
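
As a rough illustration of the idea, here is a minimal Python sketch of how such a composite id can be packed. The field widths below are hypothetical and not necessarily the layout HAF actually uses; the point is only to show why the ids are deterministic and preserve the original total order even though they no longer increase by one.

```python
# Hypothetical packing of an operation id from its block number, its position
# within the block, and its operation type. The real HAF encoding may use
# different field widths; this only illustrates the ordering property.

OP_POS_BITS = 24   # assumed width for operation position within a block
OP_TYPE_BITS = 8   # assumed width for operation type

def make_operation_id(block_num: int, op_pos_in_block: int, op_type: int) -> int:
    """Pack the three fields into a single integer id (block number in the high bits)."""
    return (
        (block_num << (OP_POS_BITS + OP_TYPE_BITS))
        | (op_pos_in_block << OP_TYPE_BITS)
        | op_type
    )

# Two operations in consecutive blocks: the ids jump by far more than 1,
# but sorting by id still sorts by (block_num, op_pos_in_block), i.e. the
# same total order as the old sequential ids.
a = make_operation_id(block_num=80_000_000, op_pos_in_block=0, op_type=2)
b = make_operation_id(block_num=80_000_001, op_pos_in_block=0, op_type=0)
assert a < b
```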

The same change was made to the hived account_history plugin, so operation ids will also be consistent between the account history plugin and HAFAH (which also simplifies future testing).

In practice, I don’t think any apps relied on operation ids being monotonic, but if any do, they will need to adapt to this change.

Role updates (roles are essentially database privilege levels)

HAF apps typically create two roles: an owner role (which creates the app’s schema and writes to its tables) and a user role (which is used by the app’s API server to read the app’s tables during execution of API queries).

There are also two “admin” level roles: haf_admin (a super user who manages the haf server itself) and haf_app_admin (a user who can install haf apps).

We updated HAF and all the HAF apps to eliminate the haf_app_admin role in favor of haf_admin as I decided it was too fine a distinction for the extra complexity it entailed.
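
For readers less familiar with the pattern, here is a rough psycopg2 sketch of the owner-role/user-role split described above. The role and schema names (myapp_owner, myapp_user, myapp) and the connection parameters are hypothetical, and real HAF apps set this up through their install scripts rather than ad-hoc code like this.

```python
# Illustrative only: the two-role layout a typical HAF app uses.
# Role, schema, and connection names are hypothetical placeholders.
import psycopg2

with psycopg2.connect("dbname=haf_block_log user=haf_admin") as conn:
    with conn.cursor() as cur:
        # Owner role: creates the app's schema and writes to its tables.
        cur.execute("CREATE ROLE myapp_owner LOGIN")
        cur.execute("CREATE SCHEMA myapp AUTHORIZATION myapp_owner")

        # User role: read-only access, used by the app's API server.
        cur.execute("CREATE ROLE myapp_user LOGIN")
        cur.execute("GRANT USAGE ON SCHEMA myapp TO myapp_user")
        cur.execute(
            "ALTER DEFAULT PRIVILEGES FOR ROLE myapp_owner IN SCHEMA myapp "
            "GRANT SELECT ON TABLES TO myapp_user"
        )
```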

Updates to how apps report their status during massive sync

This weekend I changed the method a HAF app uses to update its current block number when it is in massive sync mode (i.e. when it is processing a bunch of old blocks in order to catch up to the blockchain’s head block). I think the new methodology is much simpler and less likely to lead to errors during app creation. I also added docs discussing how to employ the new calls: hive.app_get/set_current_block_num.
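
As a sketch of how an app’s sync loop might use these calls: the exact signatures may differ from what I show below (check the HAF docs), and the 'myapp' context name, batch size, and head block value are just placeholders.

```python
# Sketch of a massive-sync loop reporting progress via the new calls.
# Assumes hive.app_get_current_block_num(context) and
# hive.app_set_current_block_num(context, block_num); these signatures are
# an assumption, so verify them against the HAF docs.
import psycopg2

BATCH_SIZE = 10_000  # placeholder batch size

def process_block_range(cur, first_block: int, last_block: int) -> None:
    """App-specific work for a range of blocks (placeholder)."""
    ...

with psycopg2.connect("dbname=haf_block_log user=myapp_owner") as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT hive.app_get_current_block_num('myapp')")
        (current_block,) = cur.fetchone()

        head_block = 80_000_000  # placeholder; a real app gets this from HAF
        while current_block < head_block:
            next_block = min(current_block + BATCH_SIZE, head_block)
            process_block_range(cur, current_block + 1, next_block)
            cur.execute(
                "SELECT hive.app_set_current_block_num('myapp', %s)",
                (next_block,),
            )
            conn.commit()  # one commit per batch; see the HAF docs for
                           # guidance on when (and when not) to commit
            current_block = next_block
```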

As part of the above process, I also fixed several errors that could occur when interrupting and restarting the balance_tracker and haf_block_explorer apps (both in massive sync and live sync), and updated the HAF docs to discuss how to avoid problems of this type (including when and when not to commit during block processing):
https://gitlab.syncad.com/hive/balance_tracker/-/merge_requests/61
https://gitlab.syncad.com/hive/balance_tracker/-/merge_requests/65
https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/135
https://gitlab.syncad.com/hive/haf_block_explorer/-/merge_requests/138

HAFAH – HAF-based account history API (IMPORTANT NOTE FOR APP DEVELOPERS)

HAFAH is a replacement for the old hived account-history plugin. HAFAH is much faster than the plugin, but during testing we did discover one issue that will require a change by apps that use the account-history API. When we originally added support for filtering operations to the hived account-history plugin, it was simply too slow to implement the API the way we wanted. In particular, if you ask the old API for 1000 operations filtered by an operation type such as “transfers”, it only returns however many transfers it finds in the last 1000 operations of an account’s history, because if the plugin had tried to find 1000 actual transfers it would take far too long. So apps making these calls would request far more operations than they needed, hoping to receive enough of the type they wanted (for example, enough to fill a page), and simply ask for more if they didn’t get enough.

Since HAFAH is much faster than the plugin, HAFAH’s API actually does return the last 1000 transfers when this call is made, as was originally desired. As a result, to reduce the workload on API servers running HAFAH, apps should reduce the number of operations requested to the amount they actually need to display (e.g. 100 or less). It might be beneficial for API caching purposes if apps asked for the same amount, so maybe we could standardize on 100 for most apps?
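
For example, a filtered account-history request sized for a single page might look something like the sketch below. The endpoint URL is a placeholder, and the filter bit for transfer_operation is an assumption on my part, so verify it against the current operation list before relying on it.

```python
# Request only the operations actually needed to fill a page (e.g. 100),
# rather than over-asking as was necessary with the old plugin.
# Endpoint URL is a placeholder; the transfer filter bit is an assumption.
import json
import urllib.request

TRANSFER_FILTER_LOW = 1 << 2  # assumed bit position for transfer_operation

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "account_history_api.get_account_history",
    "params": {
        "account": "blocktrades",
        "start": -1,   # start from the most recent operation
        "limit": 100,  # ask for what you will display, not 1000
        "operation_filter_low": TRANSFER_FILTER_LOW,
    },
}

req = urllib.request.Request(
    "https://api.example.com",  # placeholder API node URL
    data=json.dumps(request).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as response:
    history = json.loads(response.read())["result"]["history"]
```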

HAF Block Explorer and Balance Tracker APIs

We made a bunch of performance optimizations to both of these APIs, and as a result we reduced the time to sync the block explorer backend (which uses the balance tracker backend) from 70+ hours down to 16 hours. We were also able to eliminate several indexes, dramatically reducing the storage requirements for the block explorer. And changes to indexes and index clustering allowed us to speed up several “slow” block-explorer API calls to acceptable performance levels.

Hivemind API

We’ve been using goreplay to test HAF hivemind API responses against legacy hivemind using “real-world” traffic from our API node. We’ve found several differences, for the most part expected ones, but we’re still analyzing the difference data, and I’ll report later if we find anything of particular significance.

One thing of particular note for app developers is that the new hivemind reports changes related to blocks much faster: the old non-HAF-based hivemind always reported changes two blocks behind the head block (i.e. 6 seconds later), whereas the HAF-based hivemind reports changes as soon as they become irreversible, which usually happens within a couple hundred milliseconds after a block is broadcast under normal conditions.
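
If your app cares about exactly when data becomes visible, you can compare the head block to the last irreversible block via the dynamic global properties. A minimal sketch (endpoint URL is a placeholder):

```python
# Compare the head block to the last irreversible block to see how far
# behind irreversibility the chain currently is.
import json
import urllib.request

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "condenser_api.get_dynamic_global_properties",
    "params": [],
}
req = urllib.request.Request(
    "https://api.example.com",  # placeholder API node URL
    data=json.dumps(request).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as response:
    props = json.loads(response.read())["result"]
    lag = props["head_block_number"] - props["last_irreversible_block_num"]
    print(f"head is {lag} block(s) ahead of the last irreversible block")
```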

Shoutouts for testing assistance

Several of the API node operators have helped us during testing of the new API stack. I’d especially like to thank @mahdiyari for his feedback during testing of HAFAH performance and @disregardfiat for contributing an assisted_startup.sh script to simplify rapidly setting up a new API node:
https://gitlab.syncad.com/hive/haf_api_node/-/issues/1
https://gitlab.syncad.com/hive/haf_api_node/-/issues/6
One note I’ll add is that I usually run the script in a screen session to avoid it getting interrupted when I’m working from a remote terminal.

What’s next?

After we’ve finished analyzing hivemind differences, we’ll decide what, if anything, still needs to be done before we tag new release candidates. After that, we’ll set up a test node where apps that don’t run their own API nodes can test their apps against the new API stack.
