Updates to anyx.io Infrastructure (Including Hivemind Support)


A few months ago, I announced anyx.io, a high-performance full-node API. The goal of this infrastructure project is to create a high-throughput, high-availability alternative to Steemit Inc's own API offering (api.steemit.com). Furthermore, the idea is to promote and increase the decentralization of Steem: if everyone uses Steemit's servers, we lose the ability to publicly audit the information they provide us, and moreover, succumb to any censorship they might enact.

In this post, I wanted to outline a few of the things I've been working on since the announcement. To summarize, here's what's new:

  1. Hivemind full-node queries are now available at https://hive.anyx.io.
  2. Legacy API support continues at https://anyx.io.
  3. HTTP (non-SSL, port 80) support is now offered.
  4. Improvements to the custom middleware solution (Jussi replacement).
  5. Bugfixes (e.g. large payload issues).

1. Hivemind Support Now Offered

Steemit recently announced the use of Hivemind in production. What this means for application developers is that many API offerings have changed, and going forward, certain APIs that were traditionally offered have been deprecated. In this sense, api.steemit.com no longer provides a "full node" as we are used to, but has moved to a different standard.

With the goal of increasing the decentralization of API services, I have added a new stack to my infrastructure that includes a full Hivemind node, accessible at hive.anyx.io. This offers a public alternative with the same semantics that api.steemit.com offers.

How this was done specifically will be explained in part 4, but one important note about Hivemind is that it requires a steemd node to link into in order to build its state. I've noticed that some people offer Hivemind nodes without being clear about which steemd node provides the back end; indeed, if api.steemit.com is used to fill in a Hivemind node's information, that information isn't really being independently audited. In my case, the steemd node it retrieves its information from is part of the anyx.io stack.
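For developers who just want to point their code at it, nothing changes mechanically: it's the same JSON-RPC 2.0 over HTTPS, only against the new endpoint. Here's a minimal sketch in Go (the method shown is just one example of a call Hivemind serves; consult the Hivemind method list for the authoritative set):

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// Standard JSON-RPC 2.0 request; get_discussions_by_trending is one of the
	// condenser_api calls Hivemind serves (check the Hivemind method list for
	// the authoritative set).
	payload := []byte(`{
		"jsonrpc": "2.0",
		"method": "condenser_api.get_discussions_by_trending",
		"params": [{"tag": "steem", "limit": 1}],
		"id": 1
	}`)

	resp, err := http.Post("https://hive.anyx.io", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```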

2. Legacy API Support Continues

Since Steemit dropped support for previous "full node" semantics quite quickly, many developers were caught unprepared and have not updated their applications yet. To support these developers, anyx.io will continue to support the legacy "full node" API for as long as it's economical to do so. (Please consider voting for me as a witness!)

In addition to the "full node" semantics, websocket support continues for legacy applications such as the desktop wallet Vessel.
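As a rough illustration of the legacy path (using the third-party gorilla/websocket package; the method shown is just an example), a websocket client sends the same JSON-RPC payloads as over HTTP, just framed as websocket messages:

```go
package main

import (
	"fmt"

	"github.com/gorilla/websocket"
)

func main() {
	// Dial the legacy websocket endpoint over SSL.
	conn, _, err := websocket.DefaultDialer.Dial("wss://anyx.io", nil)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// Same JSON-RPC payloads as over HTTP, framed as websocket messages.
	req := []byte(`{"jsonrpc":"2.0","method":"condenser_api.get_dynamic_global_properties","params":[],"id":1}`)
	if err := conn.WriteMessage(websocket.TextMessage, req); err != nil {
		panic(err)
	}

	_, msg, err := conn.ReadMessage()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(msg))
}
```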

Adding hive.anyx.io is intended to let developers who use my infrastructure try out their applications with the new API semantics without having to rely on api.steemit.com. Eventually, legacy support will likely end, so developers relying on it should test their applications against the new semantics sooner rather than later!

3. HTTP Support Added

Previously, anyx.io was only reachable via SSL (https, port 443). This was due to a limitation of Jussi, Steemit's provided middleware. Now that I have dropped Jussi and replaced it with my own middleware (see part 4), I have also opened up regular http (port 80) support.

For most users, you should continue to use https. Honestly, if you don't know why you would want to use http instead, you should certainly continue to use https. Only those who know and understand the trade-offs and ramifications of http-only should consider it.

That being said, for anyone testing latency as opposed to throughput as a performance metric (looking at you @holger80), testing http://anyx.io is preferable, as it is slightly more optimized for latency. https://anyx.io continues to be optimized for throughput.

For those that don't understand the difference: Latency is a measure of how quickly you get a response back after you make a request. Throughput is how many total requests can be served per unit of time. As an example, if 100 clients can each retrieve data every 1 second, the throughput is 100 r/s with a latency of 1s. However, if 500 clients can each retrieve data once every 2 seconds, the throughput is 250 r/s with a latency of 2s. At a high level, the entirety of the anyx.io stack is optimized for throughput, as the intent is to offer a public node that gives fairness to many, many concurrent clients.
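To make the distinction concrete, here is a rough sketch (the endpoint, request, worker count, and time window are all illustrative, and please be gentle with any public node) that times one round trip for latency, then counts how many requests a handful of concurrent workers complete in a fixed window for throughput:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"sync/atomic"
	"time"
)

// Illustrative endpoint and request; http://anyx.io is the latency-optimized entry point.
const endpoint = "http://anyx.io"

var payload = []byte(`{"jsonrpc":"2.0","method":"condenser_api.get_dynamic_global_properties","params":[],"id":1}`)

func oneRequest() error {
	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(payload))
	if err != nil {
		return err
	}
	resp.Body.Close()
	return nil
}

func main() {
	// Latency: how long a single request/response round trip takes.
	start := time.Now()
	if err := oneRequest(); err != nil {
		panic(err)
	}
	fmt.Printf("latency: %v\n", time.Since(start))

	// Throughput: how many requests a group of concurrent clients completes per second.
	// Keep the worker count modest; this is a public node.
	const workers = 10
	window := 5 * time.Second
	var done int64
	deadline := time.Now().Add(window)
	for i := 0; i < workers; i++ {
		go func() {
			for time.Now().Before(deadline) {
				if oneRequest() == nil {
					atomic.AddInt64(&done, 1)
				}
			}
		}()
	}
	time.Sleep(window)
	fmt.Printf("throughput: %.1f requests/s\n", float64(atomic.LoadInt64(&done))/window.Seconds())
}
```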

4. Custom Router Development

As mentioned previously, I have replaced Jussi (Steemit's middleware solution) with my own custom solution. There were several reasons for avoiding Jussi, primarily:

  • Lack of support for port management
  • Poor throughput performance
  • Overzealous caching
  • No support for unix sockets

For the replacement, I built a custom solution in Golang that connects to steemd via unix sockets (this is a feature I added to Steem itself, here, for much better local performance and to avoid the TCP/IP stack where possible) and offers better performance in general, being written in a compiled, statically typed language rather than a dynamic, interpreted one like Python, which Jussi is written in. As an outcome, I've noticed a drastic decrease in timeouts compared to Jussi, as all requests are served concurrently with excellent throughput.
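As an aside, connecting to a local daemon over a unix socket from Go just means swapping the transport's dialer. The sketch below is illustrative only; the socket path and request are placeholders, not my actual configuration:

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	// Hypothetical socket path; steemd would expose one via the unix socket
	// support mentioned above.
	const socketPath = "/var/run/steemd/steemd.sock"

	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host in the URL and dial the unix socket instead,
			// bypassing the TCP/IP stack for local traffic.
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socketPath)
			},
		},
	}

	payload := []byte(`{"jsonrpc":"2.0","method":"condenser_api.get_dynamic_global_properties","params":[],"id":1}`)
	// The host in this URL is a placeholder; the custom dialer above decides
	// where the bytes actually go.
	resp, err := client.Post("http://steemd/", "application/json", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body))
}
```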

For caching, my solution is less zealous and will attempt to retrieve new information as soon as possible. In general, this will mean more up-to-date results compared to solutions with heavy caching.
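One simple way to picture this behaviour is a lookaside cache with a very short TTL, so entries go stale quickly and fresh data gets fetched again soon after. The sketch below is only an illustration of that idea (the TTL value is made up), not the actual middleware:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// entry is a cached response plus the time it was stored.
type entry struct {
	body     []byte
	storedAt time.Time
}

// Cache is a minimal lookaside cache with a short TTL, so results go stale
// quickly and fresh data is fetched again soon after.
type Cache struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[string]entry
}

func NewCache(ttl time.Duration) *Cache {
	return &Cache{ttl: ttl, m: make(map[string]entry)}
}

// Get returns a cached response only if it is still within the TTL.
func (c *Cache) Get(key string) ([]byte, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	e, ok := c.m[key]
	if !ok || time.Since(e.storedAt) > c.ttl {
		return nil, false
	}
	return e.body, true
}

// Put stores a response under its request key.
func (c *Cache) Put(key string, body []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = entry{body: body, storedAt: time.Now()}
}

func main() {
	// Illustrative TTL: a few seconds at most, so results stay close to live data.
	c := NewCache(3 * time.Second)
	c.Put("condenser_api.get_content:author/permlink", []byte(`{"result": "..."}`))
	if body, ok := c.Get("condenser_api.get_content:author/permlink"); ok {
		fmt.Println(string(body))
	}
}
```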

Finally, Hivemind support was added to the stack. If your request is made to hive.anyx.io instead of anyx.io, the Hivemind API calls (which can be found here) are intercepted by this middleware and sent to the Hivemind stack, while any other calls continue on to the anyx.io stack as usual.
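The routing decision itself is conceptually simple: peek at the JSON-RPC method name in the request and pick an upstream accordingly. The sketch below illustrates the idea only; the prefix list and upstream addresses are placeholders, not my actual configuration (internally the stack talks over unix sockets, as described above):

```go
package main

import (
	"bytes"
	"encoding/json"
	"io/ioutil"
	"net/http"
	"strings"
)

// Placeholder upstream addresses; the real stack talks over unix sockets.
const (
	steemdUpstream   = "http://127.0.0.1:8090"
	hivemindUpstream = "http://127.0.0.1:8080"
)

// Placeholder prefix list; the real set of intercepted calls is whatever the
// Hivemind method list defines.
var hivemindPrefixes = []string{"follow_api.", "tags_api.", "hive."}

func pickUpstream(method string) string {
	for _, p := range hivemindPrefixes {
		if strings.HasPrefix(method, p) {
			return hivemindUpstream
		}
	}
	return steemdUpstream
}

func handler(w http.ResponseWriter, r *http.Request) {
	body, err := ioutil.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// Peek at the JSON-RPC method to decide where to forward the call; if
	// parsing fails, Method stays empty and the request falls through to steemd.
	var req struct {
		Method string `json:"method"`
	}
	_ = json.Unmarshal(body, &req)

	resp, err := http.Post(pickUpstream(req.Method), "application/json", bytes.NewReader(body))
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	out, _ := ioutil.ReadAll(resp.Body)
	w.Header().Set("Content-Type", "application/json")
	w.Write(out)
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8000", nil)
}
```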

Notably, this new middleware has come with a few issues, as its semantics do not perfectly match those of Jussi (which many other API nodes run). As such, this is a work in progress, so if you notice any discrepancies between my API node and any others, please feel free to let me know.

5. Bugfixes and Performance Improvements

A side note that's important to mention is that tweaks and improvements are ongoing! Certain issues like caching returning out-of-date results have been resolved (opting to be more sensitive to time), and some nginx edge cases, such as large payloads causing failures, have been fixed (see this hivemind issue). I also recently added support for batched requests (of limited size), since certain application developers require it.
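For reference, a batched request is just a JSON array of individual JSON-RPC calls sent in a single POST, something like the sketch below (the methods and endpoint are shown as an example; keep batches small, since the accepted size is limited):

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// A batch is simply an array of JSON-RPC requests in one POST.
	// Keep it small; the batch size accepted by the node is limited.
	batch := []byte(`[
		{"jsonrpc": "2.0", "method": "condenser_api.get_dynamic_global_properties", "params": [], "id": 1},
		{"jsonrpc": "2.0", "method": "condenser_api.get_accounts", "params": [["anyx"]], "id": 2}
	]`)

	resp, err := http.Post("https://anyx.io", "application/json", bytes.NewReader(batch))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Println(string(body)) // responses come back as a matching array
}
```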

If you run into any issues, please feel free to poke me so that I can resolve them! The goal is to provide a feature-complete API alternative to remove dependency and reliance on Steemit Inc., and so any improvements I can make will help me reach that goal.



Like what I'm doing for Steem? You can read more about my witness candidacy here:
https://steemit.com/witness/@anyx/updated-witness-application

Then please consider voting for me as a witness here!
https://steemit.com/~witnesses
