Proposal: Funding for anyx.io API Infrastructure Recurring Costs

TL;DR: This proposal seeks to help reimburse the recurring costs of the public, free-to-use anyx.io Steem API infrastructure. If you use sites or services such as Busy.org, Splinterlands (SteemMonsters), Partiko, and many more, these services rely on this infrastructure for both uptime and performance.

Motivation

API services play a crucial role in the Steem ecosystem. As seen just recently with the previous hard forks: without an API node, it doesn't matter much that the chain is live if you can't use it.

All software and services that interact with the Steem chain require an API node. However, with heavy reliance on Steemit Inc's nodes, we see excessive downtime when things go wrong. A chain with a single point of failure and only one good option for public API access is not very decentralized -- this is the antithesis of blockchain ideology.

While others do provide public API nodes, many are configured without all available plugins, do not support high throughput, do not offer good uptime, or are otherwise held back. However, the anyx.io node has already proven itself to be robust, and has recently expanded throughput capabilities.

Timeline

This proposal is set for one year, though the intent will be to continue operation as long as possible. By having a renewal proposal in the future, I can re-evaluate expansion or contraction if needed.

Funding Rationale

The recurring costs for the infrastructure — namely datacenter hosting — total approximately $450 per month, or about $15 per day. Notably, this proposal does not aim to cover the already-spent hardware costs, which have exceeded $30,000.
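As a quick sanity check on the figures above (assuming a 30-day month for the daily approximation):

```python
# Sanity check of the recurring-cost figures quoted above.
monthly_cost = 450              # USD per month (datacenter hosting)
days_per_month = 30             # approximation

daily_cost = monthly_cost / days_per_month
annual_cost = monthly_cost * 12

print(f"~${daily_cost:.0f} per day")    # ~$15 per day
print(f"${annual_cost} per year")       # $5400 per year
```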

The Software Configuration

The current stack in the infrastructure is:

  • 2x Full steemd instances
  • 3x Light steemd instances
  • 3x Hivemind instances
  • Supplementary custom API instances (e.g. Vessel wallet support)
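For developers unfamiliar with the stack: the steemd and Hivemind instances are all reached through the standard Steem JSON-RPC interface. A minimal sketch of building a request to a public endpoint might look like the following (the method name is a standard condenser_api call; the helper function is illustrative, not part of anyx.io itself):

```python
import json

ANYX_ENDPOINT = "https://anyx.io"  # the public API endpoint described in this proposal

def make_rpc_payload(method, params, request_id=1):
    """Build a standard JSON-RPC 2.0 request body as used by Steem API nodes."""
    return {
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    }

# Example: fetch global chain properties (served by the steemd instances)
payload = make_rpc_payload("condenser_api.get_dynamic_global_properties", [])
body = json.dumps(payload)

# To actually send it (requires network access):
#   import urllib.request
#   req = urllib.request.Request(ANYX_ENDPOINT, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   with urllib.request.urlopen(req) as resp:
#       print(json.load(resp))
```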

The Hardware Infrastructure

The current stack of hardware is as follows:

2x “Heavy”:

  • 512GB DDR4 RAM
  • 8-16 core Xeon
  • 1-2 NVME drives
  • 1 OPTANE drive
  • 1-4 SSD drives
  • 1 Gigabit public ethernet

3x “Light”:

  • 64GB DDR4 RAM
  • 4-8 core Xeon or i7
  • 1-2 NVME drives
  • 1 Gigabit public ethernet

Configuration Philosophy

Hosted hardware is owned, not rented. Supplementary software services (reverse proxy and DDoS protection) are cloud-based, but these remain flexible layers around the owned back-end hardware.

The nodes were custom built with Steem APIs in mind, balancing high-frequency cores for single-threaded tasks (e.g., validating transactions) with sufficient core count to handle large throughput. Drives are high-end NVMe or Optane to ensure low latency for each request, with sufficient storage available for elements like account history and communities data (Hivemind). Thanks to the high volume of fast storage, this is one of the few nodes that offers full account history (including get_transaction) support.

While the infrastructure originally started with one "Heavy" node, a second was purchased and installed to ensure backup and redundancy are available. This enables stronger uptime guarantees: even during a crash or failure of one node, requests can still be served while recovery takes place. In addition, this enables asynchronous backups that do not interrupt service.
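Redundancy of this kind is typically realized at the reverse-proxy layer. As a hypothetical illustration (server names and ports are invented; this is not the actual production configuration), an nginx upstream could route around a failed node like this:

```nginx
# Hypothetical illustration only -- not the actual anyx.io configuration.
upstream steemd_full {
    server heavy1.internal:8090 max_fails=3 fail_timeout=30s;
    server heavy2.internal:8090 max_fails=3 fail_timeout=30s;  # redundant pair
}

server {
    listen 443 ssl;
    server_name anyx.io;

    location / {
        proxy_pass http://steemd_full;
        # Retry the other upstream on connection errors or bad gateway responses
        proxy_next_upstream error timeout http_502;
    }
}
```

With a configuration along these lines, a crashed steemd instance is marked unavailable after repeated failures, and requests transparently fail over to the healthy node.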

Why not Witness pay?

I have previously been funding these infrastructure costs out of my witness pay. However, with growth and the desired scale-out to satisfy public demand, operational costs now exceed 25% of current top-21 witness pay. No other witness offers infrastructure at this level -- the only other entity that can handle heavy public requests is Steemit Inc.

This funding does not seek to reward me personally; it is targeted at funding what I believe to be a "public good". The SBD from this proposal will be sold to cover costs.

There have been many debates around whether or not API access should be privatized or offered by organizations to be paid for. Philosophically, I believe API access should be public and freely available -- enabling developers to quickly build and test applications and provide value to the ecosystem, without having to deal with infrastructure woes. The SPS system offers a good avenue to fund this "public good".

This proposal does not seek to reimburse my development time or provide me a form of "salary" for keeping services running and up to date; I consider witness pay responsible for that. This proposal is strictly for the recurring hosting costs of the infrastructure.

Qualifications

The anyx.io endpoint has proven itself to be robust to downtime and responsive to elastic public demand. Many services rely on this infrastructure as their primary API endpoint (such as Busy and Partiko), with many others using it heavily or giving users the option to use it (such as Splinterlands, Steemconnect, Keychain, Steempeak, Steemworld, Beem, and many more).

Since the previous hardfork, I have started light logging of success metrics. Over the past 8 days, here are some interesting statistics:

  • 280,307,849 Total Requests (Approx. 400 Requests per Second Average)
  • 176,856 Unique IP Addresses
  • 0 Server Errors

Independent testing has shown that the anyx.io infrastructure meets or exceeds Steemit Inc's own throughput and latency for API requests. The above metrics, while good, come nowhere near saturating the available performance.

Supplementary Reading

Original announcement:
https://steemit.com/steem/@anyx/announcing-https-anyx-io-a-public-high-performance-full-api

Previous upgrades:
https://steemit.com/steem/@anyx/updates-to-anyx-io-infrastructure-including-hivemind-support
https://steemit.com/steem/@anyx/notice-of-upcoming-changes-to-anyx-io-api

Relevant API Development:
https://steemit.com/steem/@anyx/designing-a-restful-steem-api

Learn more about me from my Witness Application:
https://steemit.com/witness/@anyx/updated-witness-application

Consider Voting for this Proposal here:

https://steemitwallet.com/proposals
https://steempeak.com/me/proposals
