
SteemWorld News ~ Big Server-Update ahead!

I have made many major changes/optimizations to the SteemWorld API over the past week and I will transfer them to the main server in the coming days. The deployment may lead to a few hours of downtime for some features on SteemWorld. So if issues should occur for some time, you now know the reason.

As mentioned before, I have built my own solution for handling and saving blocks without the need to run a steemd instance on my server. Since I have very limited RAM and disk space available and no money for a high-performance server, I was on a mission to find a better way to access and store the block transactions, and I am very happy with the results.

New Compression Technique

A regular steemd instance uses a 'block_log' file to store the raw blocks in an uncompressed way. Additionally, it needs to keep the parsed operations in RAM (or in a shared-memory file on disk, if they don't fit in RAM), which is not possible for me with my available hardware (I would need to run a full node for the things that I want to implement in the coming months). I am also thinking about storing some of the 'virtual operations' on disk, but that is a different topic and it might get a bit too technical for most of you to discuss that stuff here.

Steemd block_log : ~ 95 GB
My solution takes : ~ 11 GB

Dude, how did you compress a 95 GB file down to 11 GB, when even the best compression tools like 'xz -9' only get it down to ~ 34 GB?

I will explain it in more detail in one of my next posts. Compressing redundant field identifiers is one of the major reasons for such a small file size, and of course I am not saving data that is only required for consensus features. Nevertheless, my data sets contain all 'non-virtual' operations. My parsing routine decompresses only the data that is currently needed, so my server will never run out of space. Compression takes place in a cronjob in a separate process (because it takes a while). More details soon ;)
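To give you a rough idea of the field-identifier part already, here is a simplified TypeScript sketch of the general idea ~ not my actual format, just an illustration. The tiny dictionary, the field names and the zlib step are placeholders:

```ts
import { deflateSync, inflateSync } from "zlib";

// Tiny illustrative dictionary - a real one would cover every
// operation type and field name on the chain.
const FIELD_TO_ID: Record<string, number> = {
  voter: 0, author: 1, permlink: 2, weight: 3,
};
const ID_TO_FIELD = Object.fromEntries(
  Object.entries(FIELD_TO_ID).map(([field, id]) => [id, field])
);

// Replace the redundant field names with short numeric keys,
// then let a general-purpose compressor handle the rest.
function packOp(op: Record<string, unknown>): Buffer {
  const packed: Record<number, unknown> = {};
  for (const [field, value] of Object.entries(op)) {
    packed[FIELD_TO_ID[field]] = value;
  }
  return deflateSync(JSON.stringify(packed));
}

function unpackOp(blob: Buffer): Record<string, unknown> {
  const packed = JSON.parse(inflateSync(blob).toString());
  const op: Record<string, unknown> = {};
  for (const [id, value] of Object.entries(packed)) {
    op[ID_TO_FIELD[Number(id)]] = value;
  }
  return op;
}

// Example: a vote operation loses its verbose field names before compression.
const blob = packOp({ voter: "alice", author: "bob", permlink: "my-post", weight: 10000 });
console.log(unpackOp(blob)); // { voter: 'alice', author: 'bob', ... }
```

Because every operation of the same type repeats the exact same field names millions of times across the chain, even such a simple key substitution removes a lot of redundancy before the compressor ever sees the data.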

Cached API Requests

One 'experimental' optimization that will be activated in the coming days is to make use of 'Cached API Requests' for the users on SteemWorld. Instead of requesting most of the general data directly from the public nodes for each client every 30 seconds, my server will make these requests only once, cache the results and send the cached data to the clients. I am not sure if my server can handle the huge number of requests, but I think it is possible. If this works as expected, my tool will send far fewer requests to the public nodes.
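As a simplified sketch of how such a caching layer works (the 30-second TTL matches the client refresh interval mentioned above; fetchFromPublicNode is just a placeholder for the real node call, not my actual code):

```ts
// Simplified server-side request cache - a placeholder sketch.
type CacheEntry = { data: unknown; fetchedAt: number };

const TTL_MS = 30_000; // clients refresh every 30 seconds
const cache = new Map<string, CacheEntry>();

async function cachedRequest(
  key: string,
  fetchFromPublicNode: () => Promise<unknown>
): Promise<unknown> {
  const hit = cache.get(key);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) {
    return hit.data; // served from cache, no call to a public node
  }
  // Only one upstream request per cache key and TTL window,
  // no matter how many clients are asking.
  const data = await fetchFromPublicNode();
  cache.set(key, { data, fetchedAt: Date.now() });
  return data;
}
```

This way, a thousand clients asking for the same general data within 30 seconds cause exactly one request to the public nodes instead of a thousand.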

I am glad to see that Steemit is working on solutions for the upcoming problems regarding memory usage, and (even though I have built my own data service) I think that SBDS (Steem Blockchain Data Service) is a big step in the right direction!

Multiple SteemWorld Tabs

I have thought about how to minimize the number of server requests even further and came up with the idea of using browser cookies for request caching. Some users like to open five or more SteemWorld tabs for different accounts at the same time, which of course multiplies the number of refresh requests by the number of open tabs. For these cases I want to store the result of each (main) request in a cookie, so that the scripts in the other tabs can read the data from there instead of making new requests all the time (see the sketch below). This will also be implemented in the coming week.
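Here is a simplified sketch of the idea (cookie name, payload shape and TTL are just placeholders; note that cookies are limited to roughly 4 KB, so only compact results fit in there):

```ts
// Browser-side sketch: tabs share one cached response through a cookie.
const COOKIE = "sw_cache";
const TTL_MS = 30_000;

function readSharedCache(): { t: number; data: unknown } | null {
  const match = document.cookie.match(new RegExp(`${COOKIE}=([^;]*)`));
  if (!match) return null;
  const entry = JSON.parse(decodeURIComponent(match[1]));
  return Date.now() - entry.t < TTL_MS ? entry : null;
}

function writeSharedCache(data: unknown): void {
  const entry = encodeURIComponent(JSON.stringify({ t: Date.now(), data }));
  document.cookie = `${COOKIE}=${entry}; path=/`;
}

async function getMainData(fetchFresh: () => Promise<unknown>): Promise<unknown> {
  const hit = readSharedCache(); // another tab may have fetched it already
  if (hit) return hit.data;
  const data = await fetchFresh(); // first tab does the real request...
  writeSharedCache(data);          // ...and shares the result with the others
  return data;
}
```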

Thanks for your support!

Now back to work... 'Delegation History' coming soon! I wish you could see how many things happen in the background every day and how much work it takes to build and provide such a stable tool.

STEEM to the moon! We are living in exciting times, friends ;)
Much love and peace to all of you ~