Hive Scaling: 30x More Capacity

We discussed Hive as a database. When it comes to decentralized text storage, Hive is rather unique.

In an era where data is so important, we live in a world where its flow is controlled. Web 2.0 is structured so that a handful of companies control most of the data. Access to their databases, especially write access, is granted only with permission.

This is evident in the fact that accounts are controlled by the companies behind the platforms. Banning is still fairly common, and it can come without warning. What makes the situation worse is that, once this happens, not only can the database no longer be written to, but older information is no longer accessible either.

One is completely cut off.

Here is where Web 3.0 offers a different solution.


[Image: the progression of the Web from read-only (Web 1.0) to read-write (Web 2.0) to read-own (Web 3.0)]

Data In An AI Era

We all have seen images similar to the one above.

This depicts the progression the Internet went through. Initially, we were dealing with something static; this was the read era. With the advent of social media, comment sections, and forums, we entered read-write. Now, with Web 3.0, tokenization leads to ownership.

It is something that is important for a variety of reasons.

To start, in a digital era, cancelling an account is akin to killing someone's digital life. Their entire online existence, whatever that might be, is gone in an instant. All connections, works, and years of engagement are gone.

It is as if the person, digitally speaking, never existed.

Then we also have the freedom necessary to combat not only censorship but tyranny. There are many parts of the world where speaking freely could get one imprisoned, or worse. Independent journalists around the world can face harsh consequences for reporting on things such as genocide or corruption.

In my view, as important as these are, they pale in comparison to what we are entering.

With the progress in artificial intelligence, it is crucial that we start considering the need for data that is not under the control of the major entities. There is a reason why Big Tech is dominating the sector. Naturally, the requirement for massive compute is part of it. However, it is also essential to have a massive amount of data. Companies like X, Google, and Facebook (Meta) have this.

Most start-ups and smaller entities are left out in the cold on this one.

Here is where Hive could offer an alternative. Actually, anything that is posted to permissionless blockchains is part of this battle.

Hive's Capacity

What is the capacity of Hive as a database?

This is a major question throughout the entire blockchain spectrum. Scaling is an area that most projects have concerns about.

Hive is no exception.

Fortunately, scaling has been a consideration since the blockchain went live. Steps are repeatedly being taken to ensure the chain can handle a larger amount of data. This is going to be vital because all the capacity is eventually going to be needed. The creation of data is not slowing down.

In other words, all the bandwidth that is out there will be utilized.

So what can Hive handle?

We gained some insight from the most recent core developer call.

The team is testing the capabilities of larger blocks in order to be prepared. For now, we are still using 64K blocks, with plenty of room to spare. This is not likely to always be the case.

When the blockchain was designed, the maximum block size was set at 2MB. Tests were occasionally done to see how it would perform. Recently, more were done with a focus on how the Hive Application Framework (HAF) would perform.

This is from the transcript:

And we needed to make sure that it could also handle those larger blocks, and Bartek told me today that the tests they've done so far seem to indicate we're not going to have a problem there either. And I mean without even making any adjustments to HAF itself, just the code as it currently is operating, it was able to operate live sync with two-megabyte-size blocks being sent out in the testnet situation. I think that's extremely positive.

So what does this mean for capacity? Here is how it was explained:

You know, we'll still be doing some more testing, but I think we're on the right track there, so we have lots of headroom. I mean, just to put that in perspective, a two-megabyte block is about 30 times larger than current blocks. So if we're not filling up our normal blocks right now at 64K, we have plenty of headroom on that side now.

According to Blocktrades, we are dealing with a 30x increase. This is a significant jump. Whether that translates 1:1 into throughput is unknown to me. However, the point is clear: we have plenty of room to add more data to the chain, with the buffer of larger block sizes.
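
To sanity-check that figure, here is a back-of-the-envelope sketch in Python. The 64K and 2MB numbers come from the call; the 3-second block interval is Hive's standard production rate. The daily totals are theoretical ceilings assuming every block is completely full, not actual usage.

```python
# Rough headroom math: current 64K blocks vs. the 2MB design maximum.
CURRENT_BLOCK_BYTES = 64 * 1024        # 64K blocks in use today
MAX_BLOCK_BYTES = 2 * 1024 * 1024      # 2MB maximum mentioned on the call
BLOCK_INTERVAL_SECONDS = 3             # Hive produces one block every 3 seconds

ratio = MAX_BLOCK_BYTES / CURRENT_BLOCK_BYTES
print(f"Headroom: {ratio:.0f}x")       # 32x, i.e. "about 30 times larger"

blocks_per_day = 24 * 60 * 60 // BLOCK_INTERVAL_SECONDS   # 28,800 blocks/day
for label, size in [("64K", CURRENT_BLOCK_BYTES), ("2MB", MAX_BLOCK_BYTES)]:
    daily_gb = size * blocks_per_day / 1024**3
    print(f"{label} blocks: up to {daily_gb:.1f} GB of raw block data per day")
```

Run it and the "about 30 times" quote checks out: 2MB is exactly 32 times 64K, which takes the theoretical ceiling from roughly 1.8 GB of block data per day to about 56 GB.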

For now, there is no need to alter the size since we are still operating without issue as is. The key is that, when needed, the capacity is there.

Of course, when it comes to network efficiency, there is a lot more involved than just storage space. Nevertheless, this is a simple metric telling me we have a lot more room.
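
For anyone curious how full blocks actually are today, here is a minimal sketch that asks a public API node. It assumes api.hive.blog is reachable (any full node exposing condenser_api works), and it uses the JSON serialization length as a rough stand-in for block size, since the binary block on chain is smaller.

```python
# Query a public Hive node for the current block size cap and a recent block.
import json
import requests

NODE = "https://api.hive.blog"  # any public Hive API node will do

def rpc(method, params):
    """Issue a single JSON-RPC call against the node."""
    resp = requests.post(NODE, json={
        "jsonrpc": "2.0", "method": method, "params": params, "id": 1,
    })
    resp.raise_for_status()
    return resp.json()["result"]

# Global properties include the current witness-voted block size cap.
props = rpc("condenser_api.get_dynamic_global_properties", [])
head = props["head_block_number"]
cap = props["maximum_block_size"]      # 65536 (64K) at the time of writing

# Approximate a recent block's size by the length of its JSON serialization.
# (JSON is bulkier than the binary form, so this overstates the true size.)
block = rpc("condenser_api.get_block", [head - 1])
approx_bytes = len(json.dumps(block))
print(f"Block {head - 1}: ~{approx_bytes:,} bytes against a {cap:,}-byte cap "
      f"(~{100 * approx_bytes / cap:.0f}% by this rough measure)")
```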

The AI Race

It is hard to dispute there is an AI arms race taking place.

Each week, it seems as if another technology player has an announcement or upgrade that advances things. While this could slow down, it seems we have entered a new era of development.

This is putting everyone at a crossroads.

The question of how we best proceed is being discussed. Some feel this technology is too powerful to allow into anyone's hands, which leads to the conclusion that closed models are best. On the flip side, some believe it is too powerful to leave in the hands of any one government or corporation. The view here is that nobody can be trusted to this degree.

For me, I believe having as much out in the open as possible is the best defense. We know that corporations and governments can become corrupted (if they are not already). Big Tech has given us no reason to trust it over the last few decades. Naturally, the same is true of governments.

Hive is a minuscule player in the online world. That said, it does have a powerful characteristic: it is a decentralized, permissionless text database without any direct transaction fees.

It seems, in this era, that is becoming more important.

As we progress forward, it is uncertain when we will need the additional capacity, but it is good to know it will be there when we do.

According to the numbers we have, a 30x increase in block size, and potentially throughput, is possible.

