Scaling Of Hive Is Of Utmost Importance

There were a lot of takeaways from Hivefest. However, for me, there was a clear focus exhibited by the core developers that I feel is crucial.

When looking at this stuff, my view is that it is vital to look at things from "the ground up". We have to start at the most basic level and work up from there.

Hive is a network: a series of interconnected nodes that update the database every few seconds. Since the nodes are under separate ownership, we are dealing with something that is decentralized.

On this level, it is also permissionless. Anyone with the technical know-how is free to set one up. Permission from some corporate body or head is not required.

Finally, we are dealing with a text database. Unlike the Bitcoin network, which at this point handles mostly financial transactions, Hive can store any type of text. This is akin to Wikipedia as opposed to the database JPMorgan runs.

With the basics covered, what is the most important factor? Most will say users. From the presentations, we can see there is a different focus.


Image generated by Ideogram

Scaling Of Hive Is Of Utmost Importance

What happens if there are a million users on Hive? Obviously, this would make a lot of people happy.

However, what would happen if the network was only capable of handling the activity for 250K users?

Do you think that would be a problem?

The advantage that Web 2.0 platforms have is that they can scale. We know that, no matter what happens with the Meta applications, the company will have enough compute and bandwidth to process it all.

Web 3.0 has repeatedly run into this problem. Going back to the days of Steem, we see where the issue stems from.

Here is a short clip of what was said by Howo, one of the core developers.

https://inleo.io/threads/view/lucidlucrecia/re-nisxkwyrwh?referral=lucidlucrecia

This was a sentiment echoed by Blocktrades in one of his discussions. He stated that the roadmap was laid out when the fork took place back in 2020. The team knew exactly what needed to be done.

Unfortunately, it is not a matter of just changing a few lines of code. A lot was built over the last few years, with more to come.

My observation is that the base layer is designed to be as efficient as possible. This means moving activity elsewhere.

Hive Application Framework (HAF)

One of the biggest moves in this direction might be the development of HAF. This is a layer 2 server system that frees the nodes from many of the calls from the applications.

By creating this layer, applications can access nodes that hold blockchain information (read databases). The HAF nodes pull data from the blockchain, while the applications engage with the HAF server. This lightens the load on the main nodes, freeing them up for other activities.
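As a rough sketch of what this division of labor means for an application: a read-only query can be sent as a JSON-RPC request to an API server backed by HAF-style databases, rather than to a consensus node. The endpoint URL below is a placeholder, and the helper function is my own illustration, not part of HAF itself; `condenser_api.get_dynamic_global_properties` is one of Hive's standard read-only API methods.

```python
import json

# Hypothetical endpoint: under HAF, an API server like this answers read
# queries from its own database instead of burdening a consensus node.
API_ENDPOINT = "https://api.example-haf-server.com"  # placeholder, not real

def build_rpc_request(method, params, request_id=1):
    """Construct a Hive-style JSON-RPC 2.0 request body."""
    return {
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    }

# A read-only call such as fetching global chain properties never has to
# touch the consensus layer; the HAF-backed server can serve it directly.
payload = build_rpc_request("condenser_api.get_dynamic_global_properties", [])
print(json.dumps(payload))
```

The point is architectural: the application's request shape stays the same, but the machine answering it is a layer-2 server, not one of the witness nodes securing the chain.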

This might not be exciting stuff, but look at the alternative. How many years has Ethereum been working on scaling its network? I think the roadmap was presented in 2017. Here we are, seven years later, and it is still a focus.

Naturally, this stuff takes time.

It seems counterintuitive to worry about this now, when there is little traffic. Of course, the mantra from many is that Hive is dead. Here is a question: would the development team put in so much effort if the project were going nowhere?

They are well aware of the technologies out there regarding decentralization, open source, and Web 3.0. We are not dealing with non-technical people unaware of the industry's challenges.

The best developers from the cypherpunk era worked hard on a decentralized consensus system that could be applied to money. For two decades, all attempts failed, until Satoshi Nakamoto cracked the code.

Unfortunately, as revolutionary as that was, it did not scale.

To me, that is fine. A network like that can have its place. However, that is not going to be for the masses.

Of course, we are looking at so much more. HAF is dealing with activities (and data) other than just financial. In fact, much of Hive's future is outside the bounds of finance.

Web 3.0, in my estimation, is going to be about AI. What this means is that activity is going to skyrocket. The same is true of Web 2.0. We can already see the amount of inference compute that is required.

In other words, everyone is looking at scaling.

Millions of AI Agents

The idea of millions of users leads people to conclude that we are talking about humans.

When it comes to transactions, humans have minimal impact compared to machines. The amount of activity that comes from computer-to-computer engagement is on another scale entirely.

What happens if Hive has millions of AI agents? This is something that we must consider.

If the future is based, at least in part, on AI, then we must see this operate and be tied to blockchain. After all, few want it held exclusively by Big Tech.

Here is where open source and decentralization are crucial. However, to be a factor, Hive has to be able to handle the volume.

Howo said something very important: he felt that Steemit was trying to build on top of an unstable foundation.

This is a key point. If the base is not capable of operating in an efficient manner, it doesn't matter what is developed on top. It will end up suffering performance issues, which will only drive users, developers, and entrepreneurs away.

All of this has to be considered. The fact that so much attention is being given to scaling means that handling future activity is being addressed.

Waiting this out is not exciting, but building a foundation rarely is.


What Is Hive
