Retrieving Power Up Data for Specific Accounts Using HiveSQL

A contest was held within the Hive-TR community a few days ago. This gave me a reason to interact with Hive blockchain data programmatically using HiveSQL. As far as I know, the contest is held on the 19th of every month, and the last one was the fourth in the series.

Base Image from Pexels

In the contest, rankings are based on power up amounts and ratios on the 19th of each month. Finding the power up amount of a specific user within a given date range is not a problem; simple filtering handles it. However, to calculate the power up ratio, we also need to know how much Hive Power the account had before the power up, and HiveSQL currently does not store account history. It is good that I learned this before losing myself in a lot of dynamic tables; thanks @arcange for that. Another method will be needed for that part. For now, let's focus on how to retrieve the power up amounts of specific users from HiveSQL for a specific date.

You need to have access to HiveSQL to do this. I won't go into that topic here; if you are curious, you can easily find out how with a simple search. I will run the SQL queries with the help of a Python library named pyodbc, but you can use other database tools if you prefer.

# Install required libraries
!pip install pymssql
!pip install plotly
!pip install pyodbc
!pip install -U beem

We need the ODBC driver to be able to establish a database connection. As a Linux user, I am continuing with version 17. You can find and install the appropriate driver for your system for the SQL Server connection here.

If you are using a Jupyter notebook, you can adapt the bash code below and use it as another option.

%%sh

curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/prod.list | sudo tee /etc/apt/sources.list.d/mssql-release.list
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get -q -y install msodbcsql17
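
After the installation, a quick sanity check (my own addition, not part of the original workflow) is to confirm that pyodbc can actually see the driver:

import pyodbc

# 'ODBC Driver 17 for SQL Server' should appear in this list
print(pyodbc.drivers())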

Once the driver is ready, let's establish the database connection. Enter your own account information in the code below. For testing purposes, I fetched the Hive-TR community record from the relevant database table.

import pyodbc
import pandas as pd
import numpy as np

cnxn = pyodbc.connect('Driver={ODBC Driver 17 for SQL Server};'
                              'Server=vip.hivesql.io;'
                              'Database=DBHive;'
                              'uid=Hive-kedi;'
                              'pwd=xxxxxxxxxxxxxx') # Enter your password here
cursor = cnxn.cursor()

# Metadata calls such as cursor.columns() can come back from HiveSQL
# as mangled UTF-16 through pyodbc; this helper decodes them safely.
def decode_sketchy_utf16(raw_bytes):
    s = raw_bytes.decode("utf-16le", "ignore")
    try:
        n = s.index('\u0000')
        s = s[:n]  # respect null terminator
    except ValueError:
        pass
    return s

# ...

# Temporarily swap in the workaround converter while reading column metadata
prev_converter = cnxn.get_output_converter(pyodbc.SQL_WVARCHAR)
cnxn.add_output_converter(pyodbc.SQL_WVARCHAR, decode_sketchy_utf16)
col_info = cursor.columns("Clients").fetchall()
cnxn.add_output_converter(pyodbc.SQL_WVARCHAR, prev_converter)  # restore previous behaviour
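
As a small usage example of my own, the same converter trick lets you inspect the columns of any HiveSQL table, for instance the Comments table we will query later:

# Hypothetical usage: list the column names of the Comments table
# while the UTF-16 workaround converter is active
cnxn.add_output_converter(pyodbc.SQL_WVARCHAR, decode_sketchy_utf16)
for col in cursor.columns("Comments").fetchall():
    print(col.column_name)
cnxn.add_output_converter(pyodbc.SQL_WVARCHAR, prev_converter)  # restore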

# Test Query
SQLCommand = '''
SELECT *
FROM Communities
WHERE Name = 'hive-124065'
'''

result = cursor.execute(SQLCommand)
result = result.fetchmany(5)
result
# cnxn.close()  # close the connection when you are done

We need to build the filters according to the competition rules, which should be clear and specific. My approach was as follows:

  • The competition post had to be made within the Hive-TR community. I created the first filter based on this.

  • Another rule was that the post had to include the #hivetrhpud tag. I used another filter for this.

  • Another rule was related to the text length of the post, but I did not include it for now to keep the query from becoming too complex.

The first query I created is as follows. The accounts returned are those that created a post with the #hivetrhpud tag in the Hive-TR community between 2023-01-19 00:00:00 and 2023-01-19 23:59:59. We'll use this as our first table.

SQLCommand = '''
SELECT *
FROM Comments
    INNER JOIN Tags ON Comments.ID = Tags.comment_id
WHERE created BETWEEN '2023-01-19 00:00:00' AND '2023-01-19 23:59:59' 
    AND category = 'hive-124065' 
    AND Tags.tag = 'hivetrhpud'
'''
table = pd.read_sql(SQLCommand, cnxn)
table

Here, I will be using the VOTransferToVestingCompleteds table to see how much the accounts have powered up within specific date ranges. This virtual operation is issued when a power up is finally taken into account for governance votes. I wrote the query below for testing purposes.

SQLCommand = '''
SELECT from_account, to_account,
    SUM(hive_vested) AS total_hive_vested,
    SUM(vesting_shares_received) AS total_vests_received
FROM VOTransferToVestingCompleteds
WHERE timestamp < '2023-01-19 00:00:00'
    AND from_account = to_account
    AND from_account = 'kedi'
GROUP BY from_account, to_account
'''
table = pd.read_sql(SQLCommand, cnxn)
table

Combining both SELECT queries with a LEFT JOIN produces the query below.

SQLCommand = '''
SELECT t1.last_update, t1.author, t2.hive_vested
FROM (
    SELECT *
    FROM Comments
        INNER JOIN Tags ON Comments.ID = Tags.comment_id
    WHERE created BETWEEN '2023-01-19 00:00:00' AND '2023-01-19 23:59:59'
        AND category = 'hive-124065'
        AND Tags.tag = 'hivetrhpud'
) t1
LEFT JOIN (
    SELECT *
    FROM VOTransferToVestingCompleteds
    WHERE timestamp BETWEEN '2023-01-19 00:00:00' AND '2023-01-19 23:59:59'
        AND from_account = to_account
) t2
    ON t1.author = t2.from_account
ORDER BY t2.hive_vested DESC
'''


table = pd.read_sql(SQLCommand, cnxn)
table.index = np.arange(1, len(table) + 1)
table
rank  last_update          author        hive_vested
   1  2023-01-19 16:40:42  kedi              859.130
   2  2023-01-19 20:15:18  incublus          406.000
   3  2023-01-19 20:58:39  ahmetay           228.309
   4  2023-01-19 03:17:15  baboz             189.374
   5  2023-01-19 21:08:36  mukadder          122.981
   6  2023-01-19 19:24:42  wagner32           70.701
   7  2023-01-19 20:53:30  dusunenkalpp       58.500
   8  2023-01-19 13:55:15  sunsan             53.125
   9  2023-01-19 21:21:00  bilginhilal        34.000
  10  2023-01-19 13:29:42  bilginselcuk       30.000
  11  2023-01-19 21:01:42  kirazay            28.000
  12  2023-01-20 04:54:24  catslovers         28.000
  13  2023-01-19 08:49:30  flsfserkan         25.000

As I mentioned, since HiveSQL does not store account history, we cannot calculate the power up percentage relative to each account's previous Hive Power. If an account is missing from the list above, it is most likely because it did not create a post in the Hive-TR community or did not use the #hivetrhpud tag.
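
Since beem is already installed above, one possible workaround is to read an account's current Hive Power directly from the chain and keep your own periodic snapshots. This is only a sketch, assuming the public API nodes that beem uses by default are reachable; it gives the current value, not the historical one:

from beem import Hive
from beem.account import Account

hive = Hive()  # connects to public API nodes by default
account = Account('kedi', blockchain_instance=hive)

# vesting_shares is denominated in VESTS; convert it to Hive Power
vests = float(account['vesting_shares'].amount)
hp = hive.vests_to_hp(vests)
print(f"Current Hive Power of {account.name}: {hp:.3f} HP")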

HiveSQL allows us to create many statistical tables related to the Hive blockchain. It is possible to build weekly or monthly tracking systems, and even many tools that can be useful for curators.
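
For example, a minimal sketch of a monthly power up report for a single account could look like the query below, reusing the open cnxn connection from above; the column aliases are my own naming:

# Monthly power-up totals for one account (here 'kedi'),
# grouped by calendar month
SQLCommand = '''
SELECT YEAR(timestamp) AS [year],
    MONTH(timestamp) AS [month],
    SUM(hive_vested) AS total_hive_vested
FROM VOTransferToVestingCompleteds
WHERE from_account = 'kedi'
    AND from_account = to_account
GROUP BY YEAR(timestamp), MONTH(timestamp)
ORDER BY [year], [month]
'''
monthly = pd.read_sql(SQLCommand, cnxn)
monthly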

That's all for now. Please feel free to share your thoughts and experiences in the comments.

Regards,
Yaser
