Funding HAF Plug & Play Development #2

[Image: hive-plug-and-play-3.png]

This proposal continues funding for HAF-based Plug & Play, an app that parses `custom_json` ops and post `json_metadata`.
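
For context, here is the shape of a `custom_json` operation as it appears in a block, using a standard `follow` op (the account names are illustrative):

```python
# A `custom_json` operation as found in a Hive block (values illustrative).
# Apps identify their ops by the `id` field and encode the payload as a
# JSON string in the `json` field.
follow_op = {
    "type": "custom_json_operation",
    "value": {
        "required_auths": [],
        "required_posting_auths": ["alice"],
        "id": "follow",
        "json": '["follow", {"follower": "alice", "following": "bob", "what": ["blog"]}]',
    },
}
```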

The development node is available at https://plug-play-beta.imwatsi.com/. It is not suitable for production use. A production server will be available next week.

Proposal

Breakdown of fund usage:

  • 2 dedicated servers (Development and Production: $130 x 2)
  • Development costs (I am the main developer, and I am working with a frontend developer to build and maintain an onboarding website and example implementations of HAF-based Plug & Play)
  • Administrative costs (I am working with a dedicated admin to handle Plug & Play onboarding and setup for clients)

Summary of what I'm offering the community:

  • A public, HAF Plug & Play node
  • Database schema design services: custom tables and the functions that populate them
  • Endpoint design to retrieve the custom data sets that Hive dApps need (see the request sketch after this list)
  • Support for universal `custom_json` ops such as `community` and `follow`
  • API streaming of custom_json ops
  • Subscription-based access (still under deliberation)
  • Help with setting up a public HAF Plug & Play node
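
To illustrate how a dApp might consume these endpoints, here is a minimal JSONRPC request against the beta node. The method name and params are hypothetical placeholders, not the finalized Plug & Play API:

```python
import json
import urllib.request

# Minimal JSONRPC 2.0 request sketch. The method name and params below are
# placeholders, not the finalized Plug & Play API.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "follow_api.get_followers",  # hypothetical method name
    "params": {"account": "alice"},
}
req = urllib.request.Request(
    "https://plug-play-beta.imwatsi.com/",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))
```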

Current status:

  • The code is on GitHub: https://github.com/imwatsi/haf-plug-play
  • Preparing a public beta version to be released next week
  • Initiated communication with @brianoflondon about supporting Podping's `custom_json` ops and API endpoints
  • I'm working on supporting the parsing of post `json_metadata`, which will enable apps that use JSON in posts and comments to populate state for various content-related features (still in alpha; an example follows)
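
For reference, this is the kind of `json_metadata` a typical frontend attaches to a post; exact keys and contents vary by app:

```python
# Typical post `json_metadata` as set by Hive frontends (contents vary by
# app; the values here are illustrative).
post_json_metadata = {
    "app": "peakd/2021.10",  # app identifier
    "format": "markdown",
    "tags": ["hive", "development"],
    "image": ["https://example.com/banner.png"],
}
```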

Progress Report (V1 target):

Core code:

[100%] HAF sync script: connects to the `hive_fork_manager` extension, extracts `custom_json` ops, and saves them in a PostgreSQL database (sketched below)
[100%] Main "plug" structures: SQL function structures, table schema designs, and JSONRPC endpoint structures that make "plugs" configurable and independent of other plugs' data
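
Below is a simplified sketch of what the sync loop does. The real script uses the `hive_fork_manager` context and view machinery; the view, column, and table names here (`hive.operations_view`, `pp_ops_raw`) are illustrative assumptions, not the actual schema:

```python
import psycopg2

# Simplified sync-loop sketch, assuming a HAF database and a local table
# pp_ops_raw(op_id BIGINT PRIMARY KEY, block_num BIGINT, op_json JSONB).
# View and column names are placeholders for the real HAF schema.
conn = psycopg2.connect("dbname=haf_block_log")

def sync_batch(from_block: int, to_block: int) -> None:
    with conn.cursor() as cur:
        # Pull only custom_json ops for the block range.
        cur.execute(
            """
            SELECT id, block_num, body
            FROM hive.operations_view
            WHERE block_num BETWEEN %s AND %s
              AND body::jsonb->>'type' = 'custom_json_operation'
            """,
            (from_block, to_block),
        )
        for op_id, block_num, body in cur.fetchall():
            cur.execute(
                "INSERT INTO pp_ops_raw (op_id, block_num, op_json) "
                "VALUES (%s, %s, %s::jsonb) ON CONFLICT DO NOTHING",
                (op_id, block_num, body),
            )
    conn.commit()
```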

Plug code:

1) Stores, synchronizes, and serves historical and state-based data from `custom_json` ops. It uses PostgreSQL functions to populate tables that store historical operations, and secondary functions to populate tables that store derived state (a simplified sketch follows the checklist below).

[100%] PostgreSQL database, functions and table schemas
[100%] App and plug definitions
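
As an illustration of the "secondary functions" mentioned above, here is a sketch of a PostgreSQL function that derives follower state from stored ops. All table, column, and function names are assumptions, and the logic is simplified (it ignores unfollow/mute):

```python
import psycopg2

# Sketch of a derived-state function. Assumes pp_follow_ops stores the parsed
# payload object of each follow op in op_json, and pp_follow_state has a
# unique constraint on (follower, following). Names are illustrative.
DERIVE_FOLLOW_STATE = """
CREATE OR REPLACE FUNCTION pp_update_follow_state(_from_block BIGINT, _to_block BIGINT)
RETURNS void AS $$
BEGIN
    -- Simplified: records follows only; a real plug would also process
    -- unfollow and mute payloads.
    INSERT INTO pp_follow_state (follower, following)
    SELECT op_json->>'follower', op_json->>'following'
    FROM pp_follow_ops
    WHERE block_num BETWEEN _from_block AND _to_block
    ON CONFLICT (follower, following) DO NOTHING;
END;
$$ LANGUAGE plpgsql;
"""

with psycopg2.connect("dbname=haf_block_log") as conn:
    with conn.cursor() as cur:
        cur.execute(DERIVE_FOLLOW_STATE)
```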

2) JSONRPC endpoints to handle data requests from apps (a minimal dispatcher sketch follows the checklist below).

[100%] Follow plug: historical `follow` operations and Hive accounts' follower lists
[100%] Reblog plug: historical `reblog` operations and endpoints to retrieve all reblogs made on a particular post or author, optionally filterable by block_range
[40%] Community plug: historical `community` operations and the state of community-related ops like subscribe, unsubscribe, etc. (still work in progress)
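
To show how such endpoints might be wired up server side, here is a bare-bones JSONRPC dispatcher. The method name and handler are placeholders; the real server maps each plug's endpoints onto SQL functions like the one sketched above:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_reblogs(params):
    # Placeholder handler: a real implementation would query the reblog
    # plug's tables, optionally filtered by params.get("block_range").
    return {"author": params["author"], "permlink": params["permlink"], "reblogged_by": []}

METHODS = {"reblog_api.get_reblogs": get_reblogs}  # hypothetical method name

class RpcHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        req = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        handler = METHODS.get(req.get("method"))
        if handler is None:
            reply = {"jsonrpc": "2.0", "id": req.get("id"),
                     "error": {"code": -32601, "message": "Method not found"}}
        else:
            reply = {"jsonrpc": "2.0", "id": req.get("id"),
                     "result": handler(req.get("params", {}))}
        body = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), RpcHandler).serve_forever()
```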

Logging:

[ ] System level logs
[ ] Plug logs
[ ] Server logs

Onboarding website:

[ ] Host documentation (WIP)
[ ] Onboarding contact forms (WIP)
[ ] Showcase sample implementations (WIP)

Roadmap:

Q4 2021:

  • Release V1, with a few sample implementations
  • Onboard Podping
  • Put a website up to handle developer onboarding

Q1 2022:

  • Release V2, with subscription-based services and support for post `json_metadata` processing
