SOP For Community-Driven Identification of Dangerous Persons


Occasionally, within society, there are people of whom the community at large prefers to be made aware: known and proven pedophiles, murderers, rapists, and the like. It is widely recognised that such people often re-offend, and society has methods to identify them publicly, such as sex-offender registers.

DPoS blockchain communities will occasionally have to recognise who these individuals are and take mitigating actions should they choose to.

In the spirit of decentralisation, and to minimise gatekeeper control of censorship by individuals or small groups of powerful people (as occurs in legacy Web 2.0 social media today), it will become increasingly important to empower community-driven methods of identifying these types of people, because they may be using the technology to their advantage against the interests of the community itself or the wider public.

For example, a pedophile may create seemingly innocent, unrelated content using DPoS backed censorship-resistant tools but use this to attract minors who subsequently may be groomed using social media outside of the ecosystem.

It is therefore foreseeable that the community at large may wish to create a mechanism with which to identify these individuals should they already be known to the authorities or the wider public as "Dangerous Persons."

Once identified, the platforms and communities within the ecosystem may then take their mitigating actions to limit potential damage to their users.

At present, platforms can take it upon themselves to police and blocklist at their own discretion; however, as we move forwards, it will be useful to further decentralise the identification of such accounts by making communities the gatekeepers.

One way to solve this issue could be a community-driven, stake-weighted voting system that allows peers to vote on individual accounts that they deem as dangerous to the community. Once the vote reaches a certain threshold (which is also set by the community), the account is put on a "known Dangerous Person List" or "blocklist." Platforms can then use these community-driven decisions as a factor in their mitigating actions.
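The stake-weighted threshold mechanism described above could be sketched as follows. This is a minimal illustration only, not an implementation of any existing 3Speak or SPK Network code: the class and function names, and the idea of expressing the threshold as a fraction of total network stake, are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class BlocklistVote:
    """Tracks stake-weighted votes flagging one account as dangerous.

    Hypothetical structure for illustration; not part of any real
    3Speak/SPK API.
    """
    target_account: str
    votes: dict = field(default_factory=dict)  # voter name -> stake weight

    def cast_vote(self, voter: str, stake: float) -> None:
        # A voter's latest vote replaces any earlier one, so the same
        # stake cannot be counted twice by voting repeatedly.
        self.votes[voter] = stake

    def total_stake(self) -> float:
        return sum(self.votes.values())

def is_blocklisted(vote: BlocklistVote,
                   total_network_stake: float,
                   threshold_ratio: float) -> bool:
    """The account lands on the blocklist once the flagging stake
    reaches the community-set threshold, expressed here (as an
    assumption) as a fraction of total network stake."""
    return vote.total_stake() >= threshold_ratio * total_network_stake

# Example: with a community-chosen 5% threshold on a network holding
# 10,000 units of stake, 800 units of flagging stake is enough.
poll = BlocklistVote("suspect.account")
poll.cast_vote("alice", 500.0)
poll.cast_vote("bob", 300.0)
print(is_blocklisted(poll, 10_000.0, threshold_ratio=0.05))
```

Platforms consuming such a list would treat the boolean as one input to their own mitigating actions rather than as an automatic ban, in line with the discretion described above.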

3Speak and the SPK Network aim to remove centralised, gatekeeper-controlled censorship by implementing decentralised, community-driven, stake-weighted identification of dangerous accounts.

With this in mind, 3Speak champions:

  • Innocent until proven guilty
  • Community-driven censorship resistance
  • A clean slate once a felon has done their time

With the way our current system is set up, we can prevent accounts from posting to 3speak.tv should they break our rules. At present, any account identified as dangerous, with evidence backing this up, will be prevented from posting to the site. In addition, we will take steps to notify other communities and platforms in the Hive ecosystem about such "Dangerous Users."

As we move forwards, we would like to propose collaboration with other members of the Hive ecosystem to build a decentralised "Dangerous Accounts Identification System" which uses stake-weighted community voting, similar to the one proposed above.

Ultimately, this is an issue that DPoS will also have to face at some point, and so we would like to ask for your feedback on this issue and propose potential solutions that improve upon what we have proposed.


Learn about the SPK Network:

The SPK Network is a decentralised Web 3.0 protocol that rewards value creators and infrastructure providers appropriately and autonomously by distributing reward tokens so that every user, creator, and platform can earn rewards on a level playing field.
