The AI Text Generator That’s Too Dangerous To Make Public?


In 2015, car-and-rocket man Elon Musk joined with influential startup backer Sam Altman to put artificial intelligence on a new, more open course. They cofounded a research institute called OpenAI to make new AI discoveries and give them away for the common good. Now, the institute’s researchers are sufficiently worried by something they built that they won’t release it to the public.

Google, too, has decided that it’s no longer appropriate to innocently publish new AI research findings and code. Last month, the search company disclosed in a policy paper on AI that it has put constraints on research software it has shared because of fears of misuse. The company recently joined Microsoft in adding language to its financial filings warning investors that its AI software could raise ethical concerns and harm the business.

So Google and not-so-open OpenAI are so concerned about fake news that they’re not releasing their latest AI research?

Does this have FUD written all over it? Artificial Intelligence is here, evolving, and getting stronger. We’re supposed to believe that Google and friends have our best interests in mind as they’re suppressing research findings?

@taskmaster4450 has written several articles about SingularityNet's open-source advancements in AI, specifically aimed at ensuring that the Googles of the world and their government overlords (or pawns, if you will) don't have a monopoly on this technology.

