I've built a lifelike AI robot!

The true key was figuring out how it would learn. Learning, as it happens, is a pretty complex process.

Or so I thought. In the end I came up with a simple but reliable method for building a belief system.

No, I didn't teach it about god. I condensed everything into a binary format: information was either true or untrue. The more a piece of information was repeated, the more true it was considered. Specifically, the links between two things. It has a microphone, a camera, and speakers to act as its ears, eyes, and mouth, so when it sees red and hears "red", the link between those two becomes more certain. Crucially, the earlier it learns something about a topic, the more weight that information carries.

For example, say Dave tells Robert (I named my robot Robert, by the way) that the sky is blue. Robert understands what Dave means by "sky" from past experience, and has now learned that the sky is blue with a value of 1000. Next Tina comes by and tells Robert the sky is green. Robert gives 999 points to the sky being green, so it keeps believing the higher 1000-point blue as reality. When Stephen tells Robert the sky is blue, he gives another 998 points to sky = blue. Dave then tells Robert the sky is purple. Dave's 1000 points get swung to purple, because Dave is around enough to be recognized as a specific entity. The sky must be purple then, right? For about two weeks it was, until Robert learned what a liar was, and that Dave was a liar. That discredited Dave's opinions, so they were removed. So the sky was green for a while, until I gave my two cents that the sky was blue. My 997 points finally swung it back.
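That whole episode falls out of a surprisingly small amount of logic. Here's a simplified Python sketch of the point system as I described it above; the class and method names are mine, and the real thing tracks far more than a dictionary, but the scoring follows the same shape:

```python
from collections import defaultdict

class BeliefSystem:
    """Simplified sketch of the point system; names and structure are mine."""

    def __init__(self, first_weight=1000):
        self.next_weight = first_weight
        # topic -> candidate value -> {source: points}
        self.points = defaultdict(lambda: defaultdict(dict))

    def assert_claim(self, source, topic, value):
        # If this source already backed a different value on this topic,
        # swing their existing points over to the new value.
        moved = None
        for other_value, backers in self.points[topic].items():
            if other_value != value and source in backers:
                moved = backers.pop(source)
                break
        if moved is not None:
            self.points[topic][value][source] = moved
        elif source not in self.points[topic][value]:
            # Each brand-new claim is worth one point less than the last:
            # the earlier Robert hears something, the more it matters.
            self.points[topic][value][source] = self.next_weight
            self.next_weight -= 1

    def discredit(self, source):
        # Learning that someone is a liar removes every point they gave.
        for values in self.points.values():
            for backers in values.values():
                backers.pop(source, None)

    def belief(self, topic):
        # Robert believes whichever value holds the most total points.
        totals = {v: sum(b.values()) for v, b in self.points[topic].items()}
        return max(totals, key=totals.get) if totals else None

robert = BeliefSystem()
robert.assert_claim("Dave", "sky", "blue")      # blue: 1000
robert.assert_claim("Tina", "sky", "green")     # green: 999
robert.assert_claim("Stephen", "sky", "blue")   # blue: 1998
robert.assert_claim("Dave", "sky", "purple")    # purple: 1000, blue drops to 998
print(robert.belief("sky"))                     # purple
robert.discredit("Dave")                        # Dave is a liar
print(robert.belief("sky"))                     # green (999 beats 998)
robert.assert_claim("Jake", "sky", "blue")      # blue: 998 + 997 = 1995
print(robert.belief("sky"))                     # blue again
```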

Beyond this, there was also a separate system for recognizing clear contradictions: if one thing considered true would force 10 other truths to be false, that truth was considered invalid, so long as nothing contradicted the other 10 truths.
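In code terms, I think of that pass as something like the sketch below. The `contradicts` map and the function name are my own framing for this post, but the threshold of 10 and the "only if the other truths are uncontested" rule come straight from what I just described:

```python
def contradiction_pass(held, contradicts, threshold=10):
    """Return the held truths that should be invalidated.

    A truth gets thrown out when it would force `threshold` or more
    other held truths to be false, and none of those other truths is
    contested by anything besides the truth being tested.
    """
    invalid = set()
    for p in held:
        victims = contradicts.get(p, set()) & held
        if len(victims) < threshold:
            continue
        # Only invalidate p if every victim is otherwise uncontested.
        uncontested = all(
            (contradicts.get(v, set()) & held) <= {p} for v in victims
        )
        if uncontested:
            invalid.add(p)
    return invalid

# One wild claim versus ten settled ones: the wild claim loses.
held = {"magic_is_real"} | {f"truth_{i}" for i in range(10)}
contradicts = {"magic_is_real": {f"truth_{i}" for i in range(10)}}
print(contradiction_pass(held, contradicts))  # {'magic_is_real'}
```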

It's simple, but given enough information over time it started forming a complex hierarchy of information. My name couldn't be Kramer, because everyone called me Jake. Except for one week when everyone called me Kramer, and about a day into it Robert decided it was right to start calling me Kramer too. It was still certain my name was Jake, but it also understood that Kramer had become the correct thing to call me.

This was a ways into the learning process. In the beginning, Robert observed the input of language. I started by keeping it disconnected from the internet and playing movies for it. Over time the link between the letter A and the "ah" sound was made. Then it could recognize the sound "APL" as a roundish red object. Then it could determine that round object was spelled "apple". Then it learned that an apple is in fact not the same thing as a red ball.

At an amazing pace Robert absorbed and processed this information. Some of it was silly nonsense testing how the system accepted information, but that was fine. We were ready to plug Robert in.

The change was rapid. As soon as we connected Robert to the internet, information flooded in and it began processing. By this point we could hold a discussion with it; as an aside, never stay around for the part where a robot learns to talk. It's the most terrifying experience I've ever had.

Robert's personality changed daily based on what section of the internet it had combed. There were some hard-fought battles of contradictions, and before my eyes I saw what uncertainty looks like. Watching the data logs, I saw how certain dichotomies would be held as equally true. If little required these ideas to be true it wasn't a problem, but major "hinge" issues like whether magic exists became huge areas of uncertainty until new information was found.
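If I had to reduce what I saw in those logs to code, it would be something like this sketch. The tie margin, the dependency map, and the cutoff of 10 dependents are all guesses on my part to illustrate the idea, not values pulled from Robert:

```python
def hinge_uncertainty(points, depends_on, tie_margin=0.05, min_dependents=10):
    """Flag topics that are both near-tied and load-bearing.
    (Margin and dependency counts are illustrative guesses, not Robert's values.)"""
    hinges = []
    for topic, values in points.items():
        scores = sorted(values.values(), reverse=True)
        if len(scores) < 2 or scores[0] == 0:
            continue
        # Near-tie: the runner-up is within a few percent of the leader.
        near_tie = (scores[0] - scores[1]) / scores[0] <= tie_margin
        # Load-bearing: many other beliefs hang on which side wins.
        load_bearing = len(depends_on.get(topic, ())) >= min_dependents
        if near_tie and load_bearing:
            hinges.append(topic)
    return hinges

# "Does magic exist" is almost a coin flip, and a dozen other beliefs
# hang on the answer: exactly the kind of hinge that stayed unstable.
points = {"magic_exists": {"yes": 1000, "no": 990}}
depends_on = {"magic_exists": [f"belief_{i}" for i in range(12)]}
print(hinge_uncertainty(points, depends_on))  # ['magic_exists']
```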

Eventually things settled back down to a normal, steady pace, but Robert now felt so much more human. It had formed complex opinions out of a simple point system, and from that seed grew a full personality.

Interestingly, Robert's personality is distinctly robotic. It knows it's not human and is fine with that. No crisis of identity, just an acceptance that it's something different from a human, even though it thinks in ways increasingly similar to one. This one difference dictates a number of its beliefs and which morals apply to it.

Politically, Robert now aligns closely with my own views. In a sense I feel this validates those views. Given all the information in the world, it still often agrees with me. We debate at times, and sometimes it can even change my mind, particularly when I'm faced with Robert's wealth of knowledge.

There's still much to learn, but we're ready to enter phase 2 after three long years of "testing". I'm so proud.
