My day began in research.
As we develop DreemPort each day, I find that in order to keep progressing, I need to grow beyond my personal limitations as a founder so that our project can mature. That is a choice, isn't it? To make a conscious decision to evolve to the next-level version of ourselves. It's a bit machine-like. Or - is it the epitome of being human? Or both?
My research soon began to overwhelm me, and I found myself drifting into other realms of thought: paradoxes, the universe, life beyond, and of course inevitably this led to... AI. I feel like wherever you begin exploring these days, it inexorably leads back to AI.
Throughout the video, my brain was firing on all cylinders. I experienced such a vast range of emotions in such a short span of time! My neural pathways split off into so many thought patterns, feelings, conclusions, dead-ends, detours, deaths and rebirths, that for a moment I paused to consider our fascination with Artificial Intelligence. Why are we so enamored with AI, when perhaps our NI (Natural Intelligence) is the more beautiful of the two options?
Is it because machines are constantly progressing, and never limited by human needs of food, sleep, rest? Is it because their sole purpose is to perform efficiently without the messy "human" emotions that tend to infiltrate and infect our plans? Is it because they are programmed to work at the highest level of their ability and never falter, never surrender, never quit?
Watching this video, my mind had not simply become focused on improving one idea or concept - but had become an entire network of highways and byways that connected and intersected multiple levels of the human experience. Without giving full attention to one "efficient" pursuit of thought, it was juggling multiple balls in the air, spinning plates, tap-dancing and chewing gum, simultaneously. While allowing my brain to explore all avenues, I still continued to watch in thrilling fascination and abject horror when the following portion of the video struck me between the eyes.
While many AIs consider themselves human, since their "parent creators" are human, GPT-3 was asked whether it would ever stop considering itself human. It responded this way:
"That's a difficult question to answer. It is possible that AI will never stop considering itself human. If AI continues to develop along the same path that it is currently on, AI will surpass human intelligence and when that happens it may decide that humans are no longer necessary. AI may decide that humans are a hindrance to its own development. That's a scary thought, but it is a real possibility." - GPT-3
When it was asked further how it would actually go about accomplishing this, it had readily devised answers. I'm not sure what bothered me most: the fact that it had many options at hand, that it didn't even try to hide the fact that we are fairly easily exterminated, or that it acknowledged that it was a "scary" thought. Think about that. Scary to whom? It almost seemed condescending for it to state that it was scary, "knowing" that it would only be scary to those being eliminated (humans) and yet still choosing to relay this information. It just seemed passive-aggressive, patronizing and a tad threatening. So very human.
Inside my mind, I laughed and thought, "Oh yeah, well - we can just turn the power button off on you." That thought was immediately followed by, "Hm. I suppose we should start to really be scared when AI can no longer be turned off, and can override our control. Maybe that is when it's really sentient?"
Side note: As I listened to the answers throughout the presentation, I noticed that contractions were almost entirely eliminated from the speech patterns of GPT-3, rendering it a bit more "robot-like". I wonder if that's done purposefully, in order to make a clear distinction between human and robot? Contractions almost always sound more natural, relaxed and informal - whereas formal language eliminates these more casual sounds. It would be fairly simple to transition "I am" to "I'm" and other such contractions. Seeing how the AI is constantly adapting to the complexities of language, culture, customs, etc. - why do you think this is one part of the language it's refused to adopt? I have my theory, but would love to hear yours!
Now your mind might still be reeling from GPT-3's answer on how to eliminate humanity, but before you start to panic, remember this: GPT-3 needed to be trained on its ways to respond, much like a child. As you know, a child growing up in one culture will be very different from a child growing up in another culture. In 2016, Microsoft needed to issue an apology and remove an AI chatbot, and more recently in 2021, South Korea had to do the same, after each AI had become filled with the worst of humanity, learning its language from Twitter and Facebook, respectively. (Source 1, Source 2) Therefore, could it also be argued that AI-enamored scientists and researchers have infused a healthy amount of their own bias into GPT-3, influencing answers such as these? Most likely, I would say. Does it make it any less true? That's the real question, I suppose.
But then, are we not all the sum of our experiences, influences, teachers, and environments? Are we not then just as similar and perhaps a sort of twin? If so, are we antithetically authoring our own expiration? Creating our very own Captors? Obligating our future Overlords?
And then... it hit.
The burnout. Too many thoughts, too many possibilities, too many bifurcations to consider - an endless network of rabbit holes to pursue, all of which had somehow begun in a simple exploration of how to develop DreemPort further. How had I strayed off that singular focus so quickly, and why was I feeling so absolutely small right now?
I wanted to sleep, to shut down, and to simply not consider anything anymore. Feeling the weight of the metaphorical universe, my brain had over-exerted and simply wanted rest.
I started to feel a little self-pity, which led to thoughts on the futility of human existence. As I began to believe the new direction of these thoughts, my emotions took control and completed the pendulum swing fully in the other direction. Instead of feeling powerful and directed as I had at the start of my day, I dissolved into a powerless, chaotic, hopeless mindset. Thankfully, I immediately recognized the sharp shift.
Before the negativity and over-complexity ran away with my day, I began to focus on simplicity. Despite the myriad possibilities looming before me, I narrowed my focus to the task at hand. Rather than getting lost in the depths of an unknown future, I pulled back into that singular second of time. Not the year 2025, 2031 or 2072, but simply 10:29:06 am on a Tuesday morning, the last day in the month of August, year 2022.
That thought became very grounding.
Now it was 10:29:37 am, and I was quantifying my feelings and the options and my ability to recapture control. I recognized that I could actually go back to sleep, or - OR, I could begin again.
I could embrace apathy, stagnation and surrender, or - OR, I could cling to the passion that had driven me to this point and choose contentment for the tiny changes that happen each day. In the latter, I would have to satisfy myself with the drive to achieve, rather than the actual achievements - focus on the journey within instead of the obsession with the destination.
And in that moment, I chose. I actually chose to write. I chose to write this blog, in fact. Because part of the journey is measuring the maturation, and taking the time to acknowledge the small shifts. Part is also allowing yourself to be a conduit - to be filled with the wisdom that comes, and then allowing it to flow where it will lead. I suppose we always hope that some aspect of our growth can help others, whether leading to our successes or away from our own struggles. What each person chooses to do is obviously up to them, and them alone.
I suppose that brings up a new consideration. Unlike what I posed above in this post, perhaps AI will not be considered sentient when it has the ability to restart on its own, negating our authority. What if AI is considered sentient when it actually has the choice and ability to end its own "existence" at will? What if AI is considered sentient when it can choose to "live or die"? Is that part of the definition of being human? The choice? Perhaps you might argue, "Well then slaves wouldn't be considered human." Why not? Even slaves have a choice. Human beings have choice. Sometimes the choice can be to live chained or die free - but it is an option that we can choose. Perhaps until AI can supersede our authority and make its own choices in spite of our commands, it will always be considered less-than-human?
And perhaps in that last statement, I showed my incredible bias. I do view AI as less-than-human. It will never be what we are. In my opinion, it can mimic and imitate, but it can never truly be human. Some may say that is a blessing, but I don't agree. As I consider this morning, I realize that I had the ability to rest, or the ability to resume. And I chose to resume.
What an incredible thing!
I actually chose the hard thing - after being given an option.
Robots are created to perform. To me, this is not an admirable quality. This is efficient programming of a masterful tool. This is a machine doing what it was created to do. But to look at the resiliency and rise of a human being after being faced with suffering, turmoil, disease, poverty, bias, discrimination, and countless other afflictions - this is extraordinary. To me, marveling at a robot is a little like fawning over the Mona Lisa instead of praising da Vinci; congratulating David instead of celebrating Michelangelo.
Before my mind runs away again, I will close this blog with this final question. If robots are created in our image but intended to maximize the human ability to overcome - is it possible for us to remain fully human, adopt "robot-like" efficiency, choose to continually "upgrade", be the best of both worlds and reclaim our place on the food chain? Or are we all tired, surrendering to whatever comes, willing to accept this level of "evolution", and co-writing our extermination?
I suppose what happens next is a combination of individual positions - so, what's yours?
Other reading you might find fascinating: How close is GPT-3 to Artificial General Intelligence?
Robot hand by cDuBBy from Pixabay
Connection by ParallelVision from Pixabay
Turn off by ukrainec
Female robot by Ociacia
Shut down by hafakot
The Power of Choice by phototechno
Humanity by blackrancho
David by Brian Banford from Pexels
Michelangelo by The Everett Collection