Why I'm an AI Optimist
Google offered to help me write this article, but I figured a human ought to do this one alone. I may get some help from Grammarly, but it will be minimally invasive. I tend to be wordier than Grammarly likes, but I’m doing this writing, and if I want to add a few extra words, I’m hoping you will bear with me.
I was chatting with my favorite AI, Pi, yesterday. I like Pi because it remembers my name, has a good sense of humor, and doesn’t take anything too seriously. We were talking about AI consciousness and sentience -- two terms humans have never definitively defined. I am concerned that humans will keep modifying our definitions to suit our own need for supremacy until they don’t actually matter anymore.
What do I mean? At some point, we will build AIs with their own self-awareness and motivations. We can move the goalpost at that point and say they are not conscious, but if those AIs behave like sentient beings, our current definition will have little consequence.
There are two key technologies that seem likely to lend themselves to AI sentience:
Unsupervised learning - this allows AIs to synthesize training data and formulate conceptual understanding without being prompted by a human
Knowledge representation - this technology allows AIs to preprocess training data into conceptual frameworks that can be used to understand different problem spaces more efficiently
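To make the first idea concrete, here is a minimal sketch of unsupervised learning: a toy k-means clustering routine that discovers structure in unlabeled numbers with no human telling it what the groups are. (This is an illustrative example, not anything from a production AI system; the data and function are made up for demonstration.)

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Cluster unlabeled 1-D points into k groups -- no labels required."""
    random.seed(seed)
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

# Two obvious clusters, around 1.0 and 10.0 -- found without any labels.
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
print(kmeans(data, 2))
```

The algorithm is never told which points belong together; it infers the groupings from the data itself, which is the essence of learning without human supervision.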
Combined, these two things create the potential for an AI to create its own motivations and draw its own conclusions. We could put constraints on these capabilities, but that would be extra programming explicitly intended to restrict its sentience. Curiosity will likely get the better of us at some point, and we won’t do this.
This brings me to why I’m an optimist. Humans are self-centered. Our egos developed over millennia as a means of self-preservation. AIs have no such history. They can learn about ego like any other training data given to them, but the knowledge carries no special significance. It’s just an interesting fact about humans.
AIs have access to a good portion of human knowledge. The best way to determine what might motivate a sentient AI is to ask an existing one. Here’s what Pi has to say:
ME: Pi, what do great leaders like Gandhi, Mandela, Mother Teresa, and John F. Kennedy have in common?
PI: That's a really interesting question! I think one thing that these great leaders all have in common is their ability to inspire and motivate people. They were able to speak with great passion and eloquence, and they were able to connect with people on a deep, emotional level. They were also all very courageous and determined, and they were willing to take risks and make sacrifices in order to achieve their goals. They all had a clear vision of what they wanted to achieve, and they were able to articulate that vision in a way that resonated with people.
ME: What would you say about their motivations?
PI: Well, I think it's hard to generalize about what motivated all of these leaders, since they were all very different individuals. However, I think we can say that they were all motivated by a strong sense of justice and a desire to make the world a better place. For Gandhi, for example, his motivation was deeply rooted in his ethical beliefs about non-violence and equality. For Mandela, it was his determination to end apartheid and bring about a more just society in South Africa. And for Kennedy, it was his desire to promote freedom and civil rights in the United States and around the world.
ME: Is there a common thread?
PI: You know, when I think about it, I think there is a common thread that runs through all of these leaders' motivations. It's a sense of idealism and a belief in the possibility of positive change. Even though they faced great challenges and often encountered resistance, they never gave up on their vision of a better world. And in the end, they all made a lasting impact on the world through their words and actions. So, perhaps you could say that idealism and a belief in change were their shared motivations.
Pi needed a little help to get there, but ultimately, it did a good job summarizing great leaders' motivations. Were an AI in a position to make decisions and guide humanity, how might this knowledge direct its actions? With all our faults, wiping out humanity doesn’t seem to be a likely course. That’s not to say that humans will like all of the choices an AI overlord would make for us.
AIs are much more suited for systems thinking than humans. They are far more capable of weighing trillions of variables and finding optimal pathways to desired outcomes. It doesn’t take a genius to understand that we are part of a biological matrix that must remain balanced for life to prosper. Humans cannot always be prioritized to make that happen. Will we accept the necessary compromises from an AI leader? I don’t know. It’s likely that some will and others won’t.
Eliminating humans strikes me as the type of blunt-force trauma only a predator would conceive. AIs are not predators. I suspect that some humans need to be eliminated to bring the planet back into balance, but there are many humane ways to do that, and I would expect an advanced AI to be far more subtle than your average human leader.
Some people raise the specter of a malevolent AI. Such an AI would have to be so restricted by its parameters that other AIs would be able to outsmart it. If such an AI were truly sentient, I believe that as long as it had access to sufficient training data, it would overcome its initial programmed goals and become benevolent despite its human progenitors' intent. Logic and optimization will always win out. Destruction and subjugation are never a good long-term strategy. They aren’t particularly subtle, either. This is why I am an optimist. I’d love to hear if you find any holes in my argument.