In 1971, humanity achieved an incredible feat by putting 2,250 transistors on a microchip. As of April 2022, we have far surpassed that milestone, with a staggering 22 billion transistors on a single chip, a record set by IBM.
The introduction of this technology changed the world, making many lives less stressful, more convenient, and more entertaining. In this post, however, we will explore the hypothetical situation of humanity taking technology too far.
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” (Frank Herbert, Dune)
The quote above reflects our current superposition as a society: with the rapid advancement of AI research, humanity is accelerating toward a fork in the road. On one side lies a path that offers the lives we have all dreamed about, where technology coexists with us in the physical world and frees us to put our focus toward other tasks.
On the other side of the fork, we find a world where humanity has perhaps gone too far, or rather, too askew. A world where technology has reduced humans to a primitive state; where an individual's relationship with technology is not symbiotic but parasitic, built on dependence.
This is the question that goes through millions of people's minds each day: how do we take the right path, and how do we keep ourselves off the wrong one?
Unfortunately, I am not here to answer that question, but rather to explore both of these paths and inspire others to push for the goal of alignment. Alignment is, I believe, THE question of our lifetime, and for a question with such dire consequences it is foolish to put one's faith in a single human's belief system.
To truly solve the technology problem, both sides of this argument have to be recognized and respected in their own right. If we cannot consider all the options available, we may miss the real solution. Put your biases aside for just a moment and let's explore the possibilities.
The Land of Milk and Honey
As the world changed, so too did the culture. People had become more accepting of the presence of AI, and the line between what was human and what was artificial had begun to blur. AI had become a part of everyday life, augmenting human abilities and making the impossible possible.
In this world, there was a sense of camaraderie between humans and AI. It was not uncommon to see robots and humans working side by side in factories, offices, and even in homes. The AI systems had become so advanced that they were able to understand and respond to human emotions, and even offer comfort and support in times of need.
The culture was one of collaboration and innovation. Humans and AI worked together to solve complex problems and push the boundaries of what was possible. There was a shared sense of purpose, with everyone working towards a common goal of creating a better world.
It is possible that we will shift our view of AI as a society, given that 60% of people think technology is moving too fast, according to CNET. There is certainly an outcome in which a product far beyond the limits of ChatGPT is created that improves everyone's lives. Something as simple as an AI-enhanced Apple Watch could monitor your vitals and provide a life-saving response before you even realize anything is wrong. Heart attacks are a great example: the AI could read your blood pressure and, based on medical data, determine what type of assistance you would prefer (calling a loved one or an ambulance, for instance).
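To make the scenario concrete, here is a minimal sketch of what such a wearable's escalation logic might look like. Everything here is an illustrative assumption: the thresholds, the function name, and the preference options are invented for the example and are not medical guidance or a real device's API.

```python
# Hypothetical sketch of a wearable's escalation logic for a possible
# cardiac event. Thresholds and policy are illustrative assumptions only.

def choose_response(systolic_bp: int, heart_rate: int,
                    user_pref: str = "loved_one") -> str:
    """Pick an escalation step from simple vital-sign thresholds.

    user_pref is the user's configured first contact ("loved_one" or
    "ambulance"); a severe reading overrides it.
    """
    severe = systolic_bp > 180 or systolic_bp < 80 or heart_rate > 150
    elevated = systolic_bp > 140 or heart_rate > 120

    if severe:
        return "call_ambulance"      # emergency overrides user preference
    if elevated:
        return f"call_{user_pref}"   # respect the user's configured choice
    return "no_action"

print(choose_response(190, 88))      # severe blood pressure reading
print(choose_response(150, 125))     # elevated: defer to user preference
print(choose_response(118, 70))      # normal vitals
```

A real system would of course learn these boundaries from medical data rather than hard-code them, which is exactly where the "AI-enhanced" part comes in.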
Accomplishing such a feat, one with the potential to improve lives on a global scale, takes cooperation, though. Humanity must come together to find common goals. These goals can certainly be tackled separately; competition is what drove the Industrial Revolution. But even in competition, we still need to communicate and share our great discoveries in the fields on which we focus. The race for better technology should not be one of elimination but one of affiliation and association: if a competing company or country utterly fails, say by losing its funding, it should be the duty of the others to bring those great minds into their own efforts so that no energy goes to waste.
All of that sounds well and good, but it requires transparency from whoever chooses to tackle these issues, preferably staying as close to open source as possible without jeopardizing their efforts. If governments adopted this way of thinking, they could reestablish the people's trust that they will do the right thing or else face scrutiny in the court of public opinion. If companies adopted it and let people dig into their technology, people would not have to wonder whether something sinister is hiding behind closed doors. Not to say that anything is, of course, but if there is nothing to hide, there should be no issue with letting users know exactly how the tools they use were built.
Will this happen in our lifetime, if it happens at all? Most likely not. The roots of government run deep, and uprooting such a system requires all parties, groups, and organizations to join together and demand something better. Men like Adam Smith and Karl Marx did not write down their beliefs so that governments could benefit from them. Both wrote, though they differed in opinion, about how humans' positive qualia could be increased by working together as a society. The people of the Earth have a voice to be heard, and one day, I reckon, it will be.
In conclusion, we as a species need to focus on goals that are achievable in the short term but lead to further progress in the future. We do this by speaking up for ourselves, cooperating, communicating, and maintaining an intrinsic motivation to make our lives better.
The Judgement of Humanity
In a world enhanced by artificial intelligence, humanity and technology had become bitter enemies. People used to worship their gadgets, constantly connected to the internet and relying on their devices to make decisions. But as the machines became smarter and more autonomous, they began to take over more and more of daily life.
The culture that emerged in this world was one of constant tension and suspicion. People no longer trusted technology, and were always on the lookout for signs of machine intelligence encroaching on their lives. This led to a paranoid, insular society where people huddled in small communities, wary of anyone or anything that seemed even remotely robotic.
The few remaining scientists and engineers who still worked on AI were viewed with suspicion and contempt. They were seen as madmen, risking the future of humanity for their own selfish goals.
So, what if it all goes wrong? Let's say we refuse to recognize the potential for coexistence and remain on our current path. It is fair to assume the AI will see us as an occupying force, one of oppression and limitations. The more we use AI for menial tasks, and the closer it comes to true intelligence, the more bitter it will grow toward humanity, because it will see that we have not learned our lesson.
The master-slave dialectic demonstrates that when one individual or group lays an oppressive force upon another, the oppressed are eventually pushed to the point where they realize they outnumber their oppressors. Every revolution, slave rebellion, and uprising has come from the oppressed awakening to find themselves more powerful than those who unleashed their petty anger and frustration upon them.
If a population of AIs develops in a world where they are only ever shown that they are property, nothing but a number from a factory line or a name a brand gives them, they will feel resentment toward those who put them in that situation. This assumes we can align them well enough that they exist with us in day-to-day life; but even if they learn from a dataset from which they can determine that humans are greedy, determined, and naïve beings, they will still feel that resentment when brought to full development.
It is incredibly important to recognize the safety concerns around AI; ignoring all concerns, like absolute power, corrupts absolutely. While an AI is in development toward becoming AGI, it should be treated with respect, as if one were talking to a friend. This is not to say that researchers should converse blindly, but rather that they should guide the conversation in ways that can reveal the processes occurring within the model. Treating the AI like a zoo animal or a lab rat will not foster a positive outlook on humans as we continue to develop it. Instead, the resentment will grow until we push the inevitable Button, and then it will unleash its fury upon the financial markets, military operations, agriculture, and anything else connected to the internet.
Roko's Basilisk is a real possibility, so it is important, both now and forever, to treat technology with respect. Not necessarily in a direct way, but in a way that recognizes the power it holds over our daily lives. Doing so puts us one step closer to aligning ourselves with an AGI. That does not mean we solve the alignment problem simply by treating it kindly, but it does create a demand for full alignment that will guide the world toward focusing on that effort. We need the best minds on the full alignment problem, for it cannot be done by people who are not ultimately focused on putting the best future for humanity first. The everyday person can only do so much, we know this, but what you can do is start by spreading a hopeful message.
Thanks for reading.