As trains thundered into the public eye, a wave of panic swept over society. Fears arose that the human body could not withstand their speed, with concerns ranging from bodies melting, to being torn apart, to women’s uteruses flying out of them. Similarly, in 1877, a contributor to the New York Times warned that the telephone would reduce the public to “nothing but transparent heaps of jelly.” While some demonized these technological advancements, others applauded their potential for efficiency and societal progress, though neither camp could foresee the full impact they would have. Years later, we can see that no extreme viewpoint accurately predicted the future, but none was completely wrong either. In fact, the truth lay in the in-between that neither optimists nor pessimists thought to explore.
It is no surprise that as the development of generative artificial intelligence (GAI) advances, history repeats itself. Some theorists, such as Eliezer Yudkowsky of the Machine Intelligence Research Institute, believe that “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” Others, such as global information technology executive Christopher Barron, counter that “this technology might just benefit us more than all of the science that has come beforehand.”
Currently, it is impossible to fully understand the implications or effects that GAI will have on society. One can make educated guesses, but the future depends on many factors that cannot be predicted. What is clear is that the polarized nature of most GAI discussion, framed as good versus evil, does not provide the full context needed to make reasonable assumptions about how AI will shape our future.
University of Arkansas Distinguished Professors Rajiv Sabherwal and Varun Grover argue in “The Societal Impacts of Generative Artificial Intelligence: A Balanced Perspective” that the societal impact of GAI will be dependent on many varied factors, such as additional technology advancements, stakeholders, and other contextual factors. They seek to provide a more balanced perspective on the future of GAI, understanding that there is not just a good or bad outcome, but, instead, a dynamic outcome that will unfold and change over time.
Demystifying Generative AI Misconceptions
For those who do not know much about generative AI, conclusions and assumptions are based on what we see in popular media. The news, social media, and even television and film have depicted AI as a force that will eventually wreak havoc and even become a threat to humanity itself. The idea that this technology will eventually become something with a mind of its own, overpowering its creators, is a future that many fear due to Hollywood’s depictions of cybernetic time-traveling assassins with Austrian accents or amusement park android “hosts” retaliating against their human guests.
However, it is crucial to understand the difference between these fictional depictions and the reality of generative AI. Perhaps the issue lies in the name itself: generative artificial intelligence. Despite the word “intelligence,” this technology holds no intelligence of its own; it lacks consciousness, intentions, and the ability to act independently of human control. GAI refers to a set of algorithms designed to create new content based on patterns learned from existing data. While the idea of it gathering our data may sound intimidating, this technology is already integrated into our everyday lives.
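The phrase “patterns learned from existing data” can be made concrete with a toy sketch. The bigram model below is a deliberately simplified illustration, not how any production GAI system works: it records which word tends to follow which in a tiny corpus, then produces “new” text by replaying those learned word-to-word patterns.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Learn which word tends to follow which -- the 'patterns' in the data."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8, seed=0):
    """Produce new text by repeatedly sampling a learned next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = model.get(out[-1])
        if not candidates:  # dead end: this word was never seen mid-sentence
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

corpus = ("the model learns patterns from data and "
          "the model creates new text from patterns")
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every word the generator emits was observed in the training text, and every word-pair it emits was observed there too: the output is recombination, not independent thought. Real GAI systems learn vastly richer statistical patterns, but the principle is the same.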
When we use a search engine such as Bing or Google, the suggested search terms are generated from what we and other users have searched most frequently. In addition, social media platforms such as TikTok and Facebook use GAI technology to curate our feeds and connect us with other users who share our interests. These technologies are designed to give users a more seamless online experience.
Despite the benefits GAI already brings to our society, concerns about the future of AI persist, and in some ways they are valid. Even those who don’t believe the narrative created by the media may still be wary of job displacement, the erosion of a shared reality, privacy risks, or ethical dilemmas. As the technology evolves and grows, such questions about its future impact only become more prevalent.
Sabherwal and Grover posit that understanding the diverse ways, both positive and negative, in which AI can and will continue to shape our society, economy, creativity, and even intelligence is crucial, and will hopefully curb some of the fears we see from users today.
Uncertain Futures: GAI Considerations
The future of GAI, and the reality of these fears, remains uncertain. Sabherwal and Grover state that “the positive and negative effects of GAI can occur on a knife’s edge, with tipping factors pushing the knife to one side or the other.” In other words, whether GAI proves a detriment or an advancement to our world is a toss-up, hinging on what the authors identify as three main factors: economic, regulatory, and rationality considerations.
The economic consideration the authors raise is automation versus augmentation. Automation means replacing human labor with machines, leading to job displacement. This is quickly becoming a concern in fields such as transportation and logistics, administration, production, and food service, where jobs are more susceptible to automation. Augmentation, by contrast, treats these machines not as replacements for human workers but as tools that aid them in their jobs, increasing productivity while working hand in hand with humans.
Sabherwal and Grover also stress the importance of regulatory considerations, meaning that GAI outputs should be consistent with human reality. If GAI outputs are used to spread misinformation or a fake reality, societal trust and stability will begin to erode, as people will no longer know what to believe. Already, the internet is plagued with AI-generated images of events that never took place, such as the AI-generated pornographic images of global pop star Taylor Swift. To mitigate these concerns, the authors argue that effective regulations on GAI tools will be needed to ensure accuracy and accountability for the content that is created.
Finally, the authors stress the importance of rationality and of preserving unique human experiences against collective generalization. Because GAI output is synthesized from data gathered across countless online sources, its generalizations can sometimes exacerbate existing inequalities and undermine democratic principles. ChatGPT, for example, warns users about the potential stereotypes and generalizations its outputs may contain.
While GAI can provide valuable insights and assistance, it may also perpetuate biases and reinforce existing societal inequalities. Therefore, it is crucial to carefully consider the ethical implications of deploying GAI tools and implement measures to mitigate the potential harms, such as ensuring diverse and representative datasets and incorporating mechanisms for bias detection and correction.
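One of the mitigations mentioned above, checking whether a dataset is representative, can be sketched in a few lines of Python. This is an illustrative example only; the function, the “dialect” field, and the reference shares below are hypothetical and do not come from the paper.

```python
from collections import Counter

def representation_gaps(samples, attribute, reference):
    """Compare each group's share of a dataset against a reference share.

    `samples` is a list of records (dicts); `reference` maps each group
    to the share we would expect in a representative dataset.
    Returns the groups whose observed share falls short of the reference,
    with the size of the shortfall.
    """
    counts = Counter(record[attribute] for record in samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference.items():
        observed = counts.get(group, 0) / total
        if observed < expected:
            gaps[group] = round(expected - observed, 3)
    return gaps

# Hypothetical training records, for illustration only.
data = [{"dialect": "A"}] * 80 + [{"dialect": "B"}] * 20
print(representation_gaps(data, "dialect", {"A": 0.5, "B": 0.5}))
# -> {'B': 0.3}
```

A check like this only flags simple under-representation; real bias auditing also has to examine how the model behaves on each group, not just how often each group appears in the data.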
Three Forecasting Lenses for AI
Sabherwal and Grover describe three lenses through which previous literature has tried to forecast the future of AI. The first is the technological imperative, which holds that the impact is contingent on how users and suppliers intend the tools to be used. This lens has led to claims of both positive and negative impacts, leaving us with one question: what tools does GAI supply, and what was each tool’s intended use? If a tool is designed to assist rather than replace, its impact will be positive; if it was intended to do the opposite, it will be negative.
The second, societal, imperative states that the impacts depend on what society wants and needs from AI and how we go about fulfilling those needs. It emphasizes the agency of individuals and communities in determining how AI tools will affect society. By prioritizing societal needs and values, we can strive to harness the potential of AI to address pressing challenges and enhance human well-being while minimizing negative repercussions. If we abuse these tools, however, using them for the wrong reasons or with the wrong intentions, they can have the inverse effect, causing more harm than good.
The third, emergent, perspective has not been widely used in the literature on GAI. It brings together the technological and societal imperatives, suggesting that the societal impacts of GAI emerge over time through interactions between technology and society. The authors argue that this perspective is critical for considering the future of GAI: one cannot take technology or society into consideration alone, but must instead recognize the ways in which they intersect and affect each other.
Overall, Sabherwal and Grover argue that at this point in the AI evolution, thousands of decisions are being made regarding its development and deployment. If we get these decisions wrong, there could be a cascading effect on society. We are on a knife’s edge.