Harnessing the Power of Generative AI: Opportunities, Risks, and Responsibilities

June 30, 2023  |  By Cindy Moehring

Generative Artificial Intelligence (AI) has evolved from a theoretical concept to what’s been described as a Big Bang Disruptor – a new technology that immediately offers something better and cheaper than anything that already exists.

ChatGPT, for instance, surpassed 1 million users in just five days, eclipsing the adoption rates of Facebook, Spotify, and Netflix. Just because a technology is widely used, however, doesn’t mean it is wisely used.

The challenge for business leaders is to understand the distinct role generative AI can play in society – is it a tool, a partner, or an independent agent? – and the risks associated with each of those roles.

As founder and executive chair of the Business Integrity Leadership Initiative at the Sam M. Walton College of Business and a former C-suite executive, I’ve had many conversations with peers, executives, and experts about these topics. This article outlines the challenge and the response I believe leaders should have to this fast-evolving technology, including the role businesses should play in creating, implementing, and influencing frameworks that minimize the risks and maximize the benefits of generative AI.

The Role Generative AI Plays and Our Relationship to It

P.W. Singer, one of the world’s most prominent futurists, has described three roles for AI generally – and our relationship to it – that I think are instructive for understanding generative AI more specifically. First, we can think of it as a tool – something humans use to accomplish tasks; second, as a partner – a “thing” we consider a teammate and part of the organization; and third, as an agent for humans – something we delegate a task to that operates on its own.

Tool  

Generative AI can function as a tool to amplify human productivity and creativity. It can assist in generating ideas, automating repetitive tasks, and providing options that humans can use as a starting point for further refinement or customization. 

For example, in graphic design, generative AI-powered tools can analyze design elements and styles and then generate variations or suggestions for a new logo based on specific keywords or visual ideas provided by a human designer. The human designer can then review these generated options and select or modify them as needed, ultimately making the final creative decisions.

In this scenario, the generative AI remains subordinate to the human user, who maintains full control and decision-making authority throughout the creative process. 

Partner  

Generative AI can also collaborate with human artists or writers, acting as a co-creator and offering valuable input throughout a creative process, such as music composition or writing. 

In creative writing, for example, generative AI can assist human writers by generating story prompts, character profiles, or even entire storylines based on given themes or genres. The human writer can use these AI-generated suggestions as inspiration and build on them by adding ideas of their own, creating plot twists, and enhancing narrative elements. When writer’s block strikes, the generative AI system can offer creative suggestions that stimulate the writer’s imagination and ingenuity.

This collaborative process allows the human writer to benefit from the generative AI's ability to come up with novel ideas, while still maintaining creative control and making subjective final decisions.   

Independent Agent 

Generative AI can also be designed to operate autonomously, engaging in conversations with users, providing information, answering questions, and even performing tasks on their behalf.   

OpenAI's ChatGPT is an early, imperfect example of this. It can generate coherent and contextually relevant responses to prompts or queries. While ChatGPT often responds with inaccurate information today, the art of the possible is clear. Imagine ChatGPT or another generative AI tool accurately summarizing analyst and company financial reports for an entire industry and making reliable investment decisions, or accurately reading medical images and diagnosing previously undetectable medical concerns with consistent precision – all without constant human supervision or partnership. While that’s not fully possible today, the day when it is possible is not far away.

The Risks Associated With AI Operating as a Human Partner or Agent

The risks become greater, and the need to keep AI in check through responsible and ethical frameworks grows exponentially, as we move along the continuum from tool to independent agent. Some of the main risks involve defining and developing AI around appropriate ethical values, ensuring that it is transparent and reliable, and guarding against over-dependence on it.

Ethical Concerns 

As the use of generative AI increases, there is growing concern over whose ethical values are, or should be, the standard for the technology. Ethical norms and social standards vary across cultures and contexts, and the use of AI to spread disinformation is a real threat (or an opportunity for an actor with questionable intentions). Other common ethical concerns include an AI tool making decisions that are biased or discriminatory because of biases in its training data.

Transparency 

AI models, especially deep learning models, can be "black boxes," meaning it is hard to understand the reasoning behind their decisions. This lack of transparency becomes a significant issue when mistakes are made or when questions arise about a result, such as a creditworthiness determination. It may be difficult to ascertain why the result was reached and, if it is wrong, how to correct it.

Reliability 

AI systems, particularly those using machine learning, can behave unpredictably or provide incorrect information, especially when presented with situations that differ from their training data; a model confidently generating false information is referred to as an “AI hallucination.” As we move along the continuum toward AI being used as an independent agent, an unreliable or incorrect outcome becomes risky in critical scenarios, such as a significant investment decision or a medical diagnosis.

Dependence 

AI should never be relied on exclusively, as that creates a single point of failure. Over-reliance on AI systems could also lead to skills decay among humans. If an AI system fails, it is important that humans understand the task well enough to take it over.

Regardless of where generative AI is on the continuum, to mitigate risk (both known and unknown), it is crucial that human control remains at the core. We should choose how and whether to delegate decisions to AI systems to accomplish our objectives. We must thoughtfully manage not only the potential long-term risks if humans do not remain in control, but also the near-term dangers mentioned above through rigorous testing, monitoring, auditing, and measured release and adoption of new generative AI applications. 

The Steps Businesses Should Take To Ensure Appropriate Control

Whether AI is used as a tool, partner, or agent, a new class of responsibilities has emerged for businesses. There are multiple dimensions to these new business duties, including self-governance, collaborating with regulatory bodies, and influencing international cooperation.  

Self-governance 

Regulation lags innovation, and businesses must boldly step into the void. With generative AI, the void is vast because of the technology’s fast adoption and its widespread ability to disrupt ways of working across industries. That demands an equally fast response from businesses, one flexible enough to adapt over time.

Businesses should focus on creating AI frameworks, principles, and policies that align with their corporate values. Responsible and ethical practices should be employed, such as preparing and reskilling employees and developing processes for identifying and certifying ethical AI products and services.  

Businesses should also give developers a controlled environment – an AI “sandbox” – in which to experiment, train, and test AI models without affecting production systems or data. This allows for rigorous testing and experimentation while minimizing risk. It can also facilitate learning and innovation, as developers have the freedom to try out new ideas without fear of major consequences if they fail.

Shaping Governmental Regulation 

Businesses have a role to play in helping lawmakers understand AI so they can regulate it appropriately, in a way that maximizes the potential while mitigating the risks. This is a fast-moving area, and some lawmakers are simply not as steeped in the technology as businesses are.

About one-third of the states are considering AI legislation. Congress is considering federal legislation, the White House has announced its framework, and multiple executive branch agencies are aiming to enact federal regulations.  

A patchwork is emerging that will be difficult for businesses to navigate without a certain level of coordination between and among states and the federal government. Businesses should encourage cooperation and collaboration among policymakers at these various levels. 

International Cooperation 

Businesses should also be aware of what is playing out on the global stage regarding AI regulation. Influencing global cooperation among policymakers, where prudent, is key.  

A global standard for AI regulation sounds good in theory, but the likelihood of that happening is low for a number of reasons. Most notably, there is a fundamental lack of shared values between democratic and nondemocratic countries; concepts such as liberty and personal privacy are not universally accepted. If U.S. businesses want to ensure that human control of AI aligns with democratic values, they must encourage the U.S. and its allies to move quickly.

EU countries are among those moving quickly, and U.S. multinational companies should encourage international cooperation with the EU to align and advance our national efforts. The European Parliament recently approved the AI Act, a 100-page statute that would, among other things, preemptively ban AI applications deemed to pose “unacceptable” levels of risk. It would also require other AI applications to obtain pre-approval and licenses before use in the EU. Like the EU’s General Data Protection Regulation (GDPR) in 2018, the EU AI Act could become a global standard – and one more closely aligned with U.S. interests than a standard set by China, for example.

The Immediate Need for Proactive Business Action

Businesses have a key role to play in ensuring that humans have the appropriate level of responsibility and control over AI, regardless of whether it is used as a tool, partner, or independent agent.

They should act quickly to self-govern AI in an adaptable and flexible way while heeding the lessons learned from mistakes made in deploying other technologies, such as social media. Business leaders should educate policymakers and collaborate with them to speed up the policymaking process.  

AI itself does not respect geographic boundaries, within the U.S. or beyond. Thus, to ensure the right values are at the heart of any international framework, U.S. businesses should encourage U.S. policymakers to cooperate with international policymakers aligned with U.S. interests. Without this type of cooperation, we risk an unwieldy and unmanageable patchwork that could stifle the innovation and positive potential that AI offers.

Post Author:

Cindy Moehring is the founder and executive chair of the Business Integrity Leadership Initiative at the Sam M. Walton College of Business at the University of Arkansas. She recently retired from Walmart after 20 years, where she served as senior vice president, Global Chief Ethics Officer, and senior vice president, U.S. Chief Ethics and Compliance Officer.