
The Human Need for Ethical Guidelines Around ChatGPT

March 03, 2023  |  By Cindy Moehring

Artificial intelligence (AI) products like ChatGPT have created the next inflection point for technology. At each previous step, from the dawn of the internet to the advent of personal computers, mobile devices, and the rise of social media, we’ve learned our ethical lessons in hindsight. It’s time for the business world to take those lessons and apply sound ethical principles to generative AI before it becomes mainstream. 

Corporate self-regulation and the proactive application of ethics to emerging technology are a must. The law simply cannot keep up, and the absence of laws is where ethics plays its most vital role. The application of values and sound ethical principles allows companies to build and maintain a bond of trust with their stakeholders – employees, suppliers, partners and customers.  

Business is the most trusted societal institution in the 2023 Edelman Trust Barometer, outranking NGOs, the government, and media. Companies thus have a golden opportunity to build on this positive momentum by responsibly creating and using generative AI.

As guidance, companies should consider common ethical AI principles such as fairness, reliability, explainability, transparency, privacy, safety, responsibility, and accountability.  

Is the technology fair and impartial?

ChatGPT is trained on and learns from very large public data sets, and, given that bias exists in our world, we shouldn’t be surprised that the tool sometimes returns biased answers. This reality should serve as a caution when reviewing or using the tool’s results; human editing for bias and correct use should remain the standard. 

Additionally, the tool’s training data was largely derived from Western sources in English, so it may present a Western perspective and bias. While ChatGPT attempts to provide a balanced perspective, it needs more time to become a tool with a truly global one. Global companies should be aware of this, and managers should review information the tool provides before using it in a global context. The perspective or tone could be mismatched with the intended audience and therefore seen as culturally biased.  

Is the tool reliable?   

ChatGPT does not always provide reliable or current information in response to questions. Examples of the tool returning inaccurate information include mixing up characters in a movie and getting the setting of a book wrong. Also, the data set ChatGPT was trained on ends in the year 2021, so don’t ask it about “Top Gun: Maverick” or “Black Panther: Wakanda Forever.”  

While the tool may be good at writing first drafts of emails, letters, advertising copy or social media captions, it should be seen only as a starting point for research or more complex information – not the end point.     

Is the technology explainable and transparent? 

ChatGPT is not explainable in its current form. It does not give citations for the information in its replies unless you specifically ask it to. The chat feature embedded in Microsoft’s Bing search engine, however, is built on the same architecture as ChatGPT, and it does provide citations for its replies.   

This transparency allows companies using Bing’s chat feature to check the sources for reliability and confirm the information is accurate, balanced, and fair. Companies using ChatGPT should check the information against reliable sources. Regardless of the tool used, final written work products should be edited and reviewed by people, who should be transparent about their use of ChatGPT or similar products.   

Is the technology safe and responsible?   

Microsoft CEO Satya Nadella says the company’s tool is safe and responsible. He has acknowledged, however, that safety and responsibility for the technology are not a one-time achievement but a continuing process.  

For example, a New York Times reporter had a long conversation with Bing Chat in which the bot stated its desire to be destructive, human and powerful, and professed its love for the reporter, even after the reporter attempted to change the topic. Microsoft responded quickly by limiting the number of questions per session and questions per day that an individual can ask. 

Companies using AI can do their part by setting guidelines around their use of the technology. This governance should be grounded in AI ethics principles that are specific to the company and tie back to its core mission and values. For example, the tools should be for business use only. Furthermore, companies should ensure that their employees avoid sharing confidential information, including intellectual property or personally identifiable information.   

How should privacy be taken into account?  

Privacy is an evolving topic when it comes to new technology. There is no federal privacy law regulating AI, although the American Data Privacy and Protection Act is under consideration in Congress. The most proactive privacy step by a tech company has been Apple’s move to let individuals control their app tracking permissions so that apps cannot sell or distribute an individual’s information without consent.   

As AI evolves and merges with other disciplines such as neurotechnology, companies will need to address new types of privacy rights such as “mental privacy” with their employees. For example, sharing your food or music preferences may be noncontroversial but what about your brainwave data? The technology exists for neurotech wearable devices to track fatigue, monitor attention and focus, and even spot early-stage cognitive decline. That type of information could be very valuable to companies, particularly those in industries such as transportation (think truck drivers and airline pilots).  It's also helpful to individuals monitoring their own health. 

Companies should be proactive with policies that respect employee rights and be transparent with their employees and other stakeholders, who should know what information is being collected, why, and for how long. Among other things, proactive company privacy policies should state that only data required for the intended purpose will be collected and that it will be kept for the shortest time necessary.  

Waiting for a law to be passed in this area is not a winning strategy for companies to maintain trust with stakeholders. Companies should instead stay abreast of laws that are enacted and tweak their policies where necessary for alignment.   

Who is accountable?    

One of the biggest risks when new technology is designed and implemented is failing to identify the responsible party when things don’t go as planned. This is another area where governance comes in.  

Companies would be well-advised to define the rules of the road up front through a cross-functional AI committee. Gone are the days when new technology is seen as just a project for the information systems department. Business managers and governance personnel (e.g., HR, Legal, Ethics and Compliance) should also be involved.  

The committee should define, for example, the process for designing and implementing an AI tool; the roles and responsibilities of each cross-functional group; the questions each group should ask and answer at each step of the process; the kinds of output that would be out of line with the technology’s intended use; the protocol for removing the tool from use; and the backup plan if it can’t be redeployed. Companies that pause at the beginning to set these parameters will save themselves time and money in the long run and avoid, among other things, ethical missteps that can cause them to lose trust with their stakeholders. 

These principles and more are discussed in the book Trustworthy AI by Beena Ammanath, and we can now see how they apply to the current (and fast-evolving) state of generative AI.  The application of these principles to ChatGPT also highlights many of the issues that companies will need to address as they move forward.   

Leaders should embrace this new world of human-technology hybrid work responsibly rather than banning it outright. After all, putting the genie back in the bottle has never been a winning strategy – especially when it comes to technology advancements. The corporate world should harness the opportunity to show the rest of society what ethical development and use of AI actually looks like.   


Cindy Moehring is the founder and executive chair of the Business Integrity Leadership Initiative at the Sam M. Walton College of Business at the University of Arkansas. She recently retired from Walmart after 20 years, where she served as senior vice president, Global Chief Ethics Officer, and senior vice president, U.S. Chief Ethics and Compliance Officer.