5 Practical Tips for the Ethical Use of Artificial Intelligence

July 16, 2020 | By Cindy Moehring

Technology's ever-expanding presence in our lives makes its ethical use more important than ever.

Artificial intelligence powers everything from self-driving cars to resume screening to credit card decisions. If the technology isn’t designed to function ethically, then fundamental problems such as discrimination, privacy violations and safety failures could get much worse.

A common belief is that artificial intelligence can make better decisions than humans because objective, unbiased machines can crunch large, complex data sets faster than any person. Human decision-making, on the other hand, is full of conscious and unconscious bias that skews results.

Algorithms, however, start from existing data sets created by humans, and those data sets are often already problematic. When bias based on race or sex is introduced at the start of the process, even unknowingly and unintentionally, the results are flawed because they rest on incorrect, irrelevant or missing information.

Machine learning, in which the machine teaches itself from its data, can amplify these problems exponentially. The result can be large-scale and serious unintended consequences for stakeholders -- employees, customers and even communities. All of this, in the end, erodes trust in you and your company, and that hurts the bottom line.

Let’s take a look at three recent examples of what can go wrong if ethical AI is not employed.

First, in 2018 a woman in Arizona died after she was hit by an Uber self-driving car. Engineers investigating the crash explained that the autonomous car wasn’t programmed to recognize a pedestrian on the road outside of a crosswalk.

That this variable wasn’t accounted for in the design of the AI was a deadly mistake. The “what if someone jaywalks” scenario should have been raised with engineers during testing, so the gap could have been corrected before the car was ever put on the road.

Second, AI resume-scanning software can inadvertently inject bias or flag irrelevant data as indicators of high performance.

One employer that was focused on reducing turnover found that employees living closer to the office stayed with the company longer. But screening based on distance from the office turned out to be a proxy for race, which resulted in a lack of diversity.

Another employer found that a vendor had built a resume-screening tool that tagged being named “Jared” and playing high school lacrosse as factors predicting success. The system didn’t have enough data to learn from to be accurate and unbiased, so the employer ended up not using it.

Third, Apple recently faced backlash over, at the very least, perceptions that its new Apple credit card discriminates against women by offering them lower lines of credit than men.

The company now faces regulatory inquiries, and potentially protracted litigation, over whether this is true. Apple’s response, unfortunately, added to the confusion because, according to news reports, no one from the company could adequately explain how the algorithm worked or how the results were reached. Apple’s financial partner for the card, Goldman Sachs, attempted to help by noting that the algorithm didn’t even use gender as an input.

That explanation overlooks the fact that other inputs the algorithm did consider could have served as proxies for sex.
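
Teams can test for this kind of hidden linkage before a feature ever reaches a model. Below is a minimal sketch in Python, assuming a tabular applicant data set; the column names ("distance_to_office", "race"), the synthetic numbers and the 0.5 threshold are hypothetical illustrations, not details from any system described above. It uses the correlation ratio (eta squared) to estimate how much of a feature's variation a protected attribute explains.

    import pandas as pd

    def proxy_score(df: pd.DataFrame, feature: str, protected: str) -> float:
        """Correlation ratio (eta squared): the share of the feature's variance
        explained by membership in the protected groups.
        0 means no link; 1 means the feature is a perfect proxy."""
        overall_var = df[feature].var(ddof=0)
        if overall_var == 0:
            return 0.0
        group_means = df.groupby(protected)[feature].transform("mean")
        between_var = ((group_means - df[feature].mean()) ** 2).mean()
        return between_var / overall_var

    # Synthetic example: does commute distance encode race in this applicant pool?
    applicants = pd.DataFrame({
        "distance_to_office": [2.1, 3.4, 18.0, 22.5, 2.8, 19.7],
        "race": ["A", "A", "B", "B", "A", "B"],
    })
    score = proxy_score(applicants, "distance_to_office", "race")
    if score > 0.5:  # the threshold is a policy choice, not a statistical law
        print(f"Potential proxy (eta^2 = {score:.2f}); review before use.")

The point is when the check runs: before deployment, as part of feature review, rather than after customers or regulators notice the pattern.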

The Berkman Klein Center for Internet & Society at Harvard recently released a report that provides a guide and maps consensus in ethical and rights-based approaches to principles for AI. Eight themes emerged from the research, including the importance of “explainability,” privacy, accountability, safety, non-discrimination and human control. One theme – “explainability” – was especially noted for its connectivity to many of the other themes because it stood as the “prerequisite for ascertaining that [such other] principles are observed.” Some experts have even noted there may be new jobs in the future, such as “explainability” engineers and AI algorithm-bias auditors.

It is far better to play offense on an issue such as ethical AI and be able to explain how it works up front than to play defense on the use of a technology that is still somewhat unfamiliar to people. If trust isn’t established at the outset, the perception that your company is acting unethically can hurt you as much as the reality.

Companies must innovate to advance. So, what can you do to ensure you innovate in a trustworthy way? Here are five practical tips you can employ to assure your stakeholders, including your board of directors, that you are using AI ethically.

  1. Know where and how AI is being used in your organization. Map it out so that you have a log of it. Review it and update it frequently.
  2. Ensure your use of AI aligns with your company’s values. Create an ethical framework for discussing it with your stakeholders, and use common language across all levels of the company to help align purpose and protect data integrity among internal and external stakeholders.
  3. Recognize that creating ethical AI isn’t just an engineering or information systems project. When deciding to use AI and machine-learning in your business, have a cross-section of departments involved in the process, including human resources, legal and compliance. These groups are necessary to ensure AI is checked for bias, completeness and accuracy, among other things; a minimal example of one such bias check is sketched after this list. These checks can help prevent AI from raising the company’s risk profile to an unknown or unacceptable level.
  4. Set governance standards and guardrails for the entire process – creating the AI algorithm, testing it, monitoring it and auditing it. This allows for calibration, or re-calibration, where necessary to make sure the algorithm remains aligned with the intended objectives and the company’s values.
  5. Assign responsibility and accountability to individuals for correcting any unintended consequences discovered during the process. Fail fast and move forward. Technological innovation is the name of the game, and the winning formula is to innovate with integrity.
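
To make tip 3 concrete, here is one simple bias check a cross-functional team could run on a model’s decisions: a disparate-impact ratio, the comparison behind the “four-fifths rule” used in U.S. employment-selection guidance. This is a hedged sketch in Python; the column names and numbers are invented for illustration, and a real audit would look at far more than one metric.

    import pandas as pd

    def disparate_impact_ratio(df: pd.DataFrame, outcome: str, group: str) -> float:
        """Ratio of the lowest group selection rate to the highest.
        Values below 0.8 conventionally trigger a closer review."""
        rates = df.groupby(group)[outcome].mean()
        return rates.min() / rates.max()

    # Synthetic screening decisions: 6 of 10 men selected vs. 3 of 10 women.
    decisions = pd.DataFrame({
        "group": ["men"] * 10 + ["women"] * 10,
        "selected": [1, 1, 1, 1, 1, 1, 0, 0, 0, 0] + [1, 1, 1, 0, 0, 0, 0, 0, 0, 0],
    })
    ratio = disparate_impact_ratio(decisions, "selected", "group")
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50 -> flag for review

A passing ratio doesn’t prove fairness, and a failing one doesn’t prove discrimination; its value is as an early-warning trigger that routes the algorithm back into the governance process described in tip 4.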

Post Author:

Cindy Moehring is the founder and executive chair of the Business Integrity Leadership Initiative at the Sam M. Walton College of Business at the University of Arkansas. She recently retired from Walmart after 20 years, where she served as senior vice president, Global Chief Ethics Officer, and senior vice president, U.S. Chief Ethics and Compliance Officer.