
Season 1, Episode 9: Use Explainable Technology

Cindy Moehring and Matt Waller
July 22, 2020


In the age of technology and artificial intelligence, it's more important than ever to ensure that algorithms are transparent, explainable, and understandable. Technology can't work without humans, human programming, and human intervention. This episode gives practical tips on how algorithms can be used effectively and ethically. Your resource for practical business ethics tips, from the Business Integrity Leadership Initiative at the Sam M. Walton College of Business.


Episode Transcript

[music]

00:03 Cindy Moehring: Welcome to this edition of The BIS, The Business Integrity School, your resource for practical tips from a business ethics pro who's been there. I'm Cindy Moehring, the Founder and Executive Chair of The Business Integrity Leadership Initiative at the Sam M. Walton College of Business. Joining me today is Dr. Matt Waller, Dean of the Walton College.

00:22 Matt Waller: Business principle number six is that algorithms need to be transparent, explainable, understandable.

00:28 Cindy Moehring: Yes, right.

00:29 Matt Waller: This is a new one...

00:30 Cindy Moehring: It is.

00:32 Matt Waller: And we can see why it's so important today because we've already seen problems with it.

00:38 Cindy Moehring: Yes, yes, we have seen problems. Most recently, and I think most dramatically, we saw it in the loss of a human life in 2018, when an Uber self-driving car hit and killed a woman in Arizona, and they had to go back and figure out why a self-driving car would do that. Again, go back to the basic principle: cars are supposed to be safe, like planes are supposed to be safe, as we talked about before. So somebody violated another business ethics principle by not making sure they lived up to the terms of their contract to build a safe car, but in this case, it had to do with the way the artificial intelligence was programmed.

01:18 Cindy Moehring: So when they went back to look at why this car actually hit a human and why it wasn't safe, they found it was because the algorithm in the car had not been programmed to recognize a human form on the road unless the human was in a crosswalk. And in this case in Arizona, the woman who was hit was jaywalking. I've jaywalked. [laughter] I would hate to think that my life is in danger from some self-driving car on the road because I didn't stay within the four corners of the crosswalk, sorry, but...

01:47 Matt Waller: Well, I have to say, I always jaywalk, so if I knew there were a lot of self-driving cars around, I would be scared.

01:52 Cindy Moehring: I'd be very careful about it now, yeah.

01:55 Matt Waller: This has happened in other ways too. There's evidence that algorithm-based credit extension, determining how much credit to give someone, can be biased. And there are examples where men were given a lot more credit than women, in some cases within the same household, and often with the same wealth and other comparable variables.

02:21 Cindy Moehring: Yeah. And so the companies behind those algorithms, whether they're tech companies or banks extending the credit, have got to be able to explain in a non-discriminatory way how the result was obtained, and if they can't, then you've got a real problem.
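In practice, explaining "how the result was obtained" often comes down to producing plain-language reason codes from the model. A minimal sketch of that idea for a simple linear scoring model follows; the feature names, weights, and applicant values are hypothetical, invented for illustration rather than drawn from any real lender or from the episode.

```python
# A minimal "reason code" sketch for one applicant under a simple linear scoring
# model. Feature names, weights, and values are hypothetical, for illustration only.
weights = {"income": 0.004, "debt": -0.010, "years_of_credit_history": 0.050}
applicant = {"income": 48_000, "debt": 22_000, "years_of_credit_history": 6}

# In a linear model each feature's contribution is simply weight * value,
# which is what makes the decision straightforward to explain to the applicant.
contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score: {score:.1f}")
for feature, contribution in sorted(contributions.items(), key=lambda kv: kv[1]):
    direction = "lowered" if contribution < 0 else "raised"
    print(f"  {feature} = {applicant[feature]:,} {direction} the score by {abs(contribution):.1f}")
```

Because a linear model's score is just a sum of per-feature contributions, each contribution doubles as the explanation. A protected attribute such as gender never appears as an input here, and an explainability review would also ask whether any remaining input quietly stands in for one.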

02:39 Matt Waller: This is very interesting, because a lot of these artificial intelligence, neural network, and pattern recognition technologies are taking all this data and finding patterns.

02:55 Cindy Moehring: Right.

02:56 Matt Waller: There are two issues with this. Sometimes they're finding randomness, and so not only could it hurt people, but it could hurt the company. In other words, it's not gonna be repeated in the future. What they've picked up is not logical. It's randomness.

03:17 Cindy Moehring: Right.

03:18 Matt Waller: But it looks logical for some reason.

03:20 Cindy Moehring: Yes.

03:21 Matt Waller: And if you find a pattern and then fit it, and then re-apply it to the old data, it might look like it fits real well, but that doesn't mean it will be a good predictor of the future.

03:34 Cindy Moehring: A perfect example of that: there was a software company reviewing resumes whose algorithm had picked up one of those illogical patterns, where it recognized the name Jared and playing field hockey as two indicators of high performance. Makes no sense, right? So that had to have been an outlier for it... [laughter]

03:52 Matt Waller: I guess so.

03:53 Cindy Moehring: In a place that doesn't make any sense. Like, why would you want to apply that going forward...

03:56 Matt Waller: Yeah.

03:56 Cindy Moehring: And look for resumes from somebody who's named Jared, and whether or not they played field hockey, as a determinant of high performance? That's one example.

04:03 Matt Waller: So that's just taking data and trying to fit a model to it, and then using that model for the future.

04:10 Cindy Moehring: Right.

04:10 Matt Waller: But even in that case, if you ever use these models, you have to determine when the algorithm stops fitting the data. It takes a long time to fit the data, and there are different methods for fitting it. So human decisions are still involved in this.

04:34 Cindy Moehring: Humans... Yes, definitely.
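What Waller describes, a model that fits the old data well but has only captured randomness, is what modelers call overfitting, and the standard check is to score the model on data it never saw. Here is a minimal Python sketch of the idea, using deliberately meaningless, made-up noise as the "historical" data:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Historical" data that is pure noise: the inputs carry no real signal at all.
x_old = rng.uniform(0, 10, size=20)
y_old = rng.normal(0, 1, size=20)

# A very flexible model (a high-degree polynomial) will "find" a pattern anyway.
coeffs = np.polyfit(x_old, y_old, deg=9)

def mse(y, y_hat):
    return float(np.mean((y - y_hat) ** 2))

# Fresh data from the same process: the discovered "pattern" does not repeat.
x_new = rng.uniform(0, 10, size=20)
y_new = rng.normal(0, 1, size=20)

print("error on the data it was fit to:", round(mse(y_old, np.polyval(coeffs, x_old)), 3))  # small
print("error on data it has never seen:", round(mse(y_new, np.polyval(coeffs, x_new)), 3))  # typically far larger
```

The gap between the two errors is the tell: the "pattern" lived only in the particular sample the model was fit to.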

04:36 Matt Waller: But I think... I'll tell you what really surprises me about this. This explains why theory needs to be used. Again, theory describes, explains, and predicts a phenomenon. So when these companies are simply taking a bunch of data and fitting it, they might be fitting bias and then perpetuating that bias in the future, or they might be fitting randomness and projecting that into the future. In either case, it's not better for people, and it's not better for the country, right?

05:08 Cindy Moehring: No, no.

05:09 Matt Waller: So that's why theory matters. They need to be able to say, for example: we would expect that people with higher incomes and lower debt will have better credit, because they have more money to pay it back with, etcetera. You can create a theoretical framework and then develop the model from it.

05:28 Cindy Moehring: That's right.

05:28 Matt Waller: And I actually think that that is needed in this kind of a situation.

05:35 Cindy Moehring: Yeah, because otherwise, particularly if it's HR or personnel information you're feeding in, that data set is naturally going to have some bias, unless your employee base is already perfect from a diversity perspective, right? And that isn't bias you want to maintain going forward; you want to take that bias out. So you've got to inject theory and take out the human bias, or the data sets that aren't reflecting what you want reflected in the future, in order to get it right. And the reason that's really important is the deep machine learning that goes on. It learns from the data sets it has, and then it can go a gajillion times faster than our brains can possibly work. So the effects of a discriminatory or unsafe algorithm all of a sudden proliferate and become quite large.
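One concrete way to apply the theory-first approach the hosts describe is to write the expected relationships down before fitting, then check the fitted model against them. The sketch below does that with a logistic regression; the feature names, expected signs, and synthetic data are illustrative assumptions standing in for a real credit data set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Theory written down *before* fitting: the expected direction of each feature's
# effect on creditworthiness. Names and signs here are illustrative assumptions.
expected_sign = {"income": +1, "debt": -1, "years_employed": +1}

# Synthetic stand-in data, generated to be roughly consistent with that theory.
n = 2000
income = rng.normal(60, 15, n)
debt = rng.normal(20, 8, n)
years_employed = rng.normal(8, 4, n)
latent = 0.05 * income - 0.08 * debt + 0.10 * years_employed + rng.normal(0, 1, n)
approved = (latent > np.median(latent)).astype(int)

X = np.column_stack([income, debt, years_employed])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Audit step: does the fitted model agree with the theoretical framework?
for name, coef in zip(expected_sign, model.coef_[0]):
    ok = np.sign(coef) == expected_sign[name]
    print(f"{name:>15}: coef={coef:+.3f} expected sign={expected_sign[name]:+d} -> {'OK' if ok else 'REVIEW'}")
```

A coefficient whose sign contradicts the theory isn't automatically wrong, but it is exactly the kind of result that should be explained, and checked for bias or randomness, before the model is deployed.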

06:32 Matt Waller: It's interesting that companies like Google, Amazon, and some others are hiring lots of economists with advanced degrees...

06:42 Cindy Moehring: Yes.

06:42 Matt Waller: To do their data analytics, because you could take someone who's really good at computer programming and statistics, but they're not necessarily going to have the theoretical framework to do the best job of creating a model. You do need models, but you need people who can figure the models out. Even at the Walton College, with this new Master of Science in Economic Analytics degree we've created, part of the reasoning behind it is that you do need strong data analytics skills, but they've got to be backed up with theory.

07:23 Cindy Moehring: Yeah, and we may even see new careers in the future; I've seen people refer to roles like algorithm bias auditors, who actually go in after the algorithm has been created. You've got to monitor it to make sure it's operating the way it was designed, so you have to audit it, and there could be new jobs like that in the future.

07:48 Matt Waller: All models should be audited as they're applied.

07:53 Cindy Moehring: They should, they should.

07:54 Matt Waller: But I think a lot of times they're audited for things like forecast accuracy or something like that, whereas this is...

08:00 Cindy Moehring: Right, we're talking about ethics: auditing it to make sure it isn't having a discriminatory effect and that it is in fact safe. We talked earlier about the Edelman Trust Barometer report that came out in 2020, and there's a finding in that report related to this that I found quite interesting. I think people are a little afraid that machines are going to take over the world, and over 80 percent of the folks Edelman talked to for this year's report said they wanted to hear from their CEOs on the topic of ethical AI. They really wanted to know that the technology being employed was being used in an ethical way. So I think business leaders owe it to their employee base to be in tune with their employees and understand that they actually care about this issue and want it explained to them, just like the public wants an algorithm to be explainable when it makes decisions.
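One concrete form an algorithm bias audit can take is comparing favorable-outcome rates across groups, for example against the four-fifths (80 percent) rule long used in U.S. employment-selection guidance as a screening threshold for disparate impact. A minimal sketch with made-up counts:

```python
# One common bias-audit check: compare favorable-outcome rates across groups and
# apply the four-fifths rule as a screening threshold. Counts are made up.
outcomes = {
    "group_a": {"favorable": 180, "total": 400},
    "group_b": {"favorable": 120, "total": 400},
}

rates = {group: v["favorable"] / v["total"] for group, v in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "OK" if ratio >= 0.8 else "POSSIBLE DISPARATE IMPACT - review"
    print(f"{group}: rate={rate:.1%}  ratio vs. highest={ratio:.2f}  -> {flag}")
```

Falling below the threshold doesn't prove discrimination, but it flags the model for exactly the kind of human review the hosts are calling for.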

09:03 Matt Waller: So, what are some practical tips for this particular business principle?

09:09 Cindy Moehring: Yeah, I think there are a couple of practical tips that make sense here. The first is having a product-design, agile mindset when the algorithm is being created. It would be wrong for companies to think of this as just an engineering project or just an information systems project. You need to pull together a scrum team, with that product-design agile mindset, that includes people from all of the different departments who can make sure, from an ethical perspective, that the AI is being programmed the right way. The second practical tip is setting governance for the process: who's going to be involved, how you create it so that you test it before you roll it out, so you don't hit a pedestrian who happens to be jaywalking, like Uber did, and how you monitor it afterwards. Be very clear about what the governance for that process is going to be, and then you set yourself up for success in the way it's actually built.

10:06 Cindy Moehring: Thanks for listening to today's episode of The BIS, the Business Integrity School. You can find us on YouTube, Google, SoundCloud, iTunes, or wherever you find your podcasts. Be sure to subscribe and rate us. You can find us by searching The BIS, that's one word: T-H-E-B-I-S. Tune in next time for more practical tips from a pro.
