
Season 5, Episode 10: Exploring Ethical Dilemmas of Emerging Technologies with Wendell Wallach

March 31, 2022  |  By Cindy Moehring


Wendell Wallach, renowned bioethicist, author, and Senior Fellow at the Carnegie Council, joins Cindy Moehring to discuss trending ethical concerns of emerging technologies, including AI, bioethics, and the metaverse. The conversation covers the morality of machines, big data, scientific advances brought on by AI, autonomous machines, and the future of the metaverse.


Episode Transcript

Cindy Moehring  0:03  
Hi, everyone. I'm Cindy Moehring, the founder and Executive Chair of the Business Integrity Leadership Initiative at the Sam M. Walton College of Business, and this is theBIS, the Business Integrity School podcast. Here we talk about applying ethics, integrity and courageous leadership in business, education, and most importantly, your life today. I've had nearly 30 years of real-world experience as a senior executive. So if you're looking for practical tips from a business pro who's been there, then this is the podcast for you. Welcome. Let's get started.

Cindy Moehring  0:41  
Hi, everybody, and welcome back to another episode of theBIS, the Business Integrity School. And as you know, we are in season five discussing all things tech ethics and emerging tech. We have a really wonderful special guest with us today, Wendell Wallach. Wendell, hi, how are you? 

Wendell Wallach  0:59  
Hi Cindy. Thanks for having me. 

Cindy Moehring  1:01  
Absolutely. You guys, you're gonna be blown away when I tell you about Wendell and his background, and you are going to love this episode. Wendell is a recognized expert on the ethical concerns posed by emerging technologies, particularly AI and neuroscience. But listen to some of the things that Wendell has done. He's currently a senior fellow at the Carnegie Council for Ethics in International Affairs, where he co-directs the AI and Equality Initiative, super important for what we're going to be talking about today. He's also a senior advisor to the Hastings Center and a scholar at the Yale University Interdisciplinary Center for Bioethics, where he chaired technology and ethics studies for 11 years. He's written a couple of really cool books that we're going to talk about in our podcast. And he also received the World Technology Award for Ethics in 2014. The World Economic Forum appointed Wendell co-chair of its Global Future Council on Technology, Values and Policy for the 2016 to 2018 term, and he currently serves as a member of the AI Council. So with that kind of a storied background, Wendell, my goodness, I feel like we could probably talk all day.

Wendell Wallach  2:14  
We could. That would bore your listeners.

Cindy Moehring  2:21  
I don't know. I don't know about that. It seems like everyone is so interested these days in AI, emerging technologies and what it means for the world. But before we get into all that, can you just tell the audience, Wendell, a little bit about yourself and how you got so interested in this topic and kind of ended up where you are today?

Wendell Wallach  2:42  
Sure, I would be happy to. I've had a very idiosyncratic career, and to give you a sense of the path that brought me to the ethics of emerging technologies, and how so many different factors in my life converged on that path, it's probably going to force me to go into what's ancient history for many of your listeners. So I'm very much a product of the 1960s. Actually, I don't even think of myself as just a product of it, because I perhaps played leading roles in a number of different movements: in civil rights, in the protests against the war in Vietnam, in early attention to climate change and other concerns. I had particular interests in meditation and introspective practices, and spent a couple of years in the 1970s in India. As I sort of emerged out of that and started looking for a career, I fell into the microcomputer industry, largely because I wanted to get some books out of my head. I hated typewriters and I hated white-out, and word processors were just coming into being and becoming affordable for the average citizen. Well, my timing was impeccable again: I was in the microcomputer industry about six months before the IBM PC came along. And as that took place, and people caught on to the importance of computers, I, knowing very little, was already an expert, because there weren't enough people around to talk about these things. So I ended up starting a few computer consulting companies. But around 2001, I realized that I'd never gotten these books out of my head. That had been my intention in getting into microcomputers in the first place. And I also realized that I had lost touch with a lot of what had developed in our understanding of human psychology. Many of you would not know this, but human psychology in the 1950s and 60s was dominated by a trend called behaviorism, which largely said we couldn't know what goes on in the head; it's a black box. But the cognitive sciences came along in the 1970s and said, well, no, that's not totally true. Neuroscience, computer science, and a lot of different trends had us focusing on trying to understand what was going on in human psychology. And that, of course, converged with what had become my approach to introspective practices and meditation, which I affectionately referred to as first-person cognitive science: trying to understand how the mind worked. But when I was dropping out of running my microcomputer companies, I realized that a lot had happened in the cognitive sciences that I did not understand, and therefore got involved with Yale University and its bioethics center. One thing led to another, and I was invited to the first meeting of what was called the technology and ethics study group. Years later, I inherited the chairmanship of that.

Cindy Moehring  6:06  
Moves fast!

Wendell Wallach  6:08  
It moves fast. Suddenly, there was this plethora of different subject areas, so much that even though I've always been what I call a transdisciplinary scholar, it was too much to get a handle on. So I tried to focus on a little bit of it, and fell into writing a few articles that led to the first book, which was Moral Machines: Teaching Robots Right from Wrong. Many of you may be aware that Isaac Asimov had sort of changed the course of science fiction by suggesting there could actually be such a thing as a good robot, if robots were programmed with his three laws. But what was happening in the early 2000s was that people were actually reflecting on, well, could we actually have moral robots? Could we actually have computational systems that could make ethical decisions? Could they be sensitive to ethical considerations? And Moral Machines became the first book that mapped that new field of interest. So that started with focusing on whether machines can make moral decisions, but obviously, I was also very engaged in looking at ethical issues that were coming up in nanotechnology, biotechnologies, geoengineering and neuroscience, a vast array of issues. So this was a very transdisciplinary period. And it made me one of what was probably only a few dozen, or maybe 100 or 150, people in the world who cared about all of this. So here we are, 20 years later.

Cindy Moehring  7:43  
Here we are!

Wendell Wallach  7:45  
Suddenly there are tens of thousands of people declaring themselves experts in just AI ethics alone. People are focusing on so many specific issues: algorithmic biases, lethal autonomous weapons, the use of big data and the use of data to manipulate us, surveillance technologies. So suddenly there is this vast interest in at least the AI aspect of emerging technologies. And I think we'll see an explosive interest in biotech also in coming years.

Cindy Moehring  8:25  
Ah, okay. Do you think that machines can actually be more moral than humans? Is that possible? I mean, machines don't have emotions.

Wendell Wallach  8:36  
Well, this is really a great debate out there. Can machines be more moral than humans? Will they be more intelligent than us in all respects? And if that's the case, then what's going to happen? Is there going to be an intelligence explosion, where we humans are sort of insignificant in comparison to the machines? What would that mean in terms of geopolitics? This has been going on as a topic within science fiction now for decades, and there are a lot of movies around that. But I was looking at a very explicit question in terms of sensitivity to moral considerations and the ability to make moral decisions. That's where it gets really tricky. In fact, it is quite arguable that moral decisions are among the most difficult things that we humans do with our cognition. But there are also arguments that we don't do it very well. And some of those arguments are embedded in some of the new cognitive sciences, which point out how we are bad decision makers in certain respects. We have what are called cognitive biases, which are errors we tend to make consistently; for example, we're very bad at statistics. When most people think about biases, they think more about prejudices, but cognitive biases go beyond prejudices to look at errors we make. So looking at that, some people have complained, well, being better decision makers than humans is actually a very low bar.

Cindy Moehring  10:17  
Interesting. 

Wendell Wallach  10:19  
But I go in sort of the opposite direction. I think when you really look in depth at what is involved, at least in moral decision making, it's not simple. It's not just a simple thing of having better reasoning capacity or having more information. It really requires all kinds of sensibilities that we don't really know how to build into machines at this point. And if they don't have those sensibilities, well, they may be smarter than us in some respects, but they won't be cognizant of some of the more critical issues that we want factored into our ethical decision making. Now that said, that's more about whether we're going to have machines that are superior to us in ethics in all respects. When we wrote Moral Machines, we weren't interested in just that; we were also thinking about the near term. We were thinking about machines that simply function in environments where perhaps they encounter situations that their designers or programmers did not know they were going to encounter. Would they be able to be sensitive to the moral considerations that came into those spaces? How would they make decisions? Could they have what you might call moral algorithms that would help them in making those decisions? So we were looking at the near term, not the far term. And in fact, I sometimes get a little impatient with those who say the far term is inevitable, and therefore that's the only thing we should be focusing on. I'm concerned with near-term issues, such as not necessarily knowing whether who you're talking to on your phone is a robot or a human; whether we should be weaponizing AI; whether we should have autonomous weapon systems that can target and destroy with little or no immediate human involvement. These are really near-term challenges. Corporations using their algorithms may be doing things that are really biased or insensitive to the individuals they are interacting with, or may be trying to manipulate them for either marketing or political purposes.

Cindy Moehring  12:46  
But are there advancements that have occurred since you wrote the book that have shaped your opinions a little further at all, one way or the other?

Wendell Wallach  12:57  
Most of the challenges we point out have not been met yet. We've just seen baby steps toward trying to get computational systems to make moral decisions, and we might talk about that as we get further along. The big change was what is called the machine learning, or deep learning, revolution. And that really took place, well, it's just a question of what point you point to, but roughly around 2015 it became evident how important this was going to be. The fact is, we anticipated that in the book. Even the deep learning revolution, the machine learning revolution that took place in that period, had already been speculated about; in fact, it had created a hotbed of activity about 25 years earlier, when there was this focus on neural networks, a kind of architecture for computers that didn't focus on just one central processing unit, but tried to bring a lot of computing units together, each one simulating the activity of an individual neuron. This kind of architecture created great enthusiasm in the 1980s, and a belief that we were going to see massive breakthroughs in artificial intelligence, sometimes referred to as an AI summer, which was then followed by an AI winter when those successes really didn't occur. And the problem was really that we did not have the computing power, and we did not have massive databases that could be analyzed for important correlations. We did not have the capacity to do the high level of statistical learning that became central to the deep learning revolution. Many computer scientists surrendered the field; they left it, but a few stayed behind. And they gave birth to a new architecture once the power was there and the databases started to materialize, where you could have machines that would look at massive quantities of information and discover fascinating correlations within them. This is a rudimentary form of learning. It's not quite the kind of learning that your child goes through, or that you went through; these are specific kinds of learning. But they have created an explosive advance in artificial intelligence, an explosion of interest, and all kinds of applications, some of which have beaten the world's best chess players and the world's best Go players, and more recently, an application called AlphaFold, which has solved one of the biggest problems in bioscience.
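What follows is an editorial aside rather than anything from the episode: a minimal sketch, in Python with numpy, of the kind of architecture Wallach describes, many simple computing units whose connection weights are adjusted until the network fits the statistical pattern in its data. The toy task (the XOR function), the layer sizes, and the learning rate are all illustrative assumptions.

    # A tiny two-layer neural network trained by gradient descent.
    # Each "unit" is just a weighted sum passed through a squashing function,
    # loosely simulating the activity of an individual neuron.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: XOR, a pattern a single unit famously cannot learn.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4))   # weights into a hidden layer of 4 units
    b1 = np.zeros((1, 4))
    W2 = rng.normal(size=(4, 1))   # weights into a single output unit
    b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(20000):
        # Forward pass: units activate in response to their inputs.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: nudge every weight to reduce the prediction error.
        grad_out = (out - y) * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ grad_out
        b2 -= 0.5 * grad_out.sum(axis=0, keepdims=True)
        W1 -= 0.5 * X.T @ grad_h
        b1 -= 0.5 * grad_h.sum(axis=0, keepdims=True)

    print(out.round(2))   # typically converges toward [[0], [1], [1], [0]]

Nothing here is "deep" in the modern sense, but scaled up to millions of units and massive datasets, this same adjust-the-weights-to-fit-the-correlations loop is the statistical learning Wallach is pointing to.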

Cindy Moehring  15:54  
Tell us about that one a little bit? 

Wendell Wallach  15:55  
Sure, sure. Well, there is something called the protein folding problem. As many of your listeners will understand, we are built from proteins, and DNA is largely a code for the manufacture of proteins. But the difficulty is that proteins can get so large, have so many molecules in them, and can attach in so many various ways, that it was almost impossible to think through what their three-dimensional structure would be. That is the protein folding problem. Scientists had been attacking it for decades and not been very good at it. DeepMind, which is a division of Google, or was bought up by Google some years ago, had a program that got tremendous attention, AlphaGo, because it beat all the world's best Go players. Go is a game more difficult than chess, but it has been more popular in Asia than in the West. They came up with this program, AlphaFold, and for all practical purposes, it has solved the protein folding problem, showing the structure for tens of thousands of proteins. And that will lead to all kinds of breakthroughs in proteomics and healthcare and genomics over the next few decades. And that really all just got published in the last year, year and a half or so.

Cindy Moehring  17:26  
That's a big deal. I mean, that's certainly a way emerging technologies are being used for good.

Wendell Wallach  17:32  
And in some ways, this is the finding that truly justified the promise that artificial intelligence could make a positive difference. There's always this question about, well, is all the promise of artificial intelligence real? I mean, let us be clear, artificial intelligence is a toolkit of different applications. They can be used in every field of human endeavor, and are being used; it's not really one thing. For all the predictions about the positive benefits of AI, there have also been ongoing concerns about the negative effects of AI, and therefore about whether the positive would outweigh the negative.

Cindy Moehring  18:17  
This advancement certainly would be a tick on the side of good, right? Positive advancements.

Wendell Wallach  18:23  
Very positive, very positive. Yeah.

Cindy Moehring  18:26  
So let's explore the idea of autonomy a little more generally. And I'm very taken with the metaverse and the talk that's out there about it now, you know, avatars, alternative realities, and, you know, essentially it being promoted as a reality in which humans can interact through their avatars, and the avatars could end up being smarter than we are, make better decisions, they don't get tired, they'll last longer in meetings. So you can think, wow, a lot of advancements there. And I know you wrote an article about this recently in Fortune magazine. So I would love it if you could share with us a little bit about the thoughts you had in that article, and just talk about autonomy generally as it relates to this metaverse and avatars, and what you see as the possibilities and also the dangers.

Wendell Wallach  19:27  
Well, that's a great question, but there's actually so much in it that if you don't mind, I'm going to take it apart. 

Cindy Moehring  19:33  
Please! Please.

Wendell Wallach  19:36  
A lot of these things have a history, and they don't all converge on the metaverse, as the metaverse has not yet come to be. So the question of autonomy, and what kind of autonomy is good or acceptable, has been a central issue in AI systems and robotics, and now in the metaverse, in terms of what kinds of autonomy we should be embracing and when autonomy is potentially destructive. Autonomy is potentially destructive when you have an algorithm that can feed you the news that you want to see but give you a very imbalanced perspective on what is or is not true. The focus on autonomy probably started to really build up with two issues: one, self-driving cars, and the other, lethal autonomous weapons. We have been promised self-driving cars for over a decade, and though we have cars like Teslas that can be autonomous in certain situations, there are other environments in which we don't know how to ensure their safety. That's why you don't have full autonomy yet. But there's this issue that even if they're not perfect, will self-driving cars be safer than humans, so that perhaps we might accept them even if there are some situations in which they might kill somebody where an attentive driver would not? Simply because they can be attentive all the time, and we get distracted; that's a big reason for it. So through self-driving cars, and through lethal autonomous weapons, which are weapon systems that can pick their targets and destroy them with little or no immediate real-time human engagement, we've been looking at what might be the benefits or dangers of autonomous systems. And I've been involved in that discussion now for about 15 or 20 years, including testifying at the UN as to why we can't really predict the activity of autonomous weapons, and therefore they could be potentially very dangerous: even if it's not our intention to use them in destructive ways, they could take actions that we had not anticipated they might take. So that's the broad area of autonomy. And now we're getting into thinking about it much more deeply, through social media, through robots in the home: what should they be able to do? What shouldn't they be able to do? But a lot of that now is converging on what is being treated, at least by Facebook and Microsoft, as the next great paragon, the next step forward in computing, with what they are treating as the birth of the metaverse. It is not the birth of the metaverse. It's what corporate America has hoped for many years now would be the next stage in their capturing of our time, our activity, our engagement. The metaverse goes back to a book, a wonderful piece of science fiction by Neal Stephenson called Snow Crash. I recommend it to any of you who haven't read it, if you want to get into classic science fiction. Snow Crash, and another book called Neuromancer, are kind of the best of what became this renegade turn in science fiction over the last 20 or 30 years. But in any case, Neal Stephenson invented the metaverse fictionally.

Cindy Moehring  23:38  
There we go. Got it. 

Wendell Wallach  23:39  
He named it. And that so fascinated many people that they tried to create virtual environments within the internet. The first and perhaps most important of those is a universe called Second Life, where you could live through an avatar, a little cartoon character who represented you. And you could, you know, buy an island and build your dream house on it, or buy different outfits for your avatar, to interact. Second Life became a place where people would have corporate meetings; if you had a company with employees all over the world, understand this is all before the pandemic, they would meet virtually. So corporate America has understood that the metaverse looks like the next stage in the development of computers. They need us to buy into it. They need us to be so excited about it that we will buy into it. And both Mark Zuckerberg and Satya Nadella, who's the CEO of Microsoft, I don't want to conflate these two; they are very different companies, with very different core cultures, very different ethics, actually, but they both took the recent opportunity. It started with Zuckerberg facing criticism of Facebook. He said, well, let's just rename Facebook.

Cindy Moehring  23:39  
Yeah, right. Right. 

Wendell Wallach  23:39  
So he renamed it Meta, introducing the metaverse. And not to be outdone by him, Satya Nadella brought out what has been Microsoft's strategy for that. The part I criticized in my blog was that both of them played up how, acting through our avatars, cartoon images, we would improve human connectivity.

Cindy Moehring  25:43  
Yeah, I know! 

Wendell Wallach  25:44  
And that's what I challenged in the editorial that was published by Fortune magazine. I said, you know, even look at us right now. We have never been in the same physical space; we don't have that full, embodied sense of each other. But even just in our interaction now, there is tons of nonverbal communication going on. And a little cartoon character that gives limited emotional expression does not convey that. My critique was that, at least as far as connectivity, as far as human connection goes, that's not what we're going to get out of the metaverse. We might get exciting educational and entertainment opportunities. You might even have some experiences that make you reflect deeply on yourself. Most of us who have interacted through virtual environments know how there's a kind of disorientation that takes place when you come back to real life. It makes you also question to what extent real life is a simulated universe. I don't want to get into that too much, but it brings up a whole kind of philosophical reflection, which might lead to some insights and some self-understanding about who you are, why you've been the way you've been, and whether you really need to take on all the conditioning and habits that you were brought up with. So I'm not trying to damn the metaverse. But I am saying that it's exciting for corporate America, because if they can get us captured in the metaverse, they stand to make big bucks, largely because they will have more and more data about us, including biological data like our heart rate and our temperature. To really get into the metaverse, you need to don virtual goggles or a personal helmet; you need to don haptic gloves. And all of this is getting corporations more and more information about us that they can then sell to advertisers or promoters of different political or ethical viewpoints in their campaigns to capture our attention and direct our behavior.

Cindy Moehring  28:16  
Wow. So much there to unpack my goodness, we could talk all day just on that topic alone. 

Wendell Wallach  28:22  
But I think I should go one step further, if you don't mind. You were asking about the autonomy aspect of the metaverse, and it's not just the metaverse. Again, this has been coming up, and Stuart Russell has been particularly good on this. Stuart Russell wrote, together with a VP at Google, the textbook that nearly everyone who studies artificial intelligence learns from. And Stuart Russell said, well, we now all have these virtual assistants in our phones. I see my phone is not actually showing up well on the camera. But what if these virtual assistants would go out and autonomously perform all kinds of tasks for you? Such as, you say, you know, get me on the quickest flight to London. And then he started imagining rather frightening scenarios where one autonomous virtual assistant would collude with other virtual assistants, and they would perform these actions but with no understanding of the ethical implications of the way they were performing them. I'm not gonna get too dramatic with it, but I mean, there were some scenarios where, you know, one person would be killed so that you would have a seat on the plane, kind of.

Cindy Moehring  29:53  
Yeah, just thinking about how, if they were autonomous, they could work together in what we as humans would call unethical ways, right, to achieve the overall objective.

Wendell Wallach  30:06  
So this is the science fiction edge of all of that. But the real-time aspect of that, the near-term aspect of that, is: as we go into increasing autonomy, what kind of decision-making capabilities should we be giving machines? Do they actually have the intelligence to make the kinds of decisions that we might delegate to them? Will they be doing that in ways that will have biases, including, you know, real prejudices? Will they be doing that in ways that can be harmful? Will they factor in the kinds of information that we would expect at least an aware human being to factor into the choices and judgments they make? So it's this broader question about when and where it is appropriate to defer, or to abrogate your own responsibility and delegate it to a machine. I and others are saying that's much more serious than most of us are acknowledging.

Cindy Moehring  31:16  
Yeah. Which I think leads right into my next question, which is probably the subject that you discussed more in your latest book, titled A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control. So I'll just ask you the question, the short answer, the CliffsNotes version! How do we keep technology from slipping beyond our control? Is it wrestling with some of these hard questions that you just mentioned, and kind of making decisions on the front end?

Wendell Wallach  31:46  
It's wrestling with some of these hard questions, and it's through ethics and through engineering and through governance. So on engineering, for example: can we engineer real safety, real sensitivity to ethical considerations, into computer systems? Can we either eliminate algorithmic biases, or can we at least be aware of the biases that would be intrinsic to the machines? That would be the engineering side of it. For what we cannot engineer, can we develop ethical standards for which systems do and do not get deployed? So there's been this explosion of principles for AI ethics over the last few years, very timely. And what is just beginning to be developed is: can we put in place effective governance? Or do the corporations have so much power right now that we are all captive to what serves their interests? That's a national problem. But then the international problem is: can we put governance in place for technologies that will have an international impact? Can we put in place standards?

In writing A Dangerous Master, I set a task for myself. And that task was: how, in 100,000 words or less, could I share an understanding of the ethics and governance of emerging technologies so that you, meaning anybody on this podcast, any intelligent human being who cares about these issues, would know enough to engage in an informed way in the conversation and have informed attitudes, or at least know the basics so they could look deeper at these issues? But what happened is I started the book talking about what could go wrong. And there are so many things that could go wrong that I would say I scared the bejesus out of a large percentage of the readers of this book. Scared them in a way where some of them became anti-technology. I'm not anti-technology. But it's just a fair warning for any of you: be prepared to get exposed to all kinds of things going on in biotechnology and nanotechnology and geoengineering and neuroscience, a whole flock of fields, where not just small things but also big things could go wrong, which would disrupt our lives and the course of human history.

And that brought me more into what has been my more recent focus on governance, which is largely that, from where I sit, the destabilizing aspects of emerging technologies are becoming as big a threat as climate change to international security, to global security. In the old days, when we thought about security, we thought almost entirely about defense: how do we defend ourselves against the Soviet Union, for example. And now there are those who are trying to stoke up that same fire again by catalyzing an AI arms race.
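As a concrete illustration of what "at least being aware of the biases" can mean in practice, here is a minimal sketch, an editorial aside rather than anything from the episode, of a simple disparate-impact audit in Python. The decisions, group names, and the four-fifths threshold are all hypothetical and illustrative; real audits are far more involved.

    # Audit a set of automated decisions for disparate impact across groups.
    from collections import Counter

    # Hypothetical loan decisions: (applicant_group, approved?)
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    approved = Counter(group for group, ok in decisions if ok)
    total = Counter(group for group, _ in decisions)
    rates = {group: approved[group] / total[group] for group in total}
    print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

    # The "four-fifths rule" heuristic: flag the system if any group's
    # approval rate falls below 80% of the highest group's rate.
    worst, best = min(rates.values()), max(rates.values())
    if worst < 0.8 * best:
        print(f"Potential disparate impact: ratio {worst / best:.2f} < 0.80")

An audit like this doesn't eliminate bias, but it makes bias visible, which is the precondition for the ethical and governance responses Wallach describes.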

Cindy Moehring  35:20  
Okay, I've heard of that. Yes. 

Wendell Wallach  35:22  
With China. But what many of us are trying to say right now is that the biggest threats to international security are not necessarily military. And if we focus in an imbalanced way on military security, we undermine the capacity for the international cooperation that is going to be necessary to tackle climate change; to address the destabilizing effect of the militarization and weaponization of AI, biotech and other emerging fields of research; and to put in place appropriate standards for social media and other technologies that can be weaponized in ways that undermine the quality of life for all of us, and undermine a viable future for our children.

Cindy Moehring  36:23  
Right. But do you think, in this world we live in today, that there is room and a realistic possibility of creating that type of international governance, kind of the rules of the road that all would agree to abide by? Or is that more of a dream?

Wendell Wallach  36:41  
Well, that's kind of the $64,000 question. You really have to be as old as I am to get what that goes back to; one of the original quiz shows on American TV was The $64,000 Question. So it's now the 64 million or billion dollar question. I don't want to be too melodramatic here, but I would say if we can't, we're all damned. And when I say we're all damned, I'm not trying to talk about existential risks where humanity will disappear. But consider that the world population was 6 billion not that long ago, and it's bordering on 8 billion at the moment. For me, that's just a way of pointing out what I think we all understand anyway: we're in a very delicate moment in history. Either we respond to the challenges on the table, or we are going to witness horrific events. It may not be that you personally witness horrific events. I'm in this weird, luxurious position: I live, first of all, in New England, so if civil war breaks out, or the guns start coming out in certain states in America, it is less likely to happen in the Northeast. We are not experiencing the most dangerous impacts of global climate change. We have a fairly high standard of living. So I live in that part of this world that lives in luxury, that is privileged. But that's not true of half of the population, either in our country or in the world. And so the people who are going to be hurt are the same people who are always hurt by all of these policies. Just imagine: today we have 68 million people on the planet who are affected by extreme heat events. That is projected to be 1 billion people by 2050, and could be more if we stay on the course that we are on. Now think about how much resistance there is to immigration in nearly every developed country of the world. Add to that hundreds of millions more refugees, not from wars, but from climate change alone. How destabilizing that's going to be; how that's going to create warfare; how that's going to create tension within the more prosperous countries over whether to be benevolent or whether to turn inward and just protect our own societies. So we are in for a rather difficult period, which comes back to international cooperation. Either citizens and our leaders come to understand the dangers of what we're confronting and the necessity of putting in place new forms of international cooperation, or we're all going down that hellish road together. Some of us may be protected from it by privilege, but that doesn't alter the fact that you're going to witness it every night on TV, and you're going to feel the tensions that that precipitates. My concern at the moment is that there are aspects of corporate America and the tech industry that are stoking the fires of this AI arms race between the United States and China. And the question I have is, is this really necessary? You know, regardless of whatever critiques you have of China, or even China has of America, should we be looking over our shoulders and just ratcheting up tension step by step, because we make a new move based on what yesterday's move was by those on the other side of the planet?
So the tension that has always been in international governance is whether the need for international cooperation outweighs the destabilizing effect of militarization, and whether the militarization is necessary, whether the militarization itself is creating the very problem that it purports to be solving, whether it's actually creating or speeding up an arms race which would not happen without it. I'm not saying that the Chinese leadership doesn't assert ambitions that we find incorrect, perhaps even worthy of us going to war with them. But that's very different from creating a new cold war with China, which apparently some of the more conservative leaders of the tech industry have decided is in America's interest. And by the way, they're all going to get rich from the defense contracts and the industries that they own big pieces of.

Cindy Moehring  42:12  
Yeah, interesting. Well, Wendell, this has been a fascinating conversation. And I usually end with a question that, quite frankly, you've already answered, and more than once, which is: do you have any great recommendations for where somebody could go to continue their learning? So instead, I'm going to ask you a different question. You've been around this topic for a while, and you've had a very interesting career. So if you had a piece of advice that you would like to leave students with, primarily, what would that advice be?

Wendell Wallach  42:43  
To the best of your ability, try and develop a bit of self-understanding of what buttons you have and how they get pushed, because they're all going to get pushed by technology over the coming years, and the technology is going to be used by other people to kind of dictate to you what you believe and don't believe. The second part of that is, with self-understanding, listen to your own lights. I've lived a very idiosyncratic career, and yet, going through strange highways and byways, I have found myself in a rather fascinating seat at this point in my life. I know that it's hard sometimes to make choices that don't necessarily give you immediate security. But I would say follow your own lights. If you really want to do that, though, you may have to make some choices; you may have to forego, let's say, some of the securities and stabilities if you, like I did, find your way into fields before they were hot, and therefore before anybody would pay you for doing what you're doing. But you'll have a fascinating life. So I would just say, don't be too quick to jump into what today's opportunities are. Because if you look just a few years ahead, you'll see that so many opportunities are coming into being that you could be a leader in if you jumped on early. And the second part of that is, be a little careful, though. There are jobs, there are things you can do, that are blind alleys that are very difficult to get out of. So stay within fields where you become a lifelong learner, where your capacity to explore new realms, new forms of expression, new tasks is opened up for you.

Cindy Moehring  44:47  
Great advice, Wendell. You are just a wealth of information, and we actually could talk all day, but I need to let you go, so we will end it there. Thank you so much. You've given great recommendations for your own books and other books to read, given us a bit of history and a peek into the future, and helped us make sense of it all a bit when it comes to autonomy and morality and ethics, big questions, and governance that we all need to wrestle with. So thank you for all of that. I appreciate it very much.

Wendell Wallach  45:19  
Thank you for inviting me. 

Cindy Moehring  45:21  
All right. Bye. Bye. 

Wendell Wallach  45:23  
Bye now.

Cindy Moehring  45:28  
Thanks for listening to today's episode of theBIS the Business Integrity School. You can find us on YouTube, Google, SoundCloud, iTunes or wherever you find your podcasts. Be sure to subscribe and rate us and you can find us by searching theBIS. That's one word, the B I S, which stands for the Business Integrity School. Tune in next time for more practical tips from a pro.

Cindy Moehring is the founder and executive chair of the Business Integrity Leadership Initiative at the Sam M. Walton College of Business at the University of Arkansas. She recently retired from Walmart after 20 years, where she served as senior vice president, Global Chief Ethics Officer, and senior vice president, U.S. Chief Ethics and Compliance Officer.




