
Episode 101: Professor Amber Young Discusses Her Paper, “Avoiding an Oppressive Future of Machine Learning”

December 09, 2020  |  By Matt Waller



Amber Young is a Ph.D. program director and assistant professor of Information Systems at the University of Arkansas Sam M. Walton College of Business. Young recently wrote a theory paper titled “Avoiding an Oppressive Future of Machine Learning” for publication in MIS Quarterly. She sits down with host Matt Waller to discuss how her expertise in information systems and interest in using technology for social good led to her future-oriented research.

Episode Transcript

[music]

00:06 Matt Waller: Hi, I'm Matt Waller, Dean of the Sam M. Walton College of Business. Welcome to Be EPIC, the podcast where we explore excellence, professionalism, innovation, and collegiality, and what those values mean in business, education, and your life today.

[music]

00:28 Matt Waller: I have with me today Professor Amber Young, program director for a PhD program, and assistant professor in Information Systems. And today we're going to be talking about her paper, which is titled "Avoiding an Oppressive Future of Machine Learning." First of all, where is that published?

00:51 Amber Young: It's forthcoming in MIS Quarterly.

00:54 Matt Waller: Well, congratulations. That's a huge success.

00:58 Amber Young: Thank you.

01:00 Matt Waller: A lot of academic journal [inaudible] articles are titled in a way that's pretty boring and don't catch your attention, but your title really caught my attention when I saw it, "Avoiding an Oppressive Future of Machine Learning." That sounds scary, so what is this oppressive future we need to avoid? And what is the role of machine learning?

01:26 Amber Young: Oppression occurs when there's unjust control of a person's thoughts or actions. And oppression is not new. Oppression is this universal issue that all people and all scientific fields should care about. People have been oppressing one another for a long time, and groups of people oppress other groups of people, that isn't new, but the technologies that people are using to oppress one another are new, and technologies are making the process of oppressing people far more efficient and far more ubiquitous, which is really scary. And most of us are familiar with the book "Nineteen Eighty-Four." Yeah, we may not be to that point yet, but if we're honest, the current trajectory of surveillance and entanglement with technology has been heading in that direction for a long time. And, of course, it's...

02:17 Matt Waller: That is scary.

02:19 Amber Young: It is. And this pandemic has only accelerated surveillance as we try to stay safe and contact trace. The response to the Black Lives Matter movements and the movements in Hong Kong has been to increase surveillance and try to squash these protests. So, there's this inherent tension because tracking data can give us freedoms like the freedom to reopen safely and end pandemic lockdowns. The data can also be weaponized to take away our freedoms and our autonomy.

02:44 Matt Waller: Well, historically, governments have tried to use any technology they can to really invade the privacy of citizens, and so based on history, this would be particularly scary to think about because of the... Any government, whether it be ours or another, I suppose now they could even... It could be foreign governments as well, not even your own government. Is that right?

03:11 Amber Young: Definitely, definitely. And it's not just governments that want this data. It's also corporations who have political agendas and business strategies to pursue. They wanna do things like put people into marketing segments with surveillance capitalism. So, there are serious attempts by governments and corporations to measure many aspects of our life experiences. And people used to say, "Oh, I'm sure they aren't watching me. I'm not that interesting, or I'm not that significant." And, yes, if it took a great deal of effort to surveil and measure your experiences, maybe no one would bother, but now we have artificial intelligence, which means that there is relatively little cost to monitoring people, and measuring their experiences and behaviors, and then managing them by using that data in actionable ways.

04:00 Amber Young: For example, YouTube's machine learning recommendation system learned that when it recommends normal content and then slowly ratchets up to increasingly extreme content, users will spend more time getting sucked deeper and deeper into the rabbit hole, and this means more ad views and more revenue for YouTube. So, it is actually the norm for social media platforms to optimize on engagement outcomes rather than social good, which leads to all kinds of societal challenges, like fake news and conspiracy theories and propaganda, and machine learning is how they uniquely identify what types of content are likely to hook individuals. I'm vulnerable to one type of manipulation or persuasion, and you're vulnerable to another; a machine learning system can figure out what our vulnerabilities are.
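
To make "optimizing on engagement" concrete, here is a minimal, hypothetical sketch in Python. Nothing here comes from YouTube's actual system; it only illustrates how a recommender whose sole objective is predicted watch time will surface whatever content keeps people watching, with no term for social good.

```python
# Hypothetical sketch: a toy recommender that optimizes only on engagement.
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    extremeness: float             # 0.0 = mainstream, 1.0 = extreme (illustrative)
    predicted_watch_minutes: float

def recommend(candidates: list[Video]) -> Video:
    # The single objective is predicted watch time; nothing in the objective
    # penalizes extreme or misleading content.
    return max(candidates, key=lambda v: v.predicted_watch_minutes)

catalog = [
    Video("Mainstream news recap", 0.1, 4.0),
    Video("Provocative hot take", 0.6, 9.0),
    Video("Conspiracy deep dive", 0.9, 14.0),
]

print(recommend(catalog).title)    # prints "Conspiracy deep dive"
```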

04:49 Matt Waller: Well, when I think of machine learning, I think of statistical methods like regression, where they try to find a linear or non-linear relationship, in a predictive model, between independent variables and a dependent variable. Isn't machine learning doing something similar to that, trying to use independent variables? Or are they looking at time series patterns?

05:15 Amber Young: That's a great question. Machine learning is a type of artificial intelligence that relies on different algorithms, like neural networks, rather than being explicitly programmed using code. It's a so-called smart system. We would say that it thinks rather than executes commands, and this thinking process is complicated for users to comprehend, because there are so many different algorithms involved. And we know what data is input, and then we can see what the outputs of the systems are, but the process in between is pretty blurry.
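
As a rough illustration of that black-box point, here is a small sketch, assuming scikit-learn is available: we can see the data going in and the predictions coming out, but the learned weights in between are just arrays of numbers that don't explain why any particular prediction was made.

```python
# Sketch of the black-box property using a tiny neural network (scikit-learn).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=0)  # inputs we can see

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict(X[:3]))        # outputs we can see
for layer, weights in enumerate(model.coefs_):
    # The process in between: learned weight matrices with no obvious
    # human-readable explanation of why variables are weighted as they are.
    print(f"layer {layer} weights shape: {weights.shape}")
```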

05:49 Matt Waller: The Walton College has started an initiative that we refer to as the Business Integrity Leadership Initiative, and the head of that is a woman named Cindy Moehring, and she retired from Walmart recently after 20 years, where she was Chief Ethics Officer of Walmart. And she wrote a blog on Walton Insights, our blog site, called "5 Practical Tips for Ethical Use of Artificial Intelligence." One of the things that she wrote in there is that you need to understand... Companies need to be able to explain what the artificial intelligence is doing, or machine learning, but it seems like that's really hard to do, isn't it?

06:41 Amber Young: Yes, it's really challenging. And increasingly, we're starting to think maybe it's not even possible to have complete human control and complete human understanding. It's certainly not very practical in most cases. And my team and I have identified four ways that machine learning systems are fundamentally problematic, and one of them relates to what you were saying about you need to understand them. And so that's the first one, is that there are issues with machine learning systems being a black box, that we can't easily see the model weights and how the variables are weighted or why some variables are given more weight than others.

07:25 Amber Young: Another way that machine learning systems are fundamentally problematic is that they learn from data acquired over time, and this backward-facing orientation is problematic because it trains machine learning systems to recreate current conditions and perpetuate existing problems rather than optimizing for a desired state. Another way machine learning systems are problematic is that they optimize on organizational outcomes for the companies that deploy them without factoring in the individual needs and the humanity of users. And then finally, machine learning systems often retain feedback from users in the form of trace digital data, which is collected indirectly. For example, keystroke data is often collected without user knowledge that data is being collected, what data is being collected, or how it's going to be used.
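
As a small, synthetic illustration of that backward-facing problem (hypothetical data, assuming scikit-learn), a model trained on biased historical decisions learns to reproduce those decisions rather than optimize for a fairer desired state:

```python
# Synthetic illustration: training on biased historical outcomes recreates the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=500)         # 0 or 1: a protected group label
skill = rng.normal(size=500)

# Historical decisions were biased: group 1 was never hired, regardless of skill.
hired = ((skill > 0) & (group == 0)).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates, differing only in group membership:
print(model.predict([[1.0, 0], [1.0, 1]]))   # typically [1 0]: the old pattern persists
```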

08:18 Matt Waller: Could you give an example of how machine learning systems can create challenges for an organization?

08:24 Amber Young: Yeah. Let me give you an example of a real world black box machine learning system, and then I'll contrast it with how a code-based system would operate. Some large corporations were using a black box marketing bot to purchase ad space online, and this marketing bot would crawl the web, all of the web, even the dark web, buying ad space from websites. And the bot used machine learning to optimize click-through rates and web page views for the corporations, and the marketing professionals using the bot could see those click-through rates and the page views, but they couldn't see where the ads were being placed or exactly how the bot was making decisions about where to buy ads, which turned out to be on child pornography sites. And as the bot learned that the metrics it was designed to optimize on were maximized on this type of site, it began seeking them out and buying more and more ad space.

09:20 Amber Young: So, these corporations, these marketing professionals were unknowingly purchasing advertising space to fund child exploitation, which is so, so horrible. And they were not trying to do the wrong thing, they were not trying to be a part of that. But here's the problem, we've gotten so caught up in asking, "Can we build that?" But we haven't asked, "Should we build that?" And if so, what are gonna be the consequences, and what kinds of protections and limitations do we need to build into the system? But even building protections and limitations into the systems has gotten more complicated, because we used to code systems. So, a human wrote code, telling the machine what to do, and it did it, though without optimizing strategically on the outcome variables of interest. And that's not how we make information systems anymore. Now systems are trained on data so that the algorithms can improve over time, but often these systems are trained on flawed data with bias so subtle that we can't even identify the bias.

10:24 Amber Young: And we can no longer just code in limitations for the system because we aren't coding them. So, we have to train the system to do the right thing, which requires us to foresee and train against anything that can go wrong. No one at a corporation would sit down and write code that buys ads on child exploitation sites. And even if someone did write that code, someone else in the organization would have noticed it when they were testing or debugging the code, and likewise, when YouTube started optimizing for engagement, it wasn't with the intent of polarizing people or fostering extremism. But this is just really challenging, and there's a lot that can go really wrong. How do we train a system to be decent, to recognize oppression, to protect the humanity of people? Those are really big questions and they are not easily answered, but we have to try.
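
For contrast, here is a hedged sketch (all names hypothetical) of the older, code-based kind of protection the discussion refers to: an explicit, human-readable rule that an ad-buying bot must pass before spending, which a colleague could read, test, and debug. A learned policy offers no such single line of code to inspect.

```python
# Hypothetical sketch of an explicitly coded limitation on an ad-buying bot.
APPROVED_DOMAINS = {"news.example.com", "recipes.example.org"}

def can_buy_ad_space(domain: str) -> bool:
    # A human-written, reviewable rule: only buy on vetted domains.
    return domain in APPROVED_DOMAINS

def buy_ad(domain: str, budget: float) -> None:
    if not can_buy_ad_space(domain):
        print(f"blocked: {domain} is not on the approved list")
        return
    print(f"purchased ${budget:.2f} of ad space on {domain}")

buy_ad("news.example.com", 50.0)      # allowed by the coded rule
buy_ad("sketchy.example.net", 50.0)   # blocked by the coded rule
```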

11:15 Matt Waller: So, should machine learning be abandoned in favor of code-based development?

11:22 Amber Young: Not necessarily. If machines can be tools of oppression, they can be tools of emancipation to combat oppression, but we can't let the development of artificial intelligence outpace the development of ethical design rules and standards for artificial intelligence. People have been trying to understand how to stop oppression for a long time, and there are all kinds of theories, thankfully, about power and oppression and how to promote freedom. And in particular, in the field of education, they have ideas about how to act out emancipation in the classroom. So, we thought, "Hey, this is relevant because we wanna teach the machine learning system how not to be oppressive."

12:02 Amber Young: In our theory, we engage in a thought exercise about how partnering users and machine learning systems in an emancipatory pedagogy could combat oppression. So, what we design is a socio-technical system, where you have the machine learning platform or system that is oppressive, and you have the individual user who needs protection, maybe an individual user representing an organization, and then this new technology, which we propose and present design rules for. So, there are things that the machine learning system needs to do to make this work, things users need to do, and also a role for the new technology we propose.

12:41 Amber Young: So, we created these design rules, and the first, drawing on our theory, is that those who are oppressed and those who are oppressing need to come together to learn about each other. Relating this to our context, the machine learning platforms are designed to optimize for organizational outcomes. If the oppressor, in this case, the machine learning platform, will not, and probably cannot, be a partner in this emancipatory process, then we need a representative from the oppressor group to stand in. That's the technology we're designing. We call it an emancipatory assistant. It's basically a machine learning virtual assistant that can interface with the machine learning platform of an organization and also interface with the user to represent the user's interest whenever the user engages with the machine learning platform.

13:32 Amber Young: One rule is that, when creating an EA, EA is for emancipatory assistant, they should be designed for richness of preferences, so this means collecting primary data rather than trace digital data on emancipatory outcomes. And this can accommodate the complexity and dynamism of the user's interests. The EA will learn what the user finds acceptable and what the user finds oppressive, and then let the user know when their interests are in jeopardy. The EA can also warn of trade-offs associated with choosing different preferences or revealing certain data. Another rule for designing the EA is that they need to be able to recognize and resolve conflict. When a user's preferences or interests conflict with the demands of the platform the user is trying to engage with, the EA can facilitate compromise or negotiation across parties. And then if that fails, the EA can coordinate users for collective action.

14:29 Amber Young: The next rule for designing an EA is that they should facilitate personalized storytelling. Users should be able to share or withhold different aspects of themselves with different organizational platforms. For example, if you wanted to do an internet search for information about a medical issue, but you don't want the search engine platform to know what you're looking for, the EA can search and return hundreds of results for different medical conditions, and this floods the platform with search data, and the EA can provide you with your desired search results without the platform knowing which EA searches you were actually interested in. It can also interface with the platform to identify inaccuracies in the data the platform has collected about the user. This allows individuals to control and potentially monetize their own data, rather than having platforms monetize user data, which is how it works now. We don't make money off our own data, social media companies make money off of our data.
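
Here is a rough sketch of that search example, with entirely hypothetical function names: the EA mixes the user's real query into a batch of decoy queries so the platform cannot tell which one matters, then returns only the relevant results to the user.

```python
# Hypothetical sketch of an EA obfuscating a sensitive search with decoy queries.
import random

MEDICAL_TOPICS = ["migraine", "insomnia", "tinnitus", "eczema", "vertigo"]

def platform_search(query: str) -> list[str]:
    # Stand-in for a real search engine call; the platform sees every query sent.
    return [f"result about {query} #{i}" for i in range(3)]

def ea_search(real_query: str, decoy_pool: list[str], n_decoys: int = 4) -> list[str]:
    decoys = random.sample([t for t in decoy_pool if t != real_query], n_decoys)
    queries = decoys + [real_query]
    random.shuffle(queries)                           # platform only sees a mixed batch
    results = {q: platform_search(q) for q in queries}
    return results[real_query]                        # user sees only what they wanted

print(ea_search("migraine", MEDICAL_TOPICS))
```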

15:28 Amber Young: And then finally, the EA should educate the user about alternatives and options, so EAs can help combat fake news by promoting diversity of perspectives and fact-checking, and they can also collect data about users' understanding and help users freely explore the information environment, while giving them information about the motivations and bias and sources of the information they consume. Ultimately, the goal of the EA is not to control the user's experiences or behaviors, but to protect the users from behavioral and mind control by machine learning systems.

16:11 Matt Waller: That's pretty far out. Wow. How did you get interested in this topic, Amber?

16:21 Amber Young: Well, I'm really interested in how technology can be used for social good. It's an exciting time to be an IS researcher. The stakes have never been higher, and I wanna know and help others understand what the potential fallout of different systems are, and how we can protect ourselves. But more than just protecting ourselves, I want to start thinking about how we can use these technologies for social good proactively to build a brighter future so that maybe we could get closer to a utopian future than to Nineteen Eighty-Four.

17:01 Matt Waller: Who would have guessed we'd be talking about this 20 years ago, right? I guess people did, but it was... Back then, it was really in the future, and we're here now. And things seem to be advancing very quickly. What's interesting about this, Amber, too, is that, compared to a lot of fields of study in the business school, this one clearly overlaps with other topics, like philosophy even.

17:34 Amber Young: Definitely.

17:35 Matt Waller: And so I would think that it makes it even more complex to do research on this topic. When you're talking about things like avoiding an oppressive future of machine learning, that starts getting into philosophy, I would think, political science, economics, government issues, as well as the complexities you have to understand about the information systems themselves, like machine learning. You picked a challenging area to work in. And the reason I mention that is, as we both know, part of the challenge in a review process for an academic journal is you get reviewers with different expertise, and so I would think that it would be more challenging for an editor. 'Cause usually, for those that don't know, a professor does research, they send the manuscript in to a journal, and the editor decides which other professors to send the manuscript to. I would think, one, it would be hard for the editor to find good reviewers. And, two, when the reviews come back and you have to address them, I would think it would make it more difficult to even know how to address them 'cause you don't know what perspective they're coming from. Did you find that with this process?

19:03 Amber Young: Oh, yeah. It has definitely been a learning process for me. It's my first theory paper, and it's a future-oriented theory, which is just not common in academe. Usually we're responding and lagging, so to have a future-oriented theory published was definitely an interesting process. But we are in a low-paradigm field in information systems. These are big questions that philosophers have been addressing, and sociologists, for decades, and computer scientists have been asking what can be done. But information systems, I think, our role is looking at it from a managerial and philosophical perspective, as well as from a computer science perspective, to see what should be done. Actually, we had such an interesting and complicated review process that the editors asked us to add a whole section at the end of the paper about the challenges of writing this kind of theory to help future authors writing similar theories, because the field of Information Systems does wanna encourage more future-oriented theorizing, so our paper includes a list of hopefully helpful insights that we learned throughout our process.

20:27 Matt Waller: Thanks for listening to today's episode of the Be EPIC podcast from the Walton College. You can find us on Google, SoundCloud, iTunes, or look for us wherever you find your podcast. Be sure to subscribe and rate us. You can find current and past episodes by searching, beepicpodcast, one word, that's B-E-E-P-I-C podcast. And now, be epic.

Matt Waller

Matthew A. Waller is dean emeritus of the Sam M. Walton College of Business and professor of supply chain management. His work as a professor, researcher, and consultant is synergistic, blending academic research with practical insights from industry experience. This continuous cycle of learning and application makes his work more effective, relevant, and impactful. His goals include contributing to academia through high-quality research and publications, cultivating the next generation of professionals through excellent teaching, and creating value for the organizations he consults by optimizing their strategy and investments.



