
The Sam M. Walton College of Business

Episode 269: Augmenting Artificial Intelligence Through Cross-Functional Perspectives with Varun Grover

April 03, 2024  |  By Brent Williams


This week on the Be Epic podcast, Brent sits down with Varun Grover, Distinguished Professor of Information Systems and George & Boyce Billingsley Endowed Chair in the Sam M. Walton College of Business at the University of Arkansas. They explore the profound effects of AI on individuals, businesses, and society. Varun highlights areas where AI can enhance lives through applications in healthcare, education, and more. However, he cautions that without understanding how AI reaches conclusions, trust remains elusive. Brent and Varun discuss improving AI through specialized models, explainability, and richer inputs and contexts. Varun asserts that augmenting human skills, rather than automating them away, ensures AI uplifts roles and fuels innovation. Listeners gain insight into AI's trajectory and its role partnering with, rather than replacing, humans.


Episode Transcript

Varun Grover  0:00  
You never have innovation through automation; you will only have innovation if there are ways in which the human and the technology can work together.

Brent Williams  0:12  
Welcome to the Be Epic Podcast brought to you by the Sam M. Walton College of Business at the University of Arkansas. I'm your host, Brent Williams. Together, we'll explore the dynamic landscape of business and uncover the strategies, insights and stories that drive business today. Well, today, I'm lucky to have with me Dr. Varun Grover, and Dr. Grover's the George and Boyce Billingsley Endowed Chair and Distinguished Professor of Information Systems in the Walton College. Varun, thank you for joining me today.

Varun Grover  0:46  
I'm delighted to be here. Thank you.

Brent Williams  0:48  
Well, I think we're going to touch on some interesting topics today. But maybe before we do that, I would love for our audience to get to know you better. I've gotten to know you over the last five years; I think you've been at the Walton College since 2017. But tell us a little bit about you and your background before coming to the Walton College.

Varun Grover  1:09  
Sure. So I'm a techie at heart, in the sense that I did my undergraduate degree in electrical engineering, and that's where my interest in technology started coming about. I had the good fortune of getting into a very competitive engineering school that allowed me to think systematically about technology. So when I graduated, I kind of felt that I was in a very narrow silo: I understood the technology, I understood technical concepts, but I didn't really understand the world, the broader context. And so I decided to do an MBA and then go on to my PhD in information systems. After that, I've basically been in academic jobs. I was at USC, and then I was at Clemson, and my primary responsibility in those jobs was to build the PhD program, conduct research, set up centers, and do some outreach, dealing with important problems in the information systems domain. So I've been integrally involved in this field, and the field has evolved, because the technology catalyst keeps driving changes. If you think of technology in the 1960s, the '70s, '80s, '90s, and now, now we're in this digital world that a lot of people really don't know how to deal with, and companies are still struggling. And now we've got AI coming in, adding a new wrinkle. So what I like about the field in general is that we don't have a dearth of problems to study. There are lots of challenges faced by companies, faced by individuals, and now the impacts transcend to society. So there are a lot of really, really interesting issues, and that keeps me excited about the field.

Brent Williams  3:18  
Yeah. Well, you know, Varun, you are a researcher, you're a teacher, and you serve the university and the college through the normal mechanisms. But you have spent the majority of your career really focused on research, and, as you say, your work in PhD programs is really advancing the research of these institutions. When you think about business research maybe more generally, how does it impact the world? How does it impact others? And how have you viewed that over time?

Varun Grover  3:53  
Yeah, that's a really interesting question, because even business research has evolved. If you think about the 1950s, business research was not much of anything; it was more, you know, maybe some outreach and case studies that you did with businesses. But then the scientific aspect of business research came in the 1960s and 1970s, and we kind of went overboard with the scientific concepts of theory and became somewhat disassociated from practice. So my thinking is that in an applied business discipline, it's really important that we create interventions for practice. Any research that we do should not be esoteric and isolated from the real world. When we think about research projects, traditionally our approach would be, you know, to look for research gaps in the literature and then try to fill those gaps. I think the more productive approach is to identify real problems faced by practice, and then to convert those practical problems into research problems. So it's more like engaged scholarship: you engage with practice at a deeper level, try to look at their problems, and then translate those problems into research problems, so that you can create interventions as a result of your research that actually make a difference in practice.

Brent Williams  5:27  
That's right. What a good way to explain that; I just couldn't agree more. At least in my own research, that's where the research questions I've been most interested in, and ultimately, I think, the research that has had the biggest impact in my own career, have come from. Same for you?

Varun Grover  5:44  
Absolutely

Brent Williams  5:45  
Well, I'll brag on you a minute, because I don't think you would brag on yourself in this way. A couple of interesting statistics about you as a researcher: I think you've published more than 400 articles in your career, over 100 in the top journals, and you are listed as one of the 100 most cited researchers in the world, not just in information systems, but across business. What an accomplishment. And so you have truly, over a career, made a real impact in the academy and in practice.

Varun Grover  6:24  
Thank you. Thank you Brent. 

Brent Williams  6:26  
Well, your research, and I told you this before, I'm kind of borrowing from your bio here, but at a real broad level, as I saw it described, focuses on the effects of digitization on individuals, on businesses and organizations, and even society. Tell us a little more at a high level before we get into some of the more detailed questions.

Varun Grover  6:50  
So I've always been interested in how technology creates value, and information technology is particularly of interest to me. When we look at computing technologies in a business context, what are the aspects of these technologies, and the context around them, that lead to value? Value at the individual level: how can technology enhance individual productivity? Then value at the group level, and value at the organizational level: how do you actually create competitive advantage through these technologies? How do you create greater profitability, more revenues? Then at the market level: how do you make markets more efficient through technology? And now, even more so, at the societal level. With technology, and particularly digital technologies, penetrating every aspect of what we do, there are issues of value, and the value could be positive or negative. So now we are seeing the two faces of technology: you have the intended consequences that create positive value for companies, and then you sometimes have unintended consequences that create problems for society. So at every one of these levels, I like to study different questions regarding the value proposition of digital technologies.

Brent Williams  8:27  
Well, this has to bring to probably anyone's mind, when you talk about the intersection of society and digital technology as we sit here today, the conversation around artificial intelligence. AI is everywhere. You know, when ChatGPT popped onto the scene, it really just brought it to the consciousness; even though this has been developing for years, if not decades, it really brought it to the consciousness. You're not an AI researcher, I know. But I do think that you're probably in a unique position to think about this intersection between this technology and people, businesses, and society. So maybe I'll just stop right there and say: as you're observing this phenomenon of AI, what's going through your mind, and what are the observations and questions that are becoming apparent to you?

Varun Grover  9:28  
Yeah. As many people are fascinated by AI, so am I. It's not my primary area of research, but clearly it falls within the domain of digital technologies. And like you said, ChatGPT came on the scene and it blew people away, not necessarily because it was the most accurate or most interesting output, but because it gave people a sense of the possible: if this is the starting point, and it can generate such interesting outputs, then where is the world going? I remember the first time I played around with ChatGPT, I asked it to write a story. I said, you know, here's a little girl walking up a hill, and there's a big rock at the top of the hill; write a story for a five-year-old. And it wrote a pretty compelling story that I actually read to a five-year-old. Then I said, okay, redo the story for a 10-year-old, and it reformulated the story, changed the language, and again produced a fairly compelling story for a 10-year-old. Then I asked it to do one for an adult, and it did it. And then I pushed it to be more creative in the ending: make it more interesting, make it more controversial, make it more enigmatic. And every time I did it, it came up with interesting changes. And I said, wow, you know, this is actually working from the corpus of data it's been trained on, and yet it has this layer of creativity, and we don't quite understand how it's getting there. And this was the early version of ChatGPT. More recently, I was playing around with, well, now there are many, many of these AI-based chatbots. And I noticed one of the differences. I was playing around with a chatbot called Pi, which now, I believe, has been acquired by Microsoft. And I found it to be extremely conversational. What was different in this version from the earlier ones is that it had empathy. It actually could relate to me; it was taking cues from my text and putting that emotional response back to me.

Brent Williams  11:56  
Interesting. 

Varun Grover  11:56  
And that was very interesting. So it's almost like we're coding emotions into these training sets now, to provide far more conversational, far more personal AI reactions. And I started thinking about this; it is absolutely fascinating where this can go. Because if you can have that level of interactivity, conversationality, and empathy with individuals, you can put this front end on pretty much any intelligence, and you can access the world. So customer interfaces: providing excellent customer service, and making it conversational so you can align with what the customer wants. Personalized learning in the education sphere. Companionship for old people: you know, if you add a physical manifestation, like a robot, and then you have this AI back end, you can maybe alleviate some loneliness in the world. So clearly, there are tremendous implications. And then you see the applications of AI. One of the things that we've often talked about is the medical field. In the medical field, you can feed in radiology reports, and it can find abnormalities; that's kind of low-hanging fruit. But when you have this kind of conversational AI, you can put that in front of medical knowledge and you have a doc in a box. And when you consider the shortage of primary care physicians, this would be a very interesting way to layer our medical interactions, where you have a first layer of interaction where you can feel comfortable talking. Even though it's not a person, you can talk to the box, and that can do a diagnosis and write up a report. Then it can go to the next level for verification with a human, and then go to the next level for a referral to a specialist. It can change the medical field quite significantly. And we've seen applications of AI in supply chain and manufacturing with robotics, and in the financial sector with wealth planning.

And so when you have these interfaces, you can basically interact and get it more aligned with what you want in terms of your wealth goals. Recently, I was reading an article that described IBM's Watson working with a fashion house. They actually trained the AI on videos of models walking down runways, social media included, and the AI was quite adept at identifying fashion trends and new clothing designs based on what it observed through this corpus of data. So the diversity of areas where AI can have an impact is remarkable, which is why it's so profound, and in some sense also a little scary.

Brent Williams  
Interesting. Well, you mentioned profound and scary. You also talked about a wide set of potential impacts on people, business, and society at large. I'll kind of dive into two and ask you a question. You mentioned wealth management, and you mentioned health care. And as I was sitting here listening, I was thinking, oh, those are two pretty personal topics to me: one, you know, how am I managing my money, and two, managing my health? And I know that a lot of your research in digitization and how it affects people, organizations, and society has looked at trust at some level. So what have you learned about the way that trust, I don't know if it mediates those two, but tell me what you've learned and how you think that's going to play out with AI?

Varun Grover  
Yeah, this is a really interesting question of trust. Because historically, when we looked at information systems, we programmed these systems; we could actually code them. So if we had to make a computer play chess, we'd say, okay, this is the way the queen moves, this is the way the bishop moves, these are the rules of chess; we put it into a program, and then it would play chess. The earlier AI systems were actually programmed systems; they were rule-based systems. So there was AI in the 1970s, but it was usually referred to as the snake oil of the 1970s and '80s, because it overpromised and underdelivered; you could only have some gaming applications. Yes, you had medical diagnostic systems, but they were based on rules, and the rules were coded in. So if a patient had a temperature over 98.6, the patient had a fever; if their white blood cell count was over a certain level, they had an infection. You'd give it the symptoms, and it would pull out the rules and come up with the diagnosis. What's changed now is that we have had the perfect storm of processing power, with these GPUs, graphics processing units (Nvidia is the big player there), massive amounts of data, and low cost of data storage. With this processing power and data, we can look for patterns in the data, identify these patterns, and make predictions based on these patterns. So the large language models that we see today are really next-word predictors. Although now we can actually feed them videos and images, so they're multimodal models, the language models are really next-word predictors; they don't have any raw intelligence of their own. They're looking for these patterns in the data. And so while the old systems were programmed, the newer systems are digging out these patterns, almost like the neurological connections in your brain. It's actually creating these connections.

But we don't really understand how they're creating these connections. It's almost like asking a person, how did you come up with that decision? What neurons were connected in your brain to come up with that decision? We don't really understand it. And if we don't fully understand it, well, it's like Warren Buffett says: never invest in anything you don't understand. Here, we are relying on the outputs, and we don't really understand how the outputs came into existence. It's massive amounts of data and millions of parameters in the model. So without understanding it, why should we trust it? And if we can't trust it, then why should we use it? I think it's a fundamental question of technology and value, which is my major stream: we need to be able to trust these systems. Another way to think about this: typically, if you look at scientific research, the idea of actually looking at patterns from data was considered voodoo science for many years.
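The contrast Varun draws here, programmed rule-based systems versus pattern-learning next-word predictors, can be sketched side by side. This is a deliberately minimal illustration (the rules, thresholds, and tiny corpus are hypothetical; real large language models use neural networks over huge corpora, not raw counts):

```python
from collections import Counter, defaultdict

# The old way: hand-coded rules, in the style of 1970s-80s expert systems.
# Every rule is written by a person; nothing is learned from data.
# Thresholds here are illustrative only, not medical guidance.
def diagnose(temp_f, wbc_count):
    findings = []
    if temp_f > 98.6:
        findings.append("fever")
    if wbc_count > 11000:
        findings.append("possible infection")
    return findings or ["no findings"]

# The new way: patterns dug out of data. A toy bigram "next-word predictor";
# the entire "model" is just counts of which word follows which.
corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Predict the most frequent continuation seen in training."""
    counts = following.get(prev)
    return counts.most_common(1)[0][0] if counts else None

print(diagnose(101.2, 13500))  # ['fever', 'possible infection']
print(next_word("the"))        # 'cat': follows 'the' twice, vs once each for mat/fish
```

Note that `next_word` has no notion of grammar or meaning; it only replays frequencies from its training data, which is the sense in which Varun says such models "don't have any raw intelligence of their own."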

Brent Williams  19:51  
Yeah

Varun Grover  19:51  
This is induction; you don't look for patterns.

Brent Williams  19:54  
Yeah

Varun Grover  19:54  
You come up with a theory or a statement and then you go on to get data and test it. 

Brent Williams  19:59  
Yeah, that's right. 

Varun Grover  20:00  
But what are large language models doing? What is AI doing? It is really creating and testing theory simultaneously, without any scientific underlay. So when I think about this, well, I've always grown up, through my information systems career, thinking about data, information, and knowledge as a trichotomy. You have data, which are raw facts. Then you put the data together, and you get information, which informs you; you get interesting reports that reduce uncertainty. And then when you apply and organize the information with experience, you get knowledge. What we've done with AI is we've removed the middle layer; we've gone straight from data to knowledge. We're basically taking massive amounts of data about the world, and we are proclaiming that this is actually creating knowledge. So we don't really need to understand the world to know it. That middle layer has gone. So that's why this idea of trust is so fundamental, because we don't really understand how these connections are made. And so I was thinking about, you know, why do people trust other people?

Brent Williams  
Yeah. I was just thinking exactly the same question.

Varun Grover  
And so, do those elements actually translate to AI? I think people trust other people because they may have a connection with them; they actually relate to them, they have empathy, or they relate at some subliminal level. So they say, I trust you. People may trust people because they understand where they're coming from. So if someone makes a decision, you can ask them: okay, what's the basis of this decision? And do I trust you to explain to me how you came up with this decision? People trust people because they are predictable, they are reliable, they give accurate outputs. If someone is erratic, we probably won't trust them; but if someone is more stable and predictable and comes up with good outputs, we tend to trust them. And people trust people because we might share the same social experiences. So these elements: we trust people because we can connect with them or relate to them; we trust people because we can understand where they're coming from and can ask them; we trust people because they're reliable and not erratic; and we trust people because we have a shared social world. That is generally why people trust people. These elements are not very prominent in AI. And so I think that's why trust is such an important issue. Although they're changing.
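Varun's earlier data, information, knowledge trichotomy can be made concrete in a few lines (the sales records and the manager's rule of thumb below are made up purely for illustration):

```python
from collections import defaultdict

# Data: raw facts (hypothetical daily sales records).
data = [
    {"store": "A", "units": 120},
    {"store": "B", "units": 80},
    {"store": "A", "units": 150},
]

# Information: data organized into a report that reduces uncertainty.
totals = defaultdict(int)
for row in data:
    totals[row["store"]] += row["units"]

# Knowledge: information interpreted through experience, here a
# hand-coded judgment a manager might apply to the report.
best = max(totals, key=totals.get)
print(dict(totals))   # {'A': 270, 'B': 80}
print(f"Store {best} leads; worth asking why.")
```

The point of the sketch is the middle step: pattern-learning AI, as Varun argues, effectively jumps from the raw records to conclusions without the explicit, inspectable report in between.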

Brent Williams  21:30  
Well, that's what I was going to ask you: how do you see that changing and evolving? One that I could see is what you just mentioned: the models are getting more reliable, let's say less erratic. That one may change. And you even mentioned the ability of, well, I don't think you said it this way, but I think you were mentioning Pi, the product you were working with, that it had a feel of empathy, which might mean you could emotionally connect to it more easily. So some of these things seem to be evolving.

Varun Grover  23:44  
I think so. From whatever little observation I've had over the last year and a half (it's not been that long since ChatGPT was released), I can see improvement in empathy; I think that element is there, so we are starting on that trajectory. In terms of understandability, there is a whole subfield of AI called explainable AI. How do you take these neural connections and convert them into a language that humans can understand, so the system can explain the rationale for its decisions? Now, it may not be that important for radiology reports, but if we're dealing with a business decision, you know, often you want to understand the rationale for the decision. So explainable AI is actually being worked on, and it's important. It's particularly important because AI is all about data and connections and patterns; AI is not particularly good at reasoning. If the AI sees in a pattern that x is the parent of y, it will not naturally infer, or deduce, that y is the child of x. That reasoning ability will only come about if it's in the corpus of data. The question then becomes, how do you get this reasoning out of the AI? And that's where a lot of the research on explainable AI is. I'll give you another example. We can take millions of chess games played by world chess champions and feed them into the AI, and it'll figure out the rules of chess and what works and what doesn't, so that it will basically be a really good chess player. But if we change the board from an eight-by-eight board to a nine-by-nine board, then the training doesn't really help, because it needs more than that; it needs some kind of reasoning ability to play chess well on a nine-by-nine board.

So that idea of how you get reasoning to come into, you know, the box, the corpus of intelligence of AI, is something that is being worked on through explainable AI. So empathy: yes, we're seeing improvement there. Explainable AI: we're not quite there, but there's certainly a direction there. And, like you said, absolutely, the corpus of training data. Most of the AI foundational models, like those from OpenAI, and from Google and Microsoft, are trained on a massive corpus of data from the internet. The internet data is noisy, and it has a lot of problems, which means these models can provide pretty good general advice on a lot of things. But if you're looking for very specialized advice, sometimes these models hallucinate and create arbitrary outcomes, because they're not fully trained in those specialized areas. So I think one of the trends is: how do you actually curate data for training and make these models more accurate and better? And how do you move away from the big tech companies, which have the resources to train these large foundational models, to smaller models that are done through open source, where individual companies may not have to invest millions of dollars in training these models but can train them on their own data?

Brent Williams  27:49  
Right.

Varun Grover  27:50  
And it can become more personalized and more accurate within that domain. So I think that's the third component of trust: these models are being trained on better data, and sometimes, instead of the large models, we're getting smaller and more specialized models that are more accurate. I think that's a trajectory that will improve trust. And finally, in terms of context: I think that context is also improving, because the earlier models had very little that you could put into the model as input. Now, with some of the newer models, you can put in entire books. Your context window, the number of tokens that go into the context, is becoming larger and larger, which means you can provide a richer context for your decision making. And then you can have conversations to get alignment, to get what you want from the AI, and that alignment can continue by probing and interacting with the AI. So on that aspect, the context aspect, we're also seeing improvements. So: I will trust the box because I can connect with it. I can trust the box because I know where it's coming from; it can explain its reasoning to me. I can trust the box because it gives me good, accurate outputs. And I can trust the box because it's grounded in my world. All those aspects are advancing, and so I think that on the technological front, we are seeing a trajectory that's positive for trust.
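The growing context window Varun mentions is, at bottom, a token budget. A minimal sketch of fitting conversation history into such a budget follows; it approximates tokens as whitespace-separated words, which real tokenizers do not do, but the budgeting idea is the same:

```python
def fit_to_context(messages, max_tokens):
    """Keep the most recent messages that fit a fixed token budget.
    Tokens are approximated as whitespace-separated words; production
    systems use a real subword tokenizer instead."""
    kept, used = [], 0
    for msg in reversed(messages):       # walk newest-first
        cost = len(msg.split())
        if used + cost > max_tokens:
            break                        # older history no longer fits
        kept.append(msg)
        used += cost
    return list(reversed(kept))          # restore chronological order

history = ["first question",
           "a fairly long earlier answer from the model",
           "follow up question",
           "latest answer"]
print(fit_to_context(history, 6))  # ['follow up question', 'latest answer']
```

The larger the `max_tokens` budget, the more of the conversation, or even entire books, survives the cut, which is the "richer context" Varun describes.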

Brent Williams  29:47  
I'm going to ask you about the human. Maybe let's start in the business context, right? So let's say the models are improving, they're becoming more trustworthy, and they're certainly becoming more commonplace. There's still the excitement, and there are still some people who think it's scary; probably each of us lands somewhere on that continuum differently. But in the business world, you know, the role of the human in business: does it change slightly? Does it change fundamentally? Do you have any thoughts to share on that?

Varun Grover  30:28  
Yeah, I think that's a million-dollar question, or maybe a billion-dollar question. Or now, given the market for AI, maybe a trillion-dollar question.

Brent Williams  30:39  
If you can answer this, we can raise a lot of money for the Walton College.

Varun Grover  30:43  
But I think, so, how should we use AI? If we are using AI to automate and replace jobs, so you can do an accounting job better, or you can do a legal job better, then what is essentially happening is we are devaluing labor. We're devaluing human skills and labor, because we're essentially replacing labor with the technology, and that's going to bring the cost of labor down. So substitution of jobs with technology is probably not the most innovative direction, even if it's sometimes appropriate. And probably the incentives for corporations to make money are in the cost-cutting sphere, so we're certainly going to see a lot of that. But I think where we're going to see the human and the technology work better is through augmentation. You never have innovation through automation; you will only have innovation if there are ways in which the human and the technology can work together. If you automate a lot, you're basically devaluing the human; if you augment, you're up-valuing the human. Because when you automate, the competition is between the AI and the human; when you augment, the competition is between the human with the AI and the human without the AI.

Brent Williams  32:24  
Right.

Varun Grover  32:25  
And so you're upskilling. I think that we are at a point of inflection, where we are making thousands of decisions regarding AI. And it's really important that if corporations are going to invest in AI, they consciously think about how to augment human skills through AI, rather than only automate. Automation will probably give you the short-term bang for the buck in terms of cost reduction, but the longer-term impact on innovation will only take place if you give people the discretion to leverage AI and use it innovatively. So if there's a series of tasks in your job and there are a few that can be automated, that's fine. But then think about which tasks in your job can be augmented. That's where the innovation comes in, and that's where the value is created.

Brent Williams  33:27  
Well, Varun, thank you. Thank you for sharing your knowledge, knowledge that comes from decades of work and research. I know I learned a lot, and I'm sure our audience did as well. So thank you for joining me today.

Varun Grover  33:44  
Thank you, Brent for having me. 

Brent Williams  33:45  
On behalf of the Walton College thank you for joining us for this captivating conversation. To stay connected and never miss an episode, simply search for Be Epic on your preferred podcast service.

Brent D. Williams is the Dean of the Sam M. Walton College of Business at the University of Arkansas. With a deep commitment to fostering excellence in business education and thought leadership, Dr. Williams brings a wealth of experience to his role, shaping the future of the college and its impact on students and the business community.



