Cindy Moehring chats with Vivienne Ming, a self-described mad scientist, theoretical neuroscientist, AI expert, entrepreneur, and author who is well known for her research and inventions. The pair discuss the responsible and ethical use of AI as a tool. AI is not able to solve problems that humans don't know the answers to first; therefore, the best practice for solving impossible business problems is combining creative people with sophisticated artificial intelligence.
Resources From the Episode
- Ming’s Financial Times Article: “Human insight remains essential to beat the bias of algorithms”
- Ming’s article on Diversity-Innovation Paradox
- Socos Newsletter
- Remote Work
- This Is Not the Industrial Revolution
- Explore economic working papers of the National Bureau of Economic Research
Cindy Moehring 0:03
Hi, everyone. I'm Cindy Moehring, the founder and Executive Chair of the Business Integrity Leadership Initiative at the Sam M. Walton College of Business, and this is TheBIS, The Business Integrity School podcast. Here we talk about applying ethics, integrity and courageous leadership in business, education, and most importantly, your life today. I've had nearly 30 years of real world experience as a senior executive. So if you're looking for practical tips from a business pro who's been there, then this is the podcast for you. Welcome. Let's get started.
Hi, everybody, and welcome back for another episode of TheBIS, The Business Integrity School. I am really excited to tell you today that we have a self-described mad scientist with us, Vivienne Ming, and she is going to lead us through an incredibly insightful conversation that I know you all will be jazzed about. Hi, Vivienne. Nice to meet you and see you again.
Vivienne Ming 1:03
It's always wonderful to have a really fascinating conversation. So it's great to be here.
Cindy Moehring 1:09
Great. Let me tell you guys just a little bit about Vivienne, and then you'll understand why it is such a treat to have her with us today. In addition to being a self-described mad scientist, she's a theoretical neuroscientist, an AI expert, an entrepreneur, and an author, and she's frequently featured for her research and inventions in the Financial Times, The Atlantic, The Guardian, Quartz, and The New York Times. You won't believe this, but she's actually now co-founded her fifth company, and we're going to hear a little bit about that journey and what the current company, Socos Labs, does. Socos is an independent institute exploring the future of human potential. Before that, Vivienne was a visiting scholar at UC Berkeley's Redwood Center for Theoretical Neuroscience. And she has also invented some really cool systems that use AI for good: she's created systems that do things like help treat her diabetic son, predict manic episodes in bipolar sufferers weeks in advance, and even reunite orphaned refugees with extended family members. So welcome, Vivienne. My goodness, you, I have to say, are living the dream of many students and many entrepreneurs today. Can you just share with us a little bit about your personal journey? How did you get to where you are today, founding your fifth company?
Vivienne Ming 2:28
Yeah. Well, you know, as you unravel my story (and we probably don't want it to take the entire discussion time, though it could), you'll see that most of what people are probably imagining is actually the second half of my life. And strangely, I'll start there, at the start of the second half. So I was an academic. I did my PhD work at Carnegie Mellon, where I studied cognitive psychology and computational neuroscience, and I really loved the idea of using machine learning, artificial intelligence, to understand people. And I didn't think of it as just a scientific curiosity, but I wasn't expecting, back in 2000, a world to emerge in which, by 2020, artificial intelligence would become a dominant technology in a lot of fields, from medicine, to logistics, to just ad targeting. So it was really dumb luck that I happened to be walking into this field. But also, I'm not an artificial intelligence researcher. I study brains. I just happen to use machine learning to do that. And it turns out brains are really interesting for inspiring new forms of machine learning. So for me, it's always been a tool, something you apply to a problem. I'm thrilled that people can geek out about convergence proofs and complexity theory. I love stuff like that. But to me, it's just fun. Ultimately, this stuff is meaningful because someone might be alive because of a choice you made that otherwise wouldn't have happened. That's what really moves me. So I was doing my PhD, then I was on staff at Stanford and Berkeley. And then my wife and I had an idea to start an educational technology company. You know, I brought my AI chocolate, and she brought her education-and-psychology peanut butter, which, if you're old enough to get the reference, means you're a good person. If you're not, well, you can probably guess what I think about you. So I started this company, and I didn't really have a background in the business space. I'd been a scientist, I'd been other things; I'll come back around to that.
But we wanted to make a difference. This could just have been an academic experiment, but we saw a way to listen to students talk to each other and, from that, without ever giving them a formal assessment, actually tell how well they would do in the class. And our deep belief was: why wait until a student fails in class, or even gets a B? Could we, as soon as possible, identify where students had missed concepts? And I didn't want to replace the teacher. Could we actually create a little dashboard that would allow teachers to explore what's going on in the black box of their students' heads? And I was so proud of myself; it was cool. We published scientific papers, which is not necessarily what you're supposed to do with a startup, but that's what I did. Boy, were venture capitalists not interested in education. They weren't interested. I mean, they thought artificial intelligence was the coolest thing in the world (they had no idea what it was back then), and the only use they could see that would be worthwhile was looking at credit card fraud. So the only deal on offer was if we completely started over with a new business plan, retooled to focus on credit card fraud, and I fired myself as CEO. One guy said, "I've got two million dollars for you, and I've got the perfect CEO," as I'm literally sitting across the table pitching my company to him. So, you know, I have founded a bunch of companies since then, and I've been the chief scientist of several others. It was a good idea; I can say that with great confidence. In retrospect, I may not have truly known what I was doing, but we had a good idea that could have made a big difference. Yet no one could see the value-add of either me as an entrepreneur, or of making a difference for at-risk kids in education. And you know what? They're right. Of course we could have made more money in fraud detection than in education. Education is not a great place to go make money. There's money to be made there.
There's a sustainable business. But I love being a scientist, I love being an academic. So I wasn't chasing the next billion-dollar deal; I was looking at how something I could build would have a positive impact on the world in a sustainable way, and could grow big. I mean positive and sustainable at the scale of a billion kids. But boy, was that a hard thing to get across. Before Coursera, before Khan Academy, before AI was an everyday concept in the business world, these were hard things to get across. And it did not help (now for the big reveal) that the company was co-founded by a couple of married women, and that one of the cofounders, despite having a couple of very fancy-schmancy PhDs and scientific publications, had spent a fairly long time in the '90s homeless. You know, I am not Hollywood central casting's idea of what a geek tycoon looks like. It turns out most successful founders aren't that, either. But trying to go out there and raise funding through that is hard. And, you know, in the end, we had our successes. We had a small acquisition; it wasn't my dream, but it was an outcome. Once you make money for people, suddenly doors open up. Every company I've done since then was so much easier. But it also really opened my eyes to how much extra effort goes into just passing through those first barriers if you are even a little bit different from the stereotype of an entrepreneur. But I will admit, I got kind of hooked. I started an ed-tech company, then I started another one. And then I got started in a company doing social connection in business: instead of who do you know, a la LinkedIn, it was who should you know? You go to the big conference in your field; who should you meet while you're there that would really make a difference? That was one of my coolest projects.
I got poached out of that to be the Chief Scientist of a company called Gild, one of the first ever doing AI in hiring. And oh my goodness, is that a fraught ethical field, but also incredible, from both a business and a technology standpoint. So, I'll close with this: I've had the chance to start these companies, and I've been the Chief Scientist at three others. I've actually started my sixth company fairly recently. It's called Dionysus, and we're in public health. Again, the things you could do with artificial intelligence that would make money, lots of money, don't include education or public health. But it turns out, if you want to make a big impact, the overwhelming positive effect on mortality and morbidity over the last 40 years has come from public health. You know, it's great that people have discovered drugs and come up with new surgical techniques, but overwhelmingly more people are alive today because of improvements in public health. So we're looking at things like how chronic stress directly, causally leads to ischemic heart disease, type 2 diabetes, major depression, and exacerbation of bipolar disorder: the things that ruin lives over decades. Again, probably not the sexiest business idea, but this is an area that will actually affect people in a positive way.
Vivienne Ming 11:30
And if I wanted to be a billionaire, I'd already be a billionaire. This is the world I want my kids to grow up in.
Cindy Moehring 11:37
Exactly, exactly. And you're making a huge difference. Being in education, I have to go back and ask you about that first business idea you had, not to take the teacher out of the classroom, but to help them see where students were missing concepts. What happened to that idea?
Vivienne Ming 11:51
Well, you know, some of it was me learning things as an entrepreneur-scientist who could totally geek out. One of my advisors in grad school told me what his advisor had told him, which was always meant to keep my head up: science is a story. And that story should be about an hour long. But science is a story. What that means, one, of course, is to be able to communicate the work you do to more than just the six other people in the world who understand it. But the other is that it's not about one single discovery, or one paper; it's about the story that explains your entire field. Everything you're doing is actually building that story.
Vivienne Ming 12:40
It's like a story as a meta-hypothesis of the whole way the world works. But still, you know, I am a scientist. I'm moderately clever. And I walked into this space with the attitude of: I'm so much smarter than everyone else. I'd invented this engine that could actually identify a student's missed concepts and review them. And the general feedback we got was: oh my God, this is amazing. And terrifying.
Vivienne Ming 13:17
And: what the hell am I supposed to do with it? You know, I get paid for 40 hours a week, I work 70 hours a week. When am I supposed to learn how to use this incredibly confusing thing? So, you know, one of the things I learned the hard way over the years, both in philanthropic work and in entrepreneurship, is: one, you're never as smart as you think you are. Just throw that out, right off the bat. But also, it wouldn't even matter if you were. Even if I was 100% right, if they didn't use it, I didn't make a difference...
Vivienne Ming 14:02
And so we found ourselves going back and back and back. I've worked on a lot of products over the years; education is the one that, in a sense, has never ended. From that original piece of work, analyzing students' conversations (literally just university students talking to each other online), we could pick up on whether they understood the material they were talking about, who might have missed concepts, and so forth. And we built dashboards and all that sort of stuff. We have walked that back all the way to: Mom and Dad, do you have 20 minutes free tonight? If you do, here is the way we think you should spend those 20 minutes. Here's an activity that we believe will have the biggest impact on your young child's life. So instead of a dashboard for a teacher to explore; you know, I learned the lesson. They want answers. They don't want more data. I mean, I wish people wanted to explore. In this process, you have to think: what is the problem to be solved here? And the problem is, you're teaching in situations where you've got enormous constraints on your time. Yeah. So instead of creating new things to do, how do I make the things you're already doing easier and more effective? And let's be honest, if the choice is between easier and more effective, and we can say this will increase the university admission rate for your students by 50%, or it will cut your grading time in half, exact same product, then cutting the grading time in half will win every single time. The university stuff is a nice-to-have. But the grading time, they feel that. So I guess the way I always describe it is: I invent broccoli, and then I have to spend years figuring out how to bake my broccoli into a brownie. People get a brownie. They have distribution channels for brownies. They know what it is. It would be great if I could give people the raw broccoli, because it'd be good for them.
But that doesn't actually fix the system that we're trying to address. So we built this stuff, and it's evolved over the years. We run a system called Muse, which is in desperate need of an overhaul that we're working on right now, and it does what I described: we resolved it down to, all right, who cares the most about the long-term outcomes of a kid, and when can you have the biggest impact? Early in their life, directly through caregivers: a parent, a grandparent, a foster parent, whoever it might be. And make it super easy. We're gonna ask you one question a day, and we'll give you one activity a day. And that's it. We won't even allow you to do more than that. You know, the most at-risk kids, the most at-risk families, are the ones least able to make use of the kind of technology I'm talking about. So make this simple. And this is a general truth, I think, about technology and product development in general: build a product where your desired outcome is not just a possible outcome. It's not "I can imagine a world in which people put in the extra effort"; it is the inevitable outcome. People will inevitably choose to make use of it, and it will have that positive impact. Because our world, particularly the technology space, is overpopulated with people who, and I'll just pick an obvious name, I don't think in a million years Mark Zuckerberg saw Facebook as something that was going to erode public trust. He didn't intend for it to be, and it doesn't have to be. And yet in the end, in certain places, it absolutely has that impact. And I think he and many of the people building it are still in that space of: I can imagine a world in which this is an undeniable good. But we only live in a world where the inevitable outcomes dominate. And that's the kind of product you have to build.
Cindy Moehring 18:45
Yeah, yeah. Very interesting. Okay, let's dive a little deeper into this whole topic of responsible and ethical use of AI. I love how you framed it up at the beginning as being a tool. So that sort of human-in-the-middle idea is what comes to mind for me, and the thought of not handing over to a machine a problem that you don't already know the answer to and expecting that somehow it's just going to come up with the answer. I think you actually wrote a great piece on that in the Financial Times, which I'll drop in the show notes for everybody to read. But let's talk specifically about bias in our promotions and our hiring. We know it exists. We know that we all have implicit biases as humans, right? So as much as we may try to root it out, it's there. And the risk with AI, it seems to me, is that it's only going to exacerbate that, because we haven't yet figured out how to completely root out bias in hiring and in promotions, right? A perfect example is what happened with Amazon and, you know, the secret tool they had built. So what do you think the answer is to this problem? And is it a problem that AI can truly help us with or not?
Vivienne Ming 20:01
You know, I will start again with the simple statement that AI is just a tool. It's not a magic wand. I am not Hermione Granger; I can't magic problems away using AI. But I'm gonna say, for a lot of AI people, at least in the early days and those early promises, that's exactly what was being sold. You've got a business problem, or anything else? Here's this magic, which will make it go away.
Cindy Moehring 20:29
Right. It didn't work.
Vivienne Ming 20:31
"It's numbers and equations, so of course it can't be biased," which is a truly absurd notion. I know that machine learning, as a complex field, can be a little obscure and obtuse. But the fact is, when problems get complicated, there's a universal truth, which is that all of these systems have bias. In fact, it's impossible not to have bias, whether we're talking about human cognition, or complex machine learning, or a rat; it doesn't matter. The definition of AI that's most useful, if a little wonky, is: any autonomous system making decisions under uncertainty. There's no right answer. There are better answers, but there's no right answer. So fundamentally, in a world with no right answer, there's bias. There are different ways, for example, to measure what is a better and a worse answer. So right off the bat, you can appreciate that even if we were talking about two people, they might have different criteria for what makes a great employee, or what makes a great loan, or what makes a great product recommendation.
Vivienne Ming 21:45
And so they're gonna recommend different things. Well, guess what? AI is just as idiosyncratic, as unique, as the rest of us. Now, it is that in a very different way. Let's be clear: artificial intelligence is not the same as natural intelligence. Maybe someday these things will converge. I do work in neural prosthetics, i.e. I build cyborgs, so I have that vision of a world. Or maybe AI itself truly becomes intelligent. But what AI can do that we are not so good at is take every single number into account.
Vivienne Ming 22:23
But that doesn't mean it's unbiased. When a human recruiter looks at a resume, and this has been well researched for literally a hundred years, they pay attention to about three variables: your name, your school, and your last job. For your average recruiter, that's it; maybe five seconds, a quick look. And you think that's terrible. But that's almost exactly what a grandmaster in chess does. When they look at a chessboard, they don't consider a thousand different possible moves; generally speaking, they are looking at three possible moves. And for the grandmaster, it turns out those are often the three best moves, if you really crunch the numbers. Whereas a lot of AI is going to do massive computation and explore lots of possibilities.
Vivienne Ming 23:21
But it also has its biases, based on how it's been trained, much like us. So a recruiter looks at three pieces of information, but I can make a machine learning system, and I did at Gild, that looks at 400,000 pieces of information. And it didn't even take five seconds; it took a couple hundred milliseconds to process that information.
Vivienne Ming 23:44
So now we can see two very different things going on: this massive glut of information used to infer something unknown about the world (should I bring this person in for an interview?) versus three pieces of information and, let's call it, a lot of intuition. Yeah. Which is better? I would contend neither. The simple truth is that we know there is a lot of human bias in the hiring process, and it's fairly easy to measure that bias. I'm willing to bet that some meaningful percentage of your viewers are shaking their heads and saying, oh, we're hearing about bias in hiring again. There's bias in everything. Like I said earlier, we're human; we have biases. I'm not saying you're a bad person. I'm just saying you're human.
Vivienne Ming 24:37
We have biases in favor of certain schools. We have biases in favor of the depth of a voice. We have biases, sometimes not so much about whether someone's a man or a woman, but about how feminine or masculine they are; turns out being a feminine-presenting man is actually really bad. So there are all of these subtle biases that play into our decisions, including on a resume. Turns out people really love an Ivy League school. That doesn't mean it's not predictive, but they love it way more than the value it actually predicts. So we've encoded simple rules, heuristics, to get by. AI doesn't do that. AI can crunch much broader numbers. But here's the problem with those broader numbers: because it is math, it needs to be referenced against something. It needs to be told what the right answer is. So I've got 400,000 numbers, but what's the equation? What am I trying to generate? What am I regressing against, or training a deep neural network on? Well, for Amazon, it was: will you get a promotion in your first year at Amazon? And now we can get at what the problem might be, which is: where do they get those right answers from? They get them from Amazon's hiring history. Well, we can all understand, and the same is true of any tech company, that Amazon's hiring history and its first-year promotion rates are wildly biased in favor of men.
Vivienne Ming 26:11
And so, strangely enough, when an AI is trained on that data, it becomes even more biased than the original recruiters who generated that data. We see the same thing, by the way, in health. Some very well-intentioned doctors put together a model to look for patients who should be brought back for a checkup after a heart scare, and they made a very reasonable assumption: the amount of money spent on a patient in the subsequent years is an indicator of how sick they were. So we really want to bring back, prophylactically, before they become sick again, the patients who were more sick: the ones with more money spent on them. So here's the terrible story. That model was implemented; it was used in practice. And then a paper came out saying: oh, by the way, much more money is spent, on average, on white patients versus Black patients for the exact same conditions. And there's a variety of reasons, which can even be purely regional; Black patients disproportionately, for example, live in portions of the South, where healthcare costs are lower than on the coasts. Just simple things like that. I am sure there were well-documented sources of more explicit bias as well. But even simple things like regionality mean more money is spent, on average, on white patients than Black patients for the exact same conditions, which meant white patients were brought back by this model at a much higher rate than Black patients. And so the authors of this paper went through and calculated that thousands of people died who wouldn't have, had the model been trained more equitably. No one intended for this model to have this bias. Nobody wanted these people to die. No one was a villain here.
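The proxy-label failure Ming describes can be sketched as a tiny toy simulation (the numbers and threshold here are invented for illustration; this is not the actual study's model or data): two groups are equally sick, but the recorded spending used as the training signal is systematically lower for one group, so a "model" that flags patients by spending misses the sick members of that group entirely.

```python
import random

random.seed(0)

# Toy data: two groups, A and B, with identical illness rates,
# but historically 40% less money is recorded as spent on group B
# for the exact same conditions (the biased proxy label).
patients = []
for group in ("A", "B"):
    for _ in range(1000):
        sick = random.random() < 0.3              # same illness rate in both groups
        base_cost = 100.0 if sick else 20.0
        cost = base_cost * (1.0 if group == "A" else 0.6)
        patients.append((group, sick, cost))

# "Model": flag anyone whose recorded spending exceeds a threshold,
# i.e. use spending as a stand-in for how sick the patient was.
THRESHOLD = 80.0

def recall(group):
    """Fraction of truly sick patients in a group that the model flags."""
    sick = [p for p in patients if p[0] == group and p[1]]
    flagged = [p for p in sick if p[2] > THRESHOLD]
    return len(flagged) / len(sick)

print(recall("A"))  # 1.0 -- every sick patient in group A is flagged
print(recall("B"))  # 0.0 -- no sick patient in group B is flagged
```

The model never sees group membership, yet the bias baked into the proxy label produces wildly unequal outcomes, which is exactly the dynamic in both the hiring and healthcare examples above.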
Vivienne Ming 28:10
But it is a recognition that just because it's an equation, just because there are lots of numbers, doesn't mean that it's right. All of those numbers have their own implicit bias behind them. So for me, in the end, it's a tool. Where do we get the best results, consistently, over and over again, including from a business perspective? We get them when we take creative people and sophisticated artificial intelligence and bring them together. Not when we substitute for people.
Cindy Moehring 28:45
Yeah, yeah, with models that have to be trained, right? I mean, you have to step back and take that time for the human to train it. I heard that in some of what you were saying, both in the hospital example and the hiring example. If you spend a little more time up front with humans monitoring it, if you will, modeling it, doing the training of the algorithm before it's released, then it could be a better tool. Might not be perfect, but it could be better, right?
Vivienne Ming 29:12
It may not be perfect, but then here is the other side of the story, which harkens back to my comments about education. So we built that tool. We built a tool that was transparent, which means it explains why it makes the hiring recommendations it does, or sourcing recommendations, in our case. It's very specific and identifies the particular things driving its recommendations: why this person is a fit for your company versus another company. It even pre-templated a personalized letter that you could send to build a relationship with the candidate. And almost nobody used any of that. When we did our user testing with our very fancy AI that was meant to make recruiters better, we found that they gave you five seconds. And when they opened up your profile in our system, they looked at your name, your school, and your last job. Not every recruiter, but the vast majority of them.
Cindy Moehring 30:16
So user adoption was a huge problem.
Vivienne Ming 30:19
Well, people used it for some of the reasons we embedded in there; you know, we made it very easy to discover people you may not have come across otherwise. And the funny thing is, that's why people bought our system. That was the whole reason. You know, if you're an elite company, sure, you already know everyone that's coming out of Stanford and Harvard, or Cal or Michigan. It's everyone that's not; that is the big question mark. And that's why people paid us a lot of money for our product.
Cindy Moehring 30:53
Vivienne Ming 30:55
But then, once the recruiters got into the system, yeah, they hired you because you went to Stanford or Harvard. And that may feel like a little bit of a heartbreak, but I come back to the fact that I own this. Why would they change the entire way they have learned to be a recruiter just because this thing comes out? Particularly in HR, which can be a very risk-averse part of the business world. Like, if you hire someone who 20 years from now becomes a senior executive, no one's going to come back and give you a bonus. But if you hire someone who on day one is a disaster, if you take a risk on someone who doesn't have credentials, then you are at real risk of losing your job. So that dynamic builds up. Nobody wants to learn this new tool; nobody wants to invest. And when I say nobody, I'm being too strong; there were people. So with all of these things, I'm saying we shouldn't substitute for people; we should make them better. You should not only be better when you're using a new technology, you should be better than where you started when you turn it off again. So when I build a product nowadays, or do my philanthropic work, or any of it, that is always my starting question: how do I make certain that this person is immediately better in some way that matters to them? And that even when they stop using it, they remain better at whatever it is they were trying to do. Which is a hard constraint. But again, it is those inevitable outcomes that are really meaningful. If I can't solve that problem, I don't know that I've truly solved anything.
Cindy Moehring 32:44
Hmm. Interesting. So let's riff on that for a minute. Let's take that idea you just described, that I should be better even when I actually turn off that tool and stop using it, right? And so, promoting inclusion and getting people to think more open-mindedly: there are all kinds of studies showing the benefits of that, right? And then we go into this COVID world where a lot of people, not everybody, but a lot of people, have been working at home for the last two years and counting. So it would seem to me that that could potentially throw a big wrench into a company's ability to follow through on their desire to promote inclusion for everyone, when everyone's working at home and sort of siloed. I think you may have done some research in this area, so I'm really interested in hearing your thoughts on that. How have companies dealt with that issue? And what did your research show about how to deal with it?
Cindy Moehring 33:49
Do you work better when you turn off your Zoom?
Vivienne Ming 33:52
Exactly. You know, I have this little organization here called Socos Labs, and it works as follows: people bring me a problem, or sometimes we come up with one internally, and if I think my team and I can make a unique difference on that problem, then I pay for everything and we try to come up with a solution. And if we succeed, we give it away. Those problems often come to me in the form of emails, like, "Dr. Ming, my daughter has 500 seizures a day," or "My son can't enter REM sleep," which doesn't sound like a huge issue, but it's astonishing that that kid was still alive. Or in this case, Facebook and Amazon separately reached out to me and said, "We've never had to do inclusion or innovation with everyone at home before. We've never even had everyone at home, much less these two specific things. What do we do?" And obviously I got these messages in, like, March of 2020. So you get the sense I'm a terrible entrepreneur. I mean, most of my time is spent running an organization where people give me their problems and I charge them nothing for the answers. But in this particular case, it seemed like a problem worth solving, because everyone in the world was experiencing the same thing. And I thought what I'd be able to do was just leaf through a bunch of existing research and offer some insights. There was nothing. You know, there's one notable case of a Chinese company, a large one, that went entirely remote. But essentially, there was no substantial data on remote work in general, much less on these two questions, innovation and inclusion.
Vivienne Ming 35:45
So in the end, we just had to break some new ground. We put some existing research together, and then we ran some experiments with some of our partners. And in a way, as a mad scientist, remote work is a fascinating experiment, because unlike in the office, I get to see everything you're doing. Every time you speak to someone else, there's a record of how long it was, and often of what it was you were talking about. You know, in the COVID world there's contact tracing; well, actually, researchers have been doing contact tracing to reconstruct social networks, in-person social networks, inside schools and companies for a while, and I've had the chance to do this with some big companies as well. But here, it's super easy. So we're able to see all of these interactions. Let's start with innovation; but trust me, inclusion turns out to be fundamentally related to it. On the innovation side: first off, there's an idea that you can't innovate if you're not hanging out at the watercooler together. We found that there's no special privilege to smashing everyone together on a university campus or a corporate campus or anything else. In fact, in some ways, that can be a drag on the system.
Vivienne Ming 37:09
What we found predicted the greatest innovation was small teams, with flat hierarchies within those teams. So even though there might be a senior vice president and a new hire, or a member of the National Academy of Sciences and an undergrad student, when they're actually collaborating those hierarchies disappear. Because otherwise, what's the point of having the student there?
Vivienne Ming 37:38
If they're afraid to say something, then they're not adding value.
Vivienne Ming 37:42
So flat hierarchies, on small teams. And a lot of fascinating work with technology tools: collaborative documents, wikis, things like Google Docs. Not so much chat; it turns out chat doesn't seem to improve innovation in any way. It's actually the sharing, like having a centralized shared mind.
Vivienne Ming 38:15
And so we see these flat hierarchies, small teams, integrated, and they have complementary diversity, which is a bit of a term to unpack. This is a well-known thing in collective intelligence in general. Two of the biggest predictors of collective intelligence at any scale: the first is how diverse that population is. For example, slightly more women than men is universally a strong predictor of collective intelligence, from a small team to a whole community. And the other core ingredient is psychological safety, which I'm going to nuance a little bit and give my own specific definition: it is that I feel free to pitch a crazy idea knowing that even if I'm wrong, it's okay. I'm not out of the team. I'm not lower in people's eyes. That is a huge predictor of collective intelligence, because the whole point of being different is that chance to bring a different idea. And when you upset that, you realize the inevitable result: most of us are going to be wrong most of the time.
Vivienne Ming 39:34
And it's what we collectively arrive at that is truly important. So these are the elements of complementary diversity: we are different in certain ways, but we have shared qualities that build trust. Small teams, flat hierarchies. It turns out those exact same things, when we're tracking everyone on Zoom and Google Meet and Microsoft Teams or whatever other craziness people are making use of, all of those same things become predictors of inclusion. And it's a little complicated operationalizing it, but let's take one measure that I've worked with for a while: do people with similar work histories have the same probability of promotion? Then you can break that out by, say, race or gender. So for example, with gender, almost immediately: women who are rated higher in performance are also systematically rated lower in potential, and less likely to get promoted into management and executive positions.
Vivienne Ming 40:47
And there's a lot of nuance there, but it's a really replicated finding, again and again, and it's meaningful. It's not that women never get promoted; it's not that kind of a glass ceiling. But it's a substantial kind of drag. It accounts for about 40% of the difference in promotion rates between men and women.
Cindy Moehring 41:09
Yeah, it's "she's really good at what she's doing, so why would we not want her to be able to keep doing that?"
Vivienne Ming 41:15
Yeah, absolutely. So we see these kinds of things creep in. And that becomes a pretty reasonably objective measure: if we're looking at people's work histories and we're seeing very different promotion rates despite very similar work histories, something weird is going on in the system. So it turns out the organizational structure that best predicts similar promotion rates is small teams, flat hierarchies, complementary diversity, but with one additional and fairly important element, which then opened a whole new can of worms for us: equitable time on camera. And this is interesting; it's true of us in the office, subtly, but it is a big effect on camera. The amount of time that your face is on the screen in front of me, not that you're in the Hollywood Squares background, but that you're the principal speaker I'm paying attention to, count the minutes. All things being equal, that becomes a significant predictor of work opportunities and promotion rates, again controlling for work history and job performance. People that get more face time are getting more promotions.
Cindy Moehring 42:44
Managers need to really think about that then. Because if you have a small team and the makeup of the team isn't diverse, then you're not going to have a diverse, you know, face on the camera either. So it starts with having the right members of the small team, but then making sure that everyone gets almost equal time, if you will, that everyone is seen, truly seen, and heard for the same amount of time. That's really interesting.
Vivienne Ming 43:09
Yeah. And, you know, I'm saying this isn't, again, an explicit form of bias. This is, in fact, a fairly low-level phenomenon happening in our brain. It's really a brain hack, a bad kind of brain hack.
Vivienne Ming 43:22
We're just sort of accumulating minutes, and our brain starts kind of upvoting people just because of those accumulated minutes. But then it interacts, because we're much more likely to spend time with people that are similar to us. And yeah, similar in race and similar in gender. It turns out, though, and again this is a pretty low-level human phenomenon, similar in smell, similar in terms of the intestinal flora. I mean, you may be amazed along how many different dimensions we sort of lazily prefer similarity, even though it doesn't actually present any business advantage. I just told you, diversity actually improves collective intelligence when it's paired with trust. So this is the can of worms that opened up for me. I'm a neuroscientist. Why is it, when for decades now we've been talking about, for example, the value of diversity, and diversity is so clearly, so replicably, from boardrooms to lab experiments, such a component of innovation, why does the bias persist? So I wrote about something called the Diversity-Innovation Paradox. It's very easy to demonstrate that as you increase the diversity of a community, innovation improves. Some colleagues of mine at Stanford, computational linguists, made an AI that analyzed every single dissertation written in every field since 1977. It may be the only entity in the entire world that read my dissertation, because I sure as hell didn't. And it found that the more of an outlier you were in your field, the more innovative your research was, the more novel the scientific impact. But you were also less cited, and less likely to get subsequently promoted. So we have these deep biases that have persisted and persisted. And I really wanted to know what was going on. So I started looking at the neuroscience of trust. And this is something I think a lot of managers need to think hard about.
Vivienne Ming 45:23
What we found was, if someone's very similar to me, I have these circuits involving frontal networks and reward centers in the brain, like the nucleus accumbens and the dopaminergic system, this complex network. And when someone is very similar to me, not only does that network essentially activate with no real effort, what we would call automatized, I actually get a little reward, a little bit of dopamine, a little bit of endogenous opioid. Trusting someone that's similar to me is easy, and it even makes me feel good. As people get progressively more different, different culturally, different in race, different in socioeconomic status, and we can track all of these differences and it holds true, the rewards disappear. And the effort, the effort to deploy that same level of trust to an otherwise similar person, goes up. And those frontal circuits that are supplying that effort are the same ones you're using to get your job done. The same ones that are firing because you're trying to hit a deadline, the same ones that are under pressure because the board needs you to deliver. So you're competing for the exact same resources to do the thing you're trying to do and to recruit the people to help you do it. Well, now you begin to see why our problems persist. If I'm putting all this effort into my job, as I should, and it pulls away the resources I need to see someone for who they truly are, then I am going to essentially undertrust, I'm going to undervalue, people that could actually provide the value I need on my teams. And I think until we can see that and work hard to overcome it, we're going to continue to put together teams which are easy. They'll deliver, they'll get there. But you'll actually miss out on all of the additional value that could have been created.
Cindy Moehring 47:51
Yeah. Wow. That's an incredible piece of research. If somebody wanted to go find and read about that a little bit more, Vivienne, where would that be?
Vivienne Ming 48:01
So we're putting together some free technical versions of this. But if you want to have some fun, go to Socos.org and you'll find our little newsletter there, where we write about all this stuff. In particular, this was in a piece appropriately titled "Remote Work". My dirty secret is that all these things I write start out as, I'm going to write this little 600-word essay, and then it's 100,000 words and multiple chapters later. I wrote one called "This Is Not the Industrial Revolution" about artificial intelligence and the future of work, and that line just came from something I said to a jerk on the stage at the Milken conference once. And this one similarly, so it ended up being like nine chapters long, and there are chapters on innovation and inclusion, which quite frankly are true regardless of whether we're talking about remote work or not. There's this idea of the Diversity-Innovation Paradox, what we call distributed innovation, how you get the right people to meet one another and create those sparks, and specifically the neuroscience of trust. If you can stand the occasional dirty joke, cuz I just can't help myself, and the story of how we got there with this work, then you can visit the site, find the piece on remote work, and see some of the other ones. And yeah, I mean, I enjoy the things I get to do. I must, cuz I pay a lot of money to do it.
Cindy Moehring 49:46
Yeah, you do! And it's incredible, and you bring so much value to society and are leaving an incredible legacy. It's just fabulous. So I've got to ask you one last question; you've been so generous with your time, and I really appreciate it. I know the audience does, too; you're putting out an incredible amount of great work. But when you need to turn to other resources for inspiration, to learn a little bit more and dig a little bit deeper, where do you go? What could you recommend to the audience, from the expert, as to, I don't know, podcasts or documentaries, or books or white papers? Where do you go for inspiration?
Vivienne Ming 50:25
So I go to a few different places. But I'll say this: the first thing I do is I go to the problem, as a naive person, recognizing I don't know the solution. I start with the problem. So, the Make-A-Wish Foundation. We once did a little collaboration with them, which evolved into, you know, could we build a little system that helped give a nudge to their wish granters, such that when that kid says, "I want to go to Disneyland," the wish granter could say, "Well, what if you brought your three best friends with you? We'll pay for that, too." And the reason for the nudge is because it literally increases the survival rate of the kid. So can we nudge the wish to increase survival rates? That is a perfect example of where I start on every project, which isn't the data, because they didn't have any. And it isn't so much the background, although there are a couple of existing papers saying that granting the wish does improve survival rates for these kids. It was watching the wish granter go out and ring doorbells. Think about the two places where I've already admitted failure, the teachers and the recruiters. The problem was not that I didn't have some cool-ass AI that could save the world in some hypothetical way. The problem was, I built something that they didn't want to use. And so until you actually go out and observe the world, the actual human problem, not the data problem...
Vivienne Ming 51:23
...you don't understand what it is you're solving. Once you've done that, where do I go? I read blogs of people I disagree with. Now, I don't mean people that are provocateurs, just, you know, speaking nonsense. But I read conservative economists whose views of the world are different from mine. Remember when I said science is a story? If my story doesn't include their research, then my story isn't right. So what story explains my research and their research at the same time? I never get to just say no. Obviously, I'm talking about economists and scientists, and that may not be the thing for everyone else, but follow a couple of people on a blog or Twitter who are reasonable but on the opposite side of an issue from you.
Vivienne Ming 51:19
Because you'll get a very different view of, and real insights into, problems once you confront that. And after that, oh gosh, I mean, I read a lot of science and scholarship in general. I read a lot of economics papers; the National Bureau of Economic Research puts out tons of working papers, and you can find so many fascinating topics covered in the economics world. Obviously, I still read a lot about neuroscience, because I sit on the boards of a couple of different neurotech companies out there trying to build cyborgs, although in our case Alzheimer's and cerebral palsy are the things we're trying to address. So essentially, I just keep my eyes open for new problems.
Cindy Moehring 52:50
For new problems. I love that advice. Look, understand the problem, the human problem that you're actually trying to solve, and then open your mind and your perspective to differing points of view, so that you can make sure that your solution to that well-defined human problem actually is going to be received well.
Vivienne Ming 54:16
And let me be entirely clear, I am always right about everything. It's a curse, but I somehow bear this burden. So I'm not saying I just accept everything everyone says, or can't we all get along. It's okay to disagree with people, but you have to understand why you disagree with them. And occasionally, not that I would ever admit it, they're actually right about something, and I need to update the way I'm thinking about the world to explain why I didn't understand this problem appropriately. Because I think the number one reason big new ideas fail, whether it's inventing or science or, you know, business in general, is you didn't actually understand the problem you were trying to solve. You thought you did. And generally speaking, it's because you thought it was a simpler problem than it actually is. My story is, no matter how complex you think the brain is, it's more complex than that. And as far as I can tell, this generalizes to anything involving human beings at all. So just get comfortable with the problems being messy, with the fact that maybe there aren't perfect solutions to problems. The question is, can you make a positive difference, even if it's an imperfect solution? Sometimes that's all you have available to you.
Cindy Moehring 55:52
Right, right. I love it. We're going to end it there, Vivienne. Thank you so, so much. This has been an enlightening and uplifting conversation about the responsible and ethical use of AI. Yes, we talked about some problems that can be created, but there's a lot of good that can be done in the world too, when it's done in the right way, when it's married together with humans and you think through it in the right way. And remember that it's a tool. As you said, we'll end where we started: AI is a tool.
Vivienne Ming 56:19
Absolutely. Let me just say, I wouldn't spend so many hours of every day working on this stuff if I didn't think AI could be a force for good in the world.
Vivienne Ming 56:33
We end up with problems when we think that it's going to solve our problems for us. And in general, I would generalize that to: if you think someone else is going to solve your problems for you. Whenever someone comes to me and says, "Dr. Ming, you know, what are we going to do about X?" I immediately think, well, apparently nothing, because you think someone like me is gonna walk along and magically solve that problem for everyone else. We make progress when we all push towards something that's going to change the world in a positive way. This is not a world of superheroes. It's a world where we make effortful, incremental progress. And that makes the world better.
Cindy Moehring 57:28
Yeah. Well said. Thank you so, so much for your time and for sharing your thoughts, wisdom and knowledge with us. We really, really appreciate it. Thank you.
Vivienne Ming 57:39
It was a blast.
Cindy Moehring 57:40
It was. All right, bye bye.

Thanks for listening to today's episode of TheBIS, The Business Integrity School. You can find us on YouTube, Google, SoundCloud, iTunes or wherever you find your podcasts. Be sure to subscribe and rate us, and you can find us by searching TheBIS. That's one word, T-H-E-B-I-S, which stands for The Business Integrity School. Tune in next time for more practical tips from a pro.