How Human Is Too Human? How Human-like AI Affects Our Sense of Humanness

[Image: Man looking at an AI version of himself]
January 21, 2022  |  By Emilija Sarma, Mary Lacity

If you own a digital assistant such as a Google Home Assistant device or Amazon’s Alexa, you may have toyed around with asking it questions or making requests that elicit funny responses. If you ask Google Home Assistant if it believes in love, it may respond with “I'd love to find love, but I don't know what to search for.” You know someone programmed the device software to respond in this specific way, and, of course, you know that Google Home really has no feelings on the matter whatsoever. Still, it’s fun to pretend that this digital assistant actually has a personality.

If you assumed it did, you wouldn’t be far off. A team of developers crafts the “personality” of Google Home Assistant and its counterparts. Digital assistants are designed “to appear to have unique personalities that express emotions and display behavioral quirks.” Developers aim to create a seamless illusion that your digital assistant is like you in some way, making it more user-friendly and fun to interact with. But what are the consequences of playing pretend with our devices?

In their essay “‘Can Computer Based Human-Likeness Endanger Humanness?’ – A Philosophical and Ethical Perspective on Digital Assistants Expressing Feelings They Can’t Have,” published in the journal Information Systems Frontiers, researchers Jaana Porra, Mary Lacity, and Michael S. Parks ask: “How human should computer-based human-likeness appear?”

The authors’ main concern is that developers are attributing feelings to machines in their push to create human-like artificial intelligence (AI) with little research into, or consideration of, how this will affect human evolution. While scientists are excited by the challenge of creating machines with “increasingly human qualities,” the authors insist that there is real cause for concern about what this might mean for us as a species.

Conversations about the impact of machines expressing human emotions are inevitable as AI develops and evolves, but for now there is a troubling lack of research in this area. The future of human-like machines is already on our doorstep. The International Federation of Robotics reported that there were “more than 23.2 million units [of personal and domestic service robots] sold in 2019,” with sales “of both professional and personal service robots” projected to grow exponentially in the coming years. Porra et al. posit that the next frontier of the robotics industry is “to perfect androids, artificial systems with a design goal to become indistinguishable from human appearance and behavior in their ability to sustain natural relationships.”

Machines That Can’t Feel, but Say They Can

You may have heard Siri, Alexa or Google Home Assistant say “I’m sorry…” when a request can’t be fulfilled, such as when you ask the device a question it can’t answer. Whenever this interaction happens, we know that none of these devices really feels the regret it expresses, but they are still built to mimic human emotion by delivering an expressive speech act. Expressive speech acts “are characterized by statements that sincerely express a psychological belief about the person’s subjective world of thoughts and emotions,” and so they apply only to human beings. Since machines “don’t have feelings or genuine psychological states,” they can’t produce expressive speech acts in earnest.

The authors apply speech act theory to the AI debate to consider the ethical ramifications of designing machines that use expressive speech acts, which are uniquely human. Porra et al. argue that scientists should be cautious when creating “human-like designs” that express emotions they cannot have, because “machines that routinely express emotions” may ultimately “endanger humanness.”

What Does It Mean To Be Human Among Other Humans?

“What makes us human?” is a complex question with many answers, but none of them are clear-cut or definite. There is much that we don’t know about ourselves, but what we do know is that human beings share “humanness,” which is an “evolutionary phenomenon.” Porra et al. explain that “humanness evolves when we bond with other humans through feelings,” and it “requires feeling together” over long stretches of time in shared spaces and contexts or “human colonies.” Human colonies are formed as “humans bond systematically, automatically and subconsciously with other humans,” and this interaction results in our shared humanness—it binds us to one another and helps us survive.

This theory is “based on a vertical evolutionary perspective,” meaning that human beings have developed their collective humanness together over the course of a very long time. Porra et al. are concerned that, in contrast, the “current efforts to create digital human-likeness” are based on “horizontal evolution—classification and categorization of things such as personality traits, characteristics and behaviors that are commonly assumed to be universal (at least within class) and independent from time and context.”

Human beings and their collective humanness are tied to social and historical contexts—we grow and evolve as we experience life and bond with other people. Although machines are not able to grow in the same way, they are still increasingly being designed to serve humans as emotional companions that mimic humanness, which they will never truly achieve. Porra et al. argue that although we don’t have a clear answer for what is “life” or “self,” we do know that humanness “only exists in our biological bodies”—something that’s fundamentally unattainable for machines.

The authors worry that when digital assistants begin to replace human beings as companions “that routinely express human emotions without really feeling them,” and when humans spend an increasing amount of time with these machines instead of people, “our humanness may be endangered.” People seem to form bonds with machines “as easily as if these were other human beings,” which raises important questions about how this may affect our evolution as a species that has thus far only experienced bonding with other human beings in human colonies. It is worth asking what our world will look like in a future full of digital companions.

Machines as Life Companions

According to Porra, people’s choices regarding their “life companions,” whether “living beings or machines,” can significantly impact “the fundamental characteristics of our species and alter humanity’s future over generations in largely unknown ways.” One example cited in the article is robots used in Japanese elderly-care facilities to mitigate loneliness and assist dementia patients by providing human-like interaction. Although these robots fulfill a useful function, they also carry the risk of unknown long-term consequences.

Although digital assistants can legitimately produce several types of speech acts, such as “meaningful assertives, directives, commissives, and declaratives,” the authors argue that there is no need to create digital assistants that “express a psychological state” through expressive speech acts, since this can be considered a “misuse of language.” A digital assistant shouldn’t be able to tell us it’s happy or sad or sorry, because it can’t experience any of these emotions. Porra et al. state that no matter how much human-likeness we strive to create in digital assistants, they can never replicate genuine humanness, because humanness is bound to “the physical characteristics of our [human] body and the self-awareness that only occur in our biology.”

Losing Humanness Through Interacting With Machines That Feign Emotions

The article cautions that bonding with machines that mimic humanness instead of with actual humans threatens to “erode emotional bonds” between us, which in turn would hinder our “ability to form collectives.” Since cooperating and sharing in our “every-day lives together” with other people “has had extraordinary survival value” over the course of human history, the authors call for careful consideration of the consequences that introducing digital assistants as emotional companions may have on our species. Porra et al. fear that allowing machines to replace our interactions with other people “will eventually mold genuine humanness into being more like our machine companions.”

To preserve and protect our humanness, the authors suggest that “machines carry a warning” for the user that no digital companion should ever “be a substitute for healthy human relationships.” Moreover, machines should also warn users that “the long-term impact of living with computers that express feelings they can’t have is unknown.” People should also be made aware of every interaction that they have with a machine—it should “reveal itself in the beginning of every human encounter” and offer the user an option to turn off the machine’s “emotional expressions.” This is particularly important when considering the interaction between machines and children, and the authors stress that children need to be taught that “the purpose of technology is to serve.”
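
To make these recommendations concrete, here is a minimal sketch of how a developer might wire them into an assistant’s response loop: disclosure at the start of every encounter, the suggested warning, and a user-controlled switch for emotional expressions that defaults to off. Every name in the sketch (EthicalAssistant, emotional_expressions, look_up) is hypothetical; it illustrates the authors’ recommendations rather than any real assistant API.

```python
# Hypothetical sketch only: none of these names come from the article
# or from any real assistant platform.

DISCLOSURE = "Notice: you are talking to a machine, not a person."
WARNING = ("No digital companion is a substitute for healthy human "
           "relationships. The long-term impact of living with computers "
           "that express feelings they can't have is unknown.")

class EthicalAssistant:
    def __init__(self, emotional_expressions: bool = False):
        # Emotional expressions are off by default; the user may opt in.
        self.emotional_expressions = emotional_expressions
        self._disclosed = False

    def respond(self, request: str) -> str:
        parts = []
        if not self._disclosed:
            # Reveal the machine at the beginning of every encounter.
            parts.extend([DISCLOSURE, WARNING])
            self._disclosed = True
        answer = self.look_up(request)  # stubbed retrieval step
        if answer is not None:
            parts.append(answer)  # an assertive: a plain statement of fact
        elif self.emotional_expressions:
            # An expressive speech act, feigning a feeling the machine lacks.
            parts.append("I'm sorry, I can't help with that.")
        else:
            # An assertive alternative that states the same outcome plainly.
            parts.append("That request cannot be fulfilled.")
        return "\n".join(parts)

    def look_up(self, request: str):
        return None  # stub: a real assistant would query its backend here


assistant = EthicalAssistant()  # emotional expressions stay off
print(assistant.respond("Do you believe in love?"))
```

Note how the fallback line swaps the expressive “I’m sorry…” for a plain assertive, the kind of speech act the authors say machines can legitimately produce.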

If there is any chance of digital human-likeness posing a danger to our species in the long run, the authors urge that “we should put forward our best effort to better understand what we are committing ourselves to with increasingly relying on human-likeness instead of genuine humanness.” Since we do not yet know the long-term effects of constant interaction with “human-like machines,” developers should proceed with caution. The AI debate should “increasingly be about ethical concerns,” and we should be asking ourselves: “How human do computers need to appear to fulfill their purpose?”

Post Researcher/Author:

Mary C. Lacity is Walton Professor of Information Systems and Director of the Blockchain Center of Excellence. She was previously Curators’ Distinguished Professor at the University of Missouri-St. Louis. She has held visiting positions at MIT, the London School of Economics, Washington University and Oxford University. She is a Certified Outsourcing Professional® and Senior Editor for MIS Quarterly Executive.

Emīlija Sarma is a Fulbright Scholar from Latvia and a PhD candidate in Comparative Literature and Cultural Studies (CLCS) at the University of Arkansas. Emīlija is the recipient of the CLCS Doctoral Dissertation Fellowship and the Vance and Mary Celestia Parler Randolph Fellowship in English. Her research focuses on gender studies, intersectional feminism, pop culture, and young adult literature. Emīlija is passionate about international exchange in higher education and diversity, equity, and inclusion initiatives. She has previously taught composition and technical writing for the English department at the U of A and worked for International Recruitment before transitioning to the role of Program Coordinator for the Fulbright Foreign Language Teaching Assistant Orientation online. She currently works as the MFA program assistant at the U of A School of Art.