
Our emotions, knowledge, and beliefs affect how we perceive and interact with the world, as well as how we consider the future. As more people connect online, these perceptions are shaped by individuals and groups we may identify with yet never meet.
Ideological groups—comprising individuals united by shared beliefs, values, and goals—leverage social media platforms, particularly Twitter (now known as X), for various purposes. These groups fulfill members’ psychological needs for meaning, self-esteem, social identity, and certainty by shaping their interpretations and responses to events.
The research, “Us vs. them: moral, cognitive and affective language in group identity tweets” by Walton College’s Shaila Miranda, along with Ares Boira Lopez, Joseph W. Stewart, Hairong Song, Marina Mery, Divya Patel, Cecelia Gordon, Bachazile Sikhondze, Shane Connelly, and Matthew Jenson, examines how nuanced affective, cognitive, and moral language in group identity tweets differs between violent and non-violent ideological groups, as well as between left- and right-leaning groups. In doing so, it offers insight into how these groups construct social identities, mobilize support, and legitimize their causes.
The Digital Divide
Through the echo chamber that is social media, ideological groups either promote radicalization, hate, and violence, or encourage community-building for prosocial goals like peace, justice, and human rights. Both approaches extend beyond direct followers, influencing the broader public and targeting the young and vulnerable.
Ideological groups leverage Twitter to cultivate strong group identity, sustain membership, and foster intergroup hostility, regardless of political affiliation or violent disposition. To do so, they adopt an "Us vs. Them" mentality, using first- and third-person plural pronouns (e.g., "we"/"us" and "they"/"them") and contrasting language to separate in-groups from out-groups. This division breeds prejudice and aggression toward outsiders while intensifying group affiliation.
Understanding this mentality, the dynamics of extremism, and the role of social media in fostering group cohesion and ideological division requires examining the differences in affective, cognitive, and moral language use and how ideological groups utilize them. More importantly, it requires gaining insight into the effect that specific language use has on online audiences and group members.
Affective language targets emotion, cognitive language targets understanding, and moral language targets judgment and obligation. These language types are powerful tools to strengthen social identity and foster group cohesion, regardless of whether they are used in violent, non-violent, left-, or right-leaning groups. Each ideological group, with its nuanced purpose, discourse, and mission, uses them differently and for various outcomes.
Violent Groups:
- Tend to use more discrepancy language in identity tweets, driven by existential threat and the desire to protect or further their worldview by force.
- May employ affective language to evoke anger or fear, cognitive language to justify violent actions for self-defense or cultural preservation, and moral language to justify violence as an in-group protective imperative.
Non-Violent Groups:
- Tend to emphasize trust, positive emotions, ideological clarity, and the promotion of justice and equality in tweets.
- May use less aggressive affective language, focusing instead on injustice or exclusion; cognitive language to promote their worldview; and moral language to emphasize the individualizing foundations of care, fairness, and justice.
Left-Leaning Groups:
- Tend to focus on care (virtue and vice) in tweets.
- May leverage intense affective language to rally disadvantaged populations against perceived oppression and inequality, expressing solidarity and resistance.
- Might use cognitive language to emphasize causality, insight, and explanations for societal issues, and moral language to stress the individualizing foundations of care and fairness to justify activism and critique existing power structures.
Right-Leaning Groups:
- Tend to use sanctity (virtue) more in tweets, focusing on loyalty, authority, and tradition.
- Typically apply affective language to defend traditional values, national identity, and social order; cognitive language to emphasize the causes and consequences of perceived societal threats, such as the erosion of national identity or cultural heritage; and moral language to support their cornerstone commitments of preserving traditional values and resisting perceived threats to societal stability.
Emotions, Reasoning, and Judgments as Tools
Building on this prior understanding of how language use differs across ideological groups, the researchers ask how affective, cognitive, and moral language in group identity tweets differs between violent and non-violent groups, as well as between left- and right-leaning groups.
Affective Language
Affective language expresses or elicits emotion, evoking responses like anger, fear, or trust. It amplifies engagement, reinforces in-group cohesion, and prompts emotional reactions toward out-groups, which is why ideological groups use it to generate affective commitment.
Researchers evaluated anticipation, disgust, fear, joy, sadness, surprise, trust, anger, general negative affect, and general positive affect. They found that non-violent groups express more trust and positive affect than violent groups in their group identity tweets, consistent with their communicative emphasis on hope and positive emotions. Political ideology, however, does not significantly predict affective language use; ideological groups, regardless of their place on the political spectrum, convey mostly positive feelings in group identity tweets.
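To make the measurement concrete, here is a minimal sketch of the dictionary-based scoring this kind of analysis relies on. The category word lists below are illustrative placeholders, not the lexicon the researchers used; the point is simply that a tweet's tokens are matched against per-category word lists and the match counts are normalized by tweet length.

```python
# Minimal sketch of dictionary-based affect scoring (illustrative only).
# The word lists below are made-up placeholders, not the study's lexicon.
from collections import Counter
import re

AFFECT_LEXICON = {
    "trust": {"ally", "support", "loyal", "faith", "together"},
    "fear": {"threat", "danger", "afraid", "attack", "risk"},
    "anger": {"outrage", "enemy", "betray", "fight", "hate"},
    "positive": {"hope", "proud", "win", "love", "strong"},
    "negative": {"loss", "corrupt", "fail", "ruin", "weak"},
}

def affect_scores(tweet: str) -> dict:
    """Return the share of a tweet's tokens matching each affect category."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    if not tokens:
        return {category: 0.0 for category in AFFECT_LEXICON}
    counts = Counter(tokens)
    return {
        category: sum(counts[word] for word in words) / len(tokens)
        for category, words in AFFECT_LEXICON.items()
    }

# Example: trust, fear, and positive words each make up a small share of the tweet.
print(affect_scores("We stand together, proud and strong, against this threat."))
```

Published work in this area typically uses validated lexicons and compares scores across many tweets per group, but the mechanics reduce to this match-and-normalize step.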
Cognitive Language
Cognitive language employs terms indicating reasoning, explanation, uncertainty, and causal attribution. It helps groups structure or manipulate event narratives to fit their worldview, often by attributing blame, highlighting uncertainty, or contrasting alternative viewpoints. Ultimately, groups strategically use this language to shape cause-and-effect narratives. To understand its use, researchers studied insight, cause, discrepancy, tentativeness, certainty, and differentiation.
Violence classification significantly predicted discrepancy use: violent groups employ "hedging" language, such as "would," "could," "can," or "want," more than non-violent groups in their tweets. Here again, political ideology does not significantly predict cognitive language use.
Discrepancy cues often signal aspiration, dissatisfaction, or a call to action, aligning with violent groups' rhetoric of highlighting perceived injustices or inadequacies. These groups likely use such language to envision a revolutionized future, address grievances, reinforce calls for radical change, or justify violence as "bridging the gap" between reality and an envisioned ideal. Ultimately, discrepancy language strengthens narratives, galvanizes followers, creates a heightened sense of urgency, and sets violent groups' strategies apart from those of non-violent groups.
Moral Language
Moral language frames actions, events, or groups based on right and wrong, or justice and injustice. It applies moral values to legitimize group actions and condemn out-groups. In other words, ideological groups tend to frame issues, behaviors, and beliefs as virtuous or corrupt. Researchers investigated moral language using care virtue and vice, fairness virtue and vice, loyalty virtue and vice, authority virtue and vice, and sanctity virtue and vice.
First, no significant difference in moral language use emerged based on a group's propensity for violence. Political ideology, however, did predict moral language use, particularly care virtue, care vice, and sanctity virtue. Left-leaning groups scored higher on care, invoking virtues like compassion and vices like neglect more than right-leaning groups, which scored higher on sanctity virtues like purity.
Thus, left-leaning groups, grounded in the moral foundations of care and fairness, emphasize compassion, inclusivity, and addressing inequalities. They foster in-group solidarity and drive transformative agendas focused on systemic change and protecting marginalized groups. By contrast, right-leaning groups typically prioritize purity, tradition, and moral order, working to preserve cultural, religious, and societal norms through protective discourse. They tend to frame issues as defending sacred ideals from perceived threats, reinforcing in-group cohesion, a shared sense of moral duty, and reverence for tradition while tapping into steadfast emotional and moral convictions.
Critical Media Usage
To summarize, there are no significant differences in negative affective language use, and minimal differences in cognitive language use, between violent and non-violent ideological groups' identity tweets. And while the propensity for violence does not predict moral language use, political ideology does. Overall, violent and non-violent groups leverage similar levels of negative language when appealing to social identities, contrary to previous research claims, though they do differ in their general website and social media communications.
Social psychology research on intergroup conflict has found that individuals tend to view their own group as more honorable and knowledgeable than other groups. Such comparisons can lead to dehumanization of and aggression toward members of other groups, which has contributed to a range of negative outcomes, including discrimination, violence, and war.
Identifying how each of these ideological groups uses language advances our understanding of how they construct and communicate group identity online. In turn, we can better protect vulnerable online users from extreme or biased information and the perceptions it fosters, especially because some ideological groups exploit psychological and social needs to shape identities and strategically deploy language to cultivate membership and mobilize support.
Recognizing distinct messaging patterns allows practitioners to develop targeted counter-messaging strategies and interventions that disrupt group narratives, mitigate polarization, and prevent radicalization. Language patterns may serve as a diagnostic tool for identifying groups with a greater inclination toward violence, while counter-messaging can be tailored to the moral foundations that resonate with different groups, potentially reducing the appeal of extremist ideologies. That said, online users, whether posting or viewing messages, should consume content critically and with media literacy skills, recognizing the subtle ways groups use language to shape group identity and perceptions.
This material is based on work supported by the U.S. Department of Homeland Security under a grant from the National Counterterrorism Innovation, Technology, and Education Center (NCITE) [grant number 10557670]. The views and conclusions in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of the U.S. Department of Homeland Security.