ChatGPT became the fastest-growing app in history when it surpassed 100 million users in January 2023. It’s a mind-boggling number, considering it was released to the masses a mere two months prior, in November 2022. Threads later snagged that title in July 2023, reaching 100 million users in just five days. The sheer speed at which technologies like artificial intelligence (AI) and social media are developing highlights a crucial question: Are we creating technologies that not only advance humanity’s capabilities but also serve a higher good?
Companies around the globe are constantly developing new technology. It’s so easy to get caught up in the excitement of "What can we do?" that many forget to pause and ask, "What should we do?" This happens for many reasons, including the rush to beat competitors to market and a status quo in which stakeholders leave societal impact out of their value frameworks. But failing to evaluate and consider technology's effects on society can inadvertently worsen societal ills or even open the door to new ones. And if people feel technology is not serving their best interests, they will start to lose trust in it.
This is the critical challenge that Walton College Associate Professor Amber Young, Assistant Professor Tamara Roth, and colleagues Yaping Zhu (Middle Tennessee State University), Alan R. Hevner (University of South Florida), and Syed Shuva (University of North Carolina) tackle in their latest study, "Ethical design through grounding and evaluation: The EDGE method for designing information systems for social impact." Their thought-provoking research helps developers think about the ethical side of things early on, ensuring that new technologies are used to build a better, fairer, and more sustainable world for everyone.
Traditional technology system design often prioritizes operational capabilities and speed to market over ethical considerations, a pattern that has intensified with the push for quicker AI development. The story has played out across many a headline: a new product hits the market, ethical and social challenges arise, and businesses and PR teams are left scrambling to address them. A more strategic approach would clearly benefit everyone. This is where the EDGE method comes into play: it offers a way to slow down and thoroughly plan the design and implementation process. Rather than relying on luck, teams can proactively identify issues early on and develop technology that enhances lives rather than causing legal and logistical headaches.
The Tokyo Drift pace at which companies are currently innovating can lead to unintended consequences that impact millions of users before potential harms are recognized. This was dramatically illustrated in 2023 with the surge of AI-generated songs using unauthorized voice clones of artists like Drake, The Weeknd, and Taylor Swift. The viral "Heart on My Sleeve" track, which used AI to mimic Drake and The Weeknd's voices without permission, accumulated millions of streams before being removed from platforms. The episode highlighted how AI voice synthesis rapidly outpaced both legal and ethical frameworks around artists' right to control their vocal likeness.
The music industry was unprepared for the rapid evolution and ease of use of AI voice-cloning tools, which could produce convincingly realistic, unauthorized content within hours. Major labels and artists scrambled to address the copyright implications while platforms like TikTok and Instagram struggled to detect and moderate AI-generated voice clones. It was an eerie callback to previous instances in which technologies like deepfakes and facial recognition were deployed to the public before adequate protections could be put in place.
The EDGE method offers a systematic approach to embedding ethical principles directly in the design process, ensuring that social impact is a core design consideration rather than an afterthought. It responds to growing evidence that retroactive fixes to ethical problems are both more costly and less effective than proactive ethical design, as demonstrated by the ongoing struggle to protect artists' rights and authenticity in an era of rapidly advancing AI capabilities.
The EDGE method can be applied during initial system development, running parallel to stakeholder needs analysis so that ethical considerations enter the process early, or when redesigning existing systems to improve their moral and social outcomes.
The method isn't just theoretical; it has been tested in the real world with a German energy utility, Stadtwerke Leipzig, that was facing a moral dilemma. The company had introduced green energy tariffs, but customers were skeptical, suspecting greenwashing. To address this, the utility developed a customer loyalty app, NexoEnergy, to promote and explain the use of green energy tariffs.
By using the EDGE method, the researchers evaluated the initial app design and its redesign, allowing them to ground the app's design in ethical principles and systematically assess its social impact.
By incorporating ethical considerations from the outset of the design process in business and technological fields, the EDGE method helps:
- Reduce the risk of unintended negative consequences
- Build trust with users and stakeholders
- Create products and services that contribute positively to society
- Offer differentiation in the market through ethical leadership
We live in a moment of history when global adoption of a new technology can happen in weeks (possibly soon in days) rather than years, making approaches like EDGE no longer optional but imperative. By providing a systematic way to embed ethical considerations directly into the technological design process, EDGE offers a pathway to innovations that are not merely powerful but fundamentally aligned with human values and social good. When we stop to ask “should we” instead of “can we,” we prioritize thoughtful creation and human well-being over mere technological possibility. And that is a great thing, indeed.