Slowing Down to Speed Up: The Power of Ethical Tech Development

Image: a computer-generated figure holding the scales of justice
April 15, 2025  |  By Meghan Perry, Amber Young, and Tamara Roth


ChatGPT became the fastest-growing consumer app in history when it surpassed 100 million users in January 2023. It’s a mind-boggling number, considering it was released to the masses a mere two months prior, in November 2022. Threads then snagged that title in July 2023, reaching 100 million users in just five days. The sheer speed at which technologies like artificial intelligence (AI) and social media are developing highlights a crucial question: Are we creating technologies that not only advance humanity’s capabilities but also serve a higher good?

Companies all around the globe are constantly developing new technology. It’s so easy to get caught up in the excitement of "What can we do?" that many forget to pause and ask, "What should we do?" This happens for many reasons, including the rush to beat competitors to market and a status quo in which stakeholders leave societal impact out of their value frameworks. But failing to evaluate and consider technology's effects on society can inadvertently worsen societal ills or even create space for new ones to arise. And people will lose trust in technology if they feel it isn't serving their best interests.

This is the critical challenge that Walton College Associate Professor Amber Young, Assistant Professor Tamara Roth, and colleagues Yaping Zhu (Middle Tennessee State University), Alan R. Hevner (University of South Florida), and Syed Shuva (University of North Carolina) tackle in their latest study, "Ethical design through grounding and evaluation: The EDGE method for designing information systems for social impact." Their thought-provoking research helps developers think about the ethical side of things early on, ensuring that new technologies are used to build a better, fairer, and more sustainable world for everyone.

Speed vs. Ethics

Traditional technology system design often prioritizes operational capabilities and market speed over ethical considerations, a pattern that has intensified with the push for quicker AI development. This story has played out across many a headline: a new product hits the market, ethical and social challenges arise, and businesses and PR teams are left scrambling to address them. Clearly, a more deliberate approach would serve everyone better. This is where the EDGE method comes into play: it offers a way to slow down and thoroughly plan the design and implementation process. Rather than relying on luck, teams can proactively identify issues early on and develop technology that enhances lives rather than causing legal and logistical headaches.

The Tokyo Drift pace at which companies are currently innovating can lead to unintended consequences that impact millions of users before potential harms are recognized. This was dramatically illustrated in 2023 with the surge of AI-generated songs using unauthorized voice clones of artists like Drake, The Weeknd, and Taylor Swift. The viral "Heart on My Sleeve" track, which used AI to mimic Drake and The Weeknd's voices without permission, accumulated millions of streams before being removed from platforms. The episode highlighted how AI voice synthesis technology rapidly outpaced both legal and ethical frameworks around artists' right to control their vocal likeness.

The music industry was unprepared for the rapid evolution and ease of use of AI voice cloning tools, which quickly created convincingly realistic and unauthorized content within hours. Major labels and artists scrambled to address copyright implications while platforms like TikTok and Instagram struggled with detecting and moderating AI-generated voice clones. It was an eerie callback to previous instances where technologies like deepfakes and facial recognition were deployed to the public before adequate protections could be put in place.

The EDGE Methodology 

The EDGE method offers a systematic approach to embedding ethical principles directly in the design process, ensuring that social impact is not an afterthought but a core design consideration. This responds to growing evidence that retroactive fixes to ethical problems are both more costly and less effective than proactive ethical design, as demonstrated by the ongoing challenges of protecting artists' rights and authenticity in an era of rapidly advancing AI capabilities.

The EDGE method can be applied during initial system development, where early ethical considerations run parallel to stakeholder needs analysis, or when redesigning existing systems to improve their moral and social outcomes.

Five steps make up the EDGE method:

1. Selecting an ethical kernel theory

Think of an ethical kernel theory as the moral compass that guides how a system is designed and evaluated. This research focuses on one specific approach centered on freedom and empowerment—basically ensuring that technology empowers people to act independently, think clearly, feel included, and express themselves freely rather than controlling or limiting them. This is the first step to laying a project’s moral foundation. A company needs to choose a robust ethical framework aligning with its goals and values. This isn't just about picking any philosophy—it's about finding one that makes sense for the specific context of the project and can handle the complex ethical challenges developers might face. It's like choosing the right blueprint before building a house.

2. Applying kernel theory to develop overarching, ethical design principles

Once an ethical foundation has been laid, it's time to turn those abstract ideas into practical guidelines. These principles bridge the gap between theory and practice. For example, if a kernel theory emphasizes fairness, a business might develop principles like "ensure equal access" or "prevent discriminatory outcomes." These principles become a North Star, guiding all design decisions that follow.

3. Moving from overarching design principles to context-specific design rules

Here's where things get concrete by taking those high-level principles and turning them into specific, actionable rules for the project. Think of it as creating a detailed instruction manual. If one of the principles is about user privacy, there should be rules that specify exactly how data should be collected, stored, and protected. These rules need to be practical enough that developers and designers can implement them directly and reflect the user experiences the organization considers most valuable.

4. Developing condition statements for each design rule

This step focuses on defining success by creating clear statements describing each design rule's expected social impact. It establishes measurable goals for ethical design choices, providing a basis for answering the question: "How will we know if our ethical design is actually working?" Imagine a social media app is being created. A general rule might be, "The app should build a community." To make this more concrete, it is turned into the condition statement, "Users should be able to easily create and join groups based on what they like, and chat with others with similar interests."
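To see what this pairing of rules and condition statements might look like in practice, here is a minimal sketch in Python. It is purely illustrative and not from the EDGE study itself; all rule names and statements (including `protect_privacy`) are hypothetical examples of the structure, not the researchers' actual artifacts.

```python
# Toy representation of step 4: each design rule is paired with a
# condition statement describing its expected, checkable social impact.
# All names and wording here are hypothetical illustrations.
design_rules = {
    "build_community": {
        "rule": "The app should build a community.",
        "condition": (
            "Users should be able to easily create and join groups "
            "based on what they like, and chat with others with "
            "similar interests."
        ),
    },
    "protect_privacy": {  # hypothetical second rule for illustration
        "rule": "Collect only the data a feature needs.",
        "condition": (
            "Users should be able to see and delete every piece of "
            "data the app has stored about them."
        ),
    },
}

# Each condition statement gives evaluators something concrete to check.
for name, entry in design_rules.items():
    print(f"{name}: {entry['condition']}")
```

Writing the statements down in a structured form like this is what makes the later evaluation step mechanical rather than ad hoc.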

5. Performing reconstruction to evaluate social impacts

The final step is where theory meets reality. It involves systematically analyzing condition statements by substituting key terms (like replacing "user" with "machine") to reveal whether the system truly treats humans as distinct from machines and respects their rights to freedom. If the reconstructed statement sounds "off," that's a good sign: the design depends on users being human, suggesting it is ethically grounded. If the statement still reads naturally with "machine" in place of "user," the design may be treating people like machines, signaling that a redesign is needed.
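The substitution itself is mechanical enough to sketch in a few lines of Python. This is a toy illustration of the idea, not code from the study: the function name and the simple singular/plural handling are my own assumptions, and judging whether a reconstructed statement "sounds off" remains a human task.

```python
def reconstruct(statement: str,
                original: str = "user",
                replacement: str = "machine") -> str:
    """Substitute a key term to produce the reconstructed statement.

    A toy sketch of the EDGE reconstruction step: handles simple
    lowercase/capitalized and plural forms only.
    """
    return (statement
            .replace(original + "s", replacement + "s")
            .replace(original.capitalize() + "s",
                     replacement.capitalize() + "s")
            .replace(original, replacement)
            .replace(original.capitalize(), replacement.capitalize()))

statement = ("Users should be able to easily create and join groups "
             "based on what they like.")
print(reconstruct(statement))
# A human reviewer then asks: does the result still read naturally
# with "machines"? If so, the design may not treat humans as
# distinct from machines, and a redesign may be warranted.
```

The point of automating only the substitution is that it leaves the genuinely ethical judgment, deciding whether the rewritten sentence is absurd or plausible, where it belongs: with people.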

This systematic process helps identify gaps between intended and actual ethical outcomes, enabling organizations to iteratively refine their designs to better align with their chosen ethical framework.

EDGE in Real Life

The method isn't just theoretical: it has been tested in the real world with Stadtwerke Leipzig, a German energy utility facing a moral dilemma. The company had introduced green energy tariffs, but customers were skeptical and suspected greenwashing. To address this, the utility developed a customer loyalty app, NexoEnergy, to promote and explain its green energy tariffs.

By using the EDGE method, the researchers evaluated the initial app design and its redesign, allowing them to ground the app's design in ethical principles and systematically assess its social impact.

Looking to the Future

By incorporating ethical considerations from the outset of the design process in business and technological fields, the EDGE method helps:

  • Reduce the risk of unintended negative consequences
  • Build trust with users and stakeholders
  • Create products and services that contribute positively to society
  • Offer differentiation in the market through ethical leadership

We live at a point in history when global adoption of a new technology can happen in weeks (perhaps soon in days) rather than years, making approaches like EDGE no longer optional but imperative. By providing a systematic method to embed ethical considerations directly into the technological design process, EDGE offers a pathway to creating innovations that are not merely powerful but fundamentally aligned with human values and social good. When we stop to ask "should we" instead of "can we," we prioritize thoughtful creation and human well-being over mere technological possibility. And that is a great thing, indeed.

Meghan Perry is an experienced freelance writer and editor. In the daytime, she works as a PR and content writer specializing in B2B, government tech, and higher education. Her heart truly belongs to creative writing, where she finds joy in spinning tales and polishing editorial gems.

With a TBR pile that could rival a small mountain, there’s always a book tucked away in her tote bag. Her LinkedIn DMs are open for project requests, book recommendations, and Harry Potter trivia.

Amber Young is Director of the ISYS PhD Program and Assistant Professor of Information Systems at the Sam M. Walton College of Business, University of Arkansas. Her research focuses on how organizations can design and implement technologies for social good and human empowerment. She is concerned with the effects of technology on individuals’ humanity and dignity. Dr. Young’s work has been published in MIS Quarterly, Journal of Management Information Systems, Information Systems Journal, Information & Organization, Communications of the AIS, The London School of Economics Business Review, and various IS conferences. She is on the editorial board of Information & Organization.

Dr. Tamara Roth is an assistant professor in the Walton College's Department of Information Systems. Her research focuses on the adoption and integration of emerging technologies, such as blockchain and digital identity systems, in structured organizational environments like government agencies and utilities. She explores how these technologies can drive innovation while addressing cultural and structural barriers to their implementation.

Before joining the University of Arkansas, Dr. Roth was a postdoctoral researcher at the Interdisciplinary Centre for Security, Reliability, and Trust (SnT) at the University of Luxembourg. During her time there, she completed her second PhD in Information Systems, building on her expertise in technology adoption and organizational change. She also holds a PhD in Educational Psychology from the University of Bayreuth, where she examined the psychological and educational dynamics of learning and development.

Dr. Roth’s interdisciplinary background enables her to bridge technology, organizational behavior, and human-centric innovation, contributing to both academic research and practical solutions in the field of Information Systems.