Junk mail is a nuisance. Nothing is more annoying than an inbox full of emails that fraudulently claim to need your information to solve any number of nonexistent problems. Thankfully, many security software packages can flag and delete phishing emails with almost perfect accuracy. However, users remain reluctant to use these tools and rely instead on their own judgment to detect digital deception. As a result, phishing attacks have caused billions of dollars’ worth of damage in the past decade, leaving researchers scratching their heads. In “It’s not just about accuracy: An investigation of the human factors in users’ reliance on anti-phishing tools,” Zachary Steelman, Sebastian Schuetz and Rhonda Syler seek to identify the factors behind this hesitancy.
Information Under Attack
As technological innovation and instant-communication methods progress, so do the methods scammers and grifters use to steal your information. Since the early 2000s, the Federal Trade Commission has been battling thieves attempting to phish information from American citizens through email, text messages, and phone calls.
Phishing is a method of social engineering aimed at tricking victims into revealing sensitive information. Phishing scams typically involve a fraudulent message, seemingly from a trusted company or organization, that misleads users into visiting counterfeit websites mimicking the company’s authentic user interface. Users are then prompted to disclose vital information, such as credit card numbers or credentials. Phishing scams regularly attack companies, and the assailed organizations face steep damages.
Phishing attacks are generally spread through email and target mass audiences. Unfortunately, a phishing campaign needs only a small number of successful attempts to gain access to a wealth of sensitive information. Because each attack targets so many people, campaigns are often fruitful. On average, organizations lose $3.7 million per phishing attack.
Defense is readily available against these dangers. Many security software packages and email service providers have anti-phishing tools built into their services. These tools utilize machine learning techniques to identify suspicious patterns and cues from incoming messages. Once identified, the service will flag the message and typically display a warning message or label to indicate if the communication is phishing or legitimate.
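The flagging step described above can be illustrated with a toy example. Real anti-phishing tools learn suspicious patterns from large labeled datasets with machine learning; the hard-coded phrases and domain check below are purely illustrative assumptions, sketched to show the flag-then-warn flow, not any vendor’s actual method.

```python
# Hypothetical cues; production tools learn such patterns from data
# rather than hard-coding them.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "confirm your password",
]

def flag_phishing(subject: str, body: str, sender_domain: str,
                  claimed_domain: str) -> bool:
    """Flag a message when simple phishing cues appear.

    Cues checked: (1) alarmist phrases common in scam emails, and
    (2) a mismatch between the actual sending domain and the domain
    the message claims to represent.
    """
    text = f"{subject} {body}".lower()
    phrase_hit = any(p in text for p in SUSPICIOUS_PHRASES)
    domain_mismatch = sender_domain.lower() != claimed_domain.lower()
    return phrase_hit or domain_mismatch

# A message claiming to come from a bank but sent from elsewhere:
suspicious = flag_phishing(
    subject="Urgent action required",
    body="Please confirm your password at our secure portal.",
    sender_domain="mail.example-scam.net",
    claimed_domain="bank.example.com",
)
if suspicious:
    print("Warning: this message may be a phishing attempt.")
```

Once a message is flagged, the service attaches exactly this kind of warning label, and the open question the article explores is whether the user heeds it.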
Most people are familiar with junk folders and the type of emails sent to them. These are often phishing emails, which is why professionals advise against engaging with emails flagged as spam. Despite advanced methods of detection, scammers are crafty. Phishing emails are becoming increasingly sophisticated. Recently, phishing attempts have developed into concentrated and tailored “spear-phishing” emails that further deceive users into trusting the sender enough to disclose what should be well-kept secrets.
With the knowledge of how advanced these scams can be, you’d think users would readily depend on equally advanced security protocols to isolate phishing attempts in their inboxes. Surprisingly, users more frequently rely on their intuition to decide whether emails are legitimate and disregard security messages. Steelman, Schuetz and Syler have deduced that users’ reluctance largely stems from a fundamental misunderstanding of, and lack of knowledge about, how these tools work.
No one is safe—even presidential campaigns are susceptible to phishing attacks. Russian hackers breached Hillary Clinton’s 2016 campaign email database and gained access to thousands of confidential communications. The hackers were successful because campaign chairman John Podesta received a phishing email in his inbox and visited the included link, twice.
This attack was unfortunate, to say the least. Yet campaign leaders had emphasized the importance of cybersecurity to their employees. They utilized cybersecurity services and trained employees to detect which communications to avoid. But the hack still occurred, further proving how careful companies must be to guard themselves against a similar fate.
Currently, there are two schools of thought on how to avoid phishing attacks: to train employees to detect attacks or to install anti-phishing software.
Training feels intuitive when employees are convinced of their ability to detect scams. However, attempts to train employees have proved less than fruitful, primarily because people do not want to learn about phishing or security in general. People feel over-confident in their abilities to detect online scams. Therefore, when advice about how to improve security measures comes their way, people tend to be willfully ignorant. Training only garners short-term results and can never eliminate users’ vulnerability to phishing emails.
Anti-phishing tools are more accurate than even the most learned cybersecurity specialist, and they’re less expensive than training an entire staff. Nonetheless, when researchers increase the accuracy of these tools to 100% for experimental purposes, users still exhibit under-reliance.
The motivations behind people’s reluctance to take warning messages to heart are surprisingly human in nature. Users not only want their tools to be highly accurate, but they also want to know and understand how the tools work. Steelman, Schuetz and Syler conducted two studies to learn more about what people expect from their security tools.
The first study confirmed that higher tool accuracy does lead to increased user reliance. The researchers also discovered that the frequency of warning messages can influence how much a user trusts the tool. People become extremely skeptical of programs that frequently flag messages incorrectly. Conversely, if a program is very accurate, users will trust it more when they receive warnings regularly.
Trust largely factors into people’s reliance on anything. If a security tool frequently provides accurate warnings, then the person receiving them continuously gains evidence that the program can be trusted to steer them clear of phishing attempts. If the user collects multiple inaccurate warnings, the user distrusts the program and ignores the tool.
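The evidence-accumulation dynamic described above can be sketched as a toy model. The update rule, weights, and starting value below are illustrative assumptions, not the study’s actual formulation; the asymmetry (small gains for accurate warnings, a sharp penalty for false alarms) simply mirrors the finding that inaccurate warnings devastate confidence quickly.

```python
def update_trust(trust: float, warning_accurate: bool,
                 gain: float = 0.05, penalty: float = 0.20) -> float:
    """Nudge trust up slightly after an accurate warning and down
    sharply after a false alarm, clamped to the range [0, 1].
    (Toy model with assumed weights, not the paper's.)"""
    trust += gain if warning_accurate else -penalty
    return max(0.0, min(1.0, trust))

# Three accurate warnings build trust; one false alarm wipes out
# most of that gain.
trust = 0.5
for accurate in [True, True, True, False]:
    trust = update_trust(trust, accurate)
print(round(trust, 2))  # 0.45, below where the user started
```

Under these assumed weights, a single false alarm erases several accurate warnings’ worth of goodwill, which is one way to picture why frequently wrong tools lose users so fast.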
Throughout the first study, users never relied completely on security tools. Ninety-seven percent of users only partially relied on programs described as 100% accurate, which indicates that users have concerns beyond accuracy alone. After conducting their study, the researchers identified several crucial factors influencing trust.
They found people value transparency, accuracy, and frequency when determining whether they will adhere to a security tool’s warning. Users appreciate knowing how security programs work and what information of theirs is being used. In fact, testing showed that lacking transparency was a critical antecedent to users’ distrust of a program. People need to know more about the tools they are supposed to use, trust, and rely on.
Improving Workplace Cybersecurity
Trust should be paramount in the relationship between users and security tools. Developers should be cautious in displaying phishing warnings, because inaccurate warnings devastate the confidence a person has in their security program. Additionally, being transparent about how a tool works and actively sharing that knowledge with consumers fosters trust in cybersecurity measures.
More than a nuisance, phishing emails are potentially disastrous. Employees, managers, and even executive officers encounter dozens of phishing attempts every day. Businesses can now implement a higher level of cybersecurity using the results of this study, creating a less stressful workplace. Cybersecurity companies can also use these insights to improve their products and increase customer satisfaction.
Researchers investigating cybersecurity are discovering new ways to protect the digital population, despite the field being relatively new. As research progresses, managers can better train employees and emphasize the effectiveness of security tools. Business owners will breathe easier knowing their information is safe. Anyone can defend themselves online by educating themselves on what threats exist and how technology can help them identify and disregard fraud.