Why Social Engineering Still Works (And How to Fight Back)
Social engineering dominates cyber-attacks, yet traditional security training fails to stop it.
In recognition of Cybersecurity Awareness Month this October, we're sharing insights on one of the most persistent threats facing individuals and organizations today.
The most prolific and successful cyber attack is the social engineering scheme, often in the form of phishing. Security professionals sometimes take a lackadaisical approach, rolling out some form of basic information awareness training and calling it enough. (This approach tends to fit the stereotype of the cybersecurity professional who doesn’t want to deal with people, only technology.)
Unfortunately, the information awareness training that worked 20 years ago hasn’t changed much since. It does not address the changes a person can make in their own behavior; instead, it throws a pile of outdated information at them and says, “Good luck! Any future mistakes are now your fault.” This fear-based approach doesn't work, because fear is rarely an effective motivator in the long term.
Why These Attacks Succeed
Whether the attack is a phishing email, a vishing call, a smishing text or any of the other variants of social engineering, these attacks succeed because they are precise and pointed. Social engineers use who we are against us, and they only need one person to fall for the attack to succeed. The training currently employed relies on fear tactics, and the information we learn is not retained when we face an actual attack.
The Numbers Behind the Threat
Throughout my career, I’ve studied which behaviors influence the decision-making process and make it more likely that someone becomes a victim. I’ve honed my understanding of how predetermined personality traits, cultural factors and learned behaviors can make a person susceptible to social engineering.
The findings of my research are striking:
- 638 behavioral traits influence human behavior and decision making.
- 128 of these traits significantly increase vulnerability to social engineering.
- These traits fall into three categories: positive, negative and neutral.
Timing Is Everything
Social engineers know how to capitalize on human behaviors and manipulate us to get the result they want. Timing is often a key component of the attack. They may take advantage of someone who is distracted, absent-minded or overconfident to improve their chances of success. These attackers exploit vulnerable social behaviors, often manufacturing a sense of urgency or posing as an implied authority. This pressures the target to respond quickly without thinking further about the request.
For example, we frequently see an uptick in phishing emails on Friday afternoons and Monday mornings.
Let’s say it’s a busy Friday afternoon at the office, and you’re counting the minutes until the end of the day when you can clock out and go home to address a problem there. Your mind is distracted, and your decision making might be compromised. This is an ideal time for a social engineer to attack. More insidious engineers might couple the Friday-afternoon distraction with a fabricated sense of urgency, like the CFO emailing you with a frantic request to pay a late bill now!
Or let’s say it’s Monday morning, and you are still thinking about your active weekend, the meetings you have this week and whether your coffee is still hot. This is another perfect storm of distraction that social engineers take advantage of, and it’s why we also see an uptick in these attacks on Monday mornings.
Another common example is the grandparent scheme.
These attacks normally happen in the middle of the night, when the grandparent is asleep. A call comes in, and the social engineer pretends to be a grandchild in financial or legal trouble who needs help now! The urgency and fear of these calls push the grandparent to act without thinking.
The AI Threat
Artificial intelligence (AI) has supercharged social engineering attacks.
With the introduction of AI and the growing use of TikTok, we are seeing an uptick in voice cloning. The threat actor identifies a target, sends them a friend request on social media and begins identifying key stakeholders in the target’s life. The social engineer then pulls a snippet from the target’s TikTok videos and uses AI to clone the target’s voice.
The attacker can use this voice technology in a myriad of ways, generating phone calls, deepfake videos and more to reach the identified stakeholders.
Time for a New Approach
So, how do we defend ourselves from these types of attacks?
First and foremost, the cybersecurity industry needs to change how we approach the human being in the network. We need to develop a new form of information awareness training that actually changes behavior, drawing on psychological techniques such as cognitive behavioral therapy. The best solution is interdisciplinary: cybersecurity professionals working with colleagues in psychology and sociology to develop new training tools.
We also need to change the way we assess susceptibility to social engineering attacks. The current approach is an ineffective 20-question test whose answers are available all over the internet. We need an assessment method that asks behavioral questions in addition to security questions.
To protect yourself, I suggest these three essential practices:
- Create a safe word or sentence. This should be something that is not easily identified on social media. This adds a layer of validation that won’t be easy for a social engineer to fake.
- Verify before you trust. If you receive an email or phone call you were not expecting, treat it as an attack until you verify its legitimacy. Don’t answer the phone or reply to the email. Reach out to trusted sources if you are dubious.
- Never click without confirming. Always verify the validity of an email before you click on any links. If you have any doubts, reach out to the sender by calling, stopping by their office or sending a new email to an address you already know is theirs. One simple way to spot a mismatched link is sketched after this list.
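For readers who want a concrete picture of what “verifying a link” can mean, here is a minimal sketch in Python. It is an illustration, not a vetted tool: it scans an HTML email body and flags any link whose visible text advertises one domain while the underlying href actually points somewhere else, a common phishing trick. The sample snippet and domain names are hypothetical.

```python
# Minimal sketch: flag links in an HTML email whose visible text shows one
# domain but whose href points somewhere else. Domains below are hypothetical.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.current_href = None   # href of the <a> tag we are currently inside
        self.current_text = ""     # visible text collected for that link
        self.suspicious = []       # (displayed text, actual destination) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href", "")
            self.current_text = ""

    def handle_data(self, data):
        if self.current_href is not None:
            self.current_text += data

    def handle_endtag(self, tag):
        if tag == "a" and self.current_href is not None:
            href_domain = urlparse(self.current_href).netloc.lower()
            text = self.current_text.strip().lower()
            # If the visible text looks like a domain and doesn't match the
            # real destination, treat the link as suspicious.
            if "." in text and href_domain and href_domain not in text:
                self.suspicious.append((text, self.current_href))
            self.current_href = None

# Hypothetical email snippet: the text claims one site, the link goes to another.
html = '<p>Pay now at <a href="http://payments.example-attacker.net">bank.example.com</a></p>'
checker = LinkChecker()
checker.feed(html)
for shown, actual in checker.suspicious:
    print(f"Displayed '{shown}' but actually points to {actual}")
```

In practice, this is the same check you perform manually by hovering over a link before clicking: does the address shown match the address you would actually be taken to?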
The Bottom Line
AI-enhanced social engineering attacks are becoming more sophisticated and more successful. We must be more vigilant than ever, but vigilance alone isn't enough. Cybersecurity professionals need to tailor training to human nature and employ psychological approaches that actually work.
Written by Dr. Henry Collier
Dr. Henry Collier is the dean of the School of Science and Technology. He is a professor, scholar, educator, and author with over 15 years of experience in higher education. He is an internationally recognized expert in the human firewall, artificial intelligence, and networking, having published numerous peer-reviewed articles on these topics. Dr. Collier is currently leading a global research initiative exploring the cultural factors that influence susceptibility to cybercrime.