In my previous blog post, I provided examples of the growing sophistication – and subsequent success – of several high-visibility email attacks. This week, I’d like to look at the different types of emails that are enabling these attacks.
Deceptive emails are used by cyberattackers to carry out three different types of attacks:
- To coerce the recipient to follow a hyperlink to a website masquerading as a trusted site, where the recipient’s login credentials are requested (i.e., phishing);
- To compel the recipient to install malware – whether by opening a malicious attachment or visiting a malicious website;
- To convince the recipient to surrender sensitive information or willingly transmit money to the attacker.
To succeed in their deception, attackers masquerade as parties their intended victims trust; craft messages laden with social engineering; and, occasionally, include hyperlinks or attachments that pose dangers to users.
In contrast to traditional phishing attacks and typical spam, deceptive emails cannot be detected by leveraging large volumes of identical or near-identical unwanted messages, disreputable senders, or keywords indicative of abuse. This is because these attacks are typically targeted. They use customized messages; senders and hyperlinks without bad reputations; and, to the extent that they contain malware attachments, individually repacked malware instances that avoid triggering signature-based anti-virus filters.
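To make this concrete, here is a minimal sketch of a traditional spam check. All names, keywords, and thresholds are hypothetical illustrations, not any real filter's logic; the point is that a targeted message with a unique body, a clean sender, and no abusive keywords trips none of the classic signals.

```python
# Hypothetical traditional spam signals: keywords, sender reputation,
# and bulk-volume detection of near-identical bodies.
SPAM_KEYWORDS = {"viagra", "lottery", "winner"}      # classic spam markers
BLOCKLISTED_SENDERS = {"spam@bulkmail.example"}      # known-bad senders
SEEN_MESSAGE_COUNTS = {}                             # how often each body was seen

def naive_spam_check(sender, body):
    """Return True if the message trips any traditional spam signal."""
    if sender in BLOCKLISTED_SENDERS:
        return True
    if any(word in body.lower() for word in SPAM_KEYWORDS):
        return True
    # Bulk detection: flag bodies that have been seen many times before.
    key = hash(body)
    SEEN_MESSAGE_COUNTS[key] = SEEN_MESSAGE_COUNTS.get(key, 0) + 1
    return SEEN_MESSAGE_COUNTS[key] > 100

# A hand-crafted spear-phishing message from a fresh, unknown domain:
targeted = ("Hi Pat, per our call, please wire the vendor payment today. "
            "Details attached. - Chris (CFO)")
print(naive_spam_check("chris@acme-payments.example", targeted))  # False
```

The message sails through: it is unique (no volume signal), the sender has no history (no reputation signal), and it contains no spam keywords, even though it is plainly a business email compromise attempt.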
The analysis of messages with the goal of identifying targeted attacks is, accordingly, time consuming. Diligent scrutiny can easily take minutes of computational effort for difficult emails, and that time is expected to increase as more rules are added to address the mushrooming of new attacks and the increased sophistication likely to be seen moving forward. Particularly subtle forms of deceit may require human-assisted review to detect, further adding to the worst-case delivery delays. Without meticulous screening, of course, we expect false positives or false negatives to increase, or potentially both.
The delays caused by filtering, and the associated fears of lost messages, may very well become the greatest liability when it comes to deploying strong security against targeted attacks. This is due to the resistance among decision makers to accept security methods that have the potential of introducing noticeable delivery delays or, worse still, causing false positives. Given the relative rarity of targeted attacks and a widespread hubris among end users when it comes to their ability to identify threats, this reluctance is understandable.
Check back next week, when I discuss the intrinsic trade-offs between false positives, false negatives, and delivery delays, as well as the new open quarantine filtering paradigm. If you subscribe to the blog (in the top right-hand corner of this web page), you’ll be notified when my next post is published.