
In a recent article, Gizmodo reported on a security “test” it had performed, in which it sent phishing emails to several high-profile targets within the Trump organization and received indications that roughly half of the recipients were deceived. The testers used identity deception to take advantage of trusted relationships, hiding their true identity behind display name fraud. This is a common method among criminals who want to deceive email recipients, and it was used extensively by Russian hackers as they trained their sights on political targets in 2016.
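To make the mechanism concrete, here is a minimal sketch, using the Python standard library and invented names and addresses, of why display name fraud works: most mail clients prominently show only the display name, which the sender controls completely, while the actual mailbox is tucked away or hidden.

```python
from email.utils import parseaddr

# Hypothetical fraudulent From header: the display name impersonates a
# trusted colleague, while the mailbox belongs to the attacker.
raw_from = '"Jane Doe (CFO)" <jane.doe.office.mail@freemail.example>'

display_name, address = parseaddr(raw_from)
print(display_name)  # 'Jane Doe (CFO)' -- what most mail clients show
print(address)       # 'jane.doe.office.mail@freemail.example' -- what they hide
```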

The fact that half of the targets fell for the ruse isn’t shocking, and it doesn’t show that the Trump administration is negligent or clueless. The administration is simply made up of people, and this is what people do. For those who think we should hold the victims accountable for their actions, I have one piece of advice: give it a rest. That might have been reasonable five years ago, before email attacks reached their current level of sophistication. If anything, the result shows that these government officials were lucky they were not targeted by the same cyber criminals who made John Podesta a household name, or who attacked NGOs the day after the presidential election.

In the end, email security is not about teaching users to do the right thing, because humans will always be the weakest link: with a cleverly designed email attack, a majority of the targeted recipients will become victims. The same result also shows that many traditional security technologies have reached end-of-life, whether spam engines (which look for offending keywords such as “viagra”) or blacklist-based phishing detectors (which look for known bad URLs, something clever criminals avoid using, of course). These solutions can’t stop sophisticated email attacks, and cyber criminals know it.
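As a rough illustration of that limitation, consider this hypothetical sketch (the keyword list, blacklist, and message are all invented): a carefully worded message carrying a freshly registered URL triggers neither legacy check.

```python
# Toy versions of the two legacy checks described above.
SPAM_KEYWORDS = {"viagra", "lottery", "winner"}
URL_BLACKLIST = {"http://known-bad.example/login"}

def legacy_filter(body: str, urls: list[str]) -> bool:
    """Return True if the message is flagged by keyword or blacklist checks."""
    has_keyword = any(word in body.lower() for word in SPAM_KEYWORDS)
    has_bad_url = any(url in URL_BLACKLIST for url in urls)
    return has_keyword or has_bad_url

# A sophisticated attack: businesslike language and a never-before-seen URL.
body = "Hi, please review the wire instructions at the link below before 2pm."
urls = ["http://fresh-domain-nobody-has-seen.example/doc"]
print(legacy_filter(body, urls))  # False -- it sails straight through
```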

Instead, we need to usher in a new era of security technologies that act as automated “guardian angels” for every recipient in a protected organization, and that identify risk not by detecting “known bad” (whether senders, URLs or attachments) but by determining whether an email would be deceptive to the recipient. Gizmodo provides a good example of such an email: it comes from a stranger the recipient has no reason to trust, but carries a display name that suggests the identity of a trusted party. This discrepancy is the most dangerous of all, because it corresponds directly to the risk of being deceived. New security countermeasures that deploy artificial perception methods, which identify how the recipient is likely to perceive an email, can make a difference.
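As one illustration of what such a deception-oriented check might look like, here is a hedged sketch, not any vendor’s actual implementation: it flags a message whose display name closely resembles a trusted contact’s name while the sending address is not one previously associated with that contact. The contact history, names, and similarity threshold are invented for the example.

```python
from difflib import SequenceMatcher
from email.utils import parseaddr

# Hypothetical contact history for one recipient: display names mapped to
# the addresses that have legitimately used them in past correspondence.
TRUSTED_CONTACTS = {
    "jane doe": {"jane.doe@example.com"},
    "acme payroll": {"payroll@acme-corp.example"},
}

def name_similarity(a: str, b: str) -> float:
    """Rough string similarity between two display names (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def is_deceptive(raw_from: str, threshold: float = 0.85) -> bool:
    """Flag mail whose display name mimics a trusted contact while the
    sending address has never been seen for that contact."""
    display_name, address = parseaddr(raw_from)
    for trusted_name, known_addresses in TRUSTED_CONTACTS.items():
        if name_similarity(display_name, trusted_name) >= threshold:
            # The name looks like someone the recipient trusts, so the mail
            # is suspicious unless the address matches that relationship.
            return address.lower() not in known_addresses
    return False  # No impersonation of a trusted contact detected.

# A spoofed message: familiar name, unfamiliar mailbox.
print(is_deceptive('"Jane Doe" <jane.doe.office.mail@freemail.example>'))  # True
# A legitimate message from the known address.
print(is_deceptive('"Jane Doe" <jane.doe@example.com>'))                   # False
```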