In my previous blog post, I discussed the two phases of email filtering in the open quarantine process that can help prevent social engineering-based email attacks. The first phase deals with identifying high-risk situations. However, given the complex nature of cybersecurity, a high-risk classification does not mean an attack is certain, so we need a way to deal with this uncertainty. Well-designed user communication can address it, and that is what I’d like to explore this week.
As soon as an email message is identified as coming from an undetermined source, its primary risk(s) are identified and one or more neutralization actions are taken. To ensure the user (i.e., the email recipient) is aware of the situation with regard to the questionable message, we need to create an automated response that is obvious and easily understood. With this in mind, a different action – with a different response message – will be taken if the principal risk is spoofing vs. an account take-over. Here are some possible neutralization actions, based on the identified risk:
High Risk of Spoofing. A message that is identified in the first phase as being at a higher-than-normal risk of being spoofed can be modified by rewriting the display name associated with the email to include a subtle warning—e.g., replacing “Pat Peterson” with “Claims to be Pat Peterson”—and by including a warning in the message body. An example warning may state:
This email has been identified as potentially being forged and is currently being scrutinized in further detail. This will take no more than 30 minutes. If you need to respond to the message before the scrutiny has completed, please proceed with caution.
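The display-name rewrite described above can be sketched with Python’s standard email utilities. The “Claims to be” phrasing mirrors the example in the text, and the function name is my own; a production system would make the wording configurable:

```python
from email.utils import formataddr, parseaddr

def mark_suspected_spoof(from_header: str) -> str:
    """Prefix the display name of a suspected spoof with a subtle warning.

    The "Claims to be" phrasing follows the example in the text;
    it is illustrative, not a specific product's behavior.
    """
    name, addr = parseaddr(from_header)
    label = name or addr  # fall back to the bare address if there is no display name
    return formataddr((f"Claims to be {label}", addr))

print(mark_suspected_spoof("Pat Peterson <pat@example.com>"))
```

Because the address itself is left untouched, the rewrite is trivially reversible once the in-depth analysis clears the message.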
In addition, any potential reply-to address can be rewritten by the system, e.g., replaced with a string that is not an email address but that acts as a warning. Consider if the following text were set as the reply-to address:
You cannot respond to this email until the scrutiny has completed. If you know that this email is legitimate, please ask the sender to confirm its legitimacy by responding to the automatically generated validation message he/she has received.
You will then be able to reply.
To see what the recipient would experience if he/she were to reply to the message, try sending an email to yourself with the text above as a reply-to address, and then try to respond to the message.
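Mechanically, this rewrite is just a header replacement. Here is a minimal sketch using the standard library’s email package (the warning text is abbreviated from the example above, and the function name is my own):

```python
from email.message import Message

HOLD_NOTICE = (
    "You cannot respond to this email until the scrutiny has completed. "
    "If you know that this email is legitimate, please ask the sender to "
    "confirm its legitimacy by responding to the validation message."
)

def hold_replies(msg: Message) -> Message:
    """Replace any Reply-To header with a warning string.

    The string is deliberately not a valid address, so a reply
    attempt fails visibly instead of reaching the attacker.
    """
    del msg["Reply-To"]  # Message.__delitem__ is a no-op if the header is absent
    msg["Reply-To"] = HOLD_NOTICE
    return msg

m = Message()
m["From"] = "Pat Peterson <pat@example.com>"
m["Reply-To"] = "attacker@evil.example"  # hypothetical attacker-controlled reply path
hold_replies(m)
print(m["Reply-To"])
```

Note the use of the legacy `Message` class rather than `EmailMessage`: it stores headers as raw strings, which is exactly what we want when the “address” is intentionally not an address.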
High Risk of Impersonation. Assume you were to receive an email from “Bank of America <firstname.lastname@example.org>”, or a similar type of email that appears to come from one of your trusted contacts but does not. This is a display name attack, and while it may be surprising, these types of attacks are very successful. Emails that appear to be display name attacks can be modified by removing or rewriting the display name. For example, the display name above, “Bank of America”, could be removed or changed to “Warning! Unknown contact.” In addition, warnings can be added to the body of the email. These warnings would be different from those for a high-risk spoof message. An example warning is:
This sender has a similar name to somebody you have interacted with in the past, but may not be the same person.
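One simple way to implement the “similar name, different address” check behind that warning is fuzzy matching of the display name against the recipient’s known contacts. This sketch uses `difflib` from the standard library; the contact model (a name-to-address map) and the 0.8 similarity threshold are illustrative assumptions, not a description of any particular product:

```python
from difflib import SequenceMatcher
from email.utils import parseaddr

def looks_like_impersonation(from_header, trusted_contacts, threshold=0.8):
    """Flag senders whose display name resembles a trusted contact's
    but whose address does not match that contact's known address.

    trusted_contacts: {display name: known address} -- an illustrative
    model of the recipient's prior interactions.
    """
    name, addr = parseaddr(from_header)
    for trusted_name, trusted_addr in trusted_contacts.items():
        similar = SequenceMatcher(None, name.lower(),
                                  trusted_name.lower()).ratio() >= threshold
        if similar and addr.lower() != trusted_addr.lower():
            return True
    return False

contacts = {"Bank of America": "alerts@bankofamerica.com"}
print(looks_like_impersonation(
    "Bank of America <firstname.lastname@example.org>", contacts))
```

A match on the name combined with a mismatch on the address is precisely the signature of a display name attack; a matching address passes through untouched.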
High Risk of Account Take-Over. Account Take-Overs (ATOs) are often used by attackers to send requests, instructions and attachments to parties who have a trusted relationship with the user whose account was compromised. Accordingly, when an email suspected of being the result of an ATO contains any element of this type of attack, the email recipient needs to be protected. One traditional way to do this is to rewrite any URL to point to a proxy; this allows the system to alert the user of risk and to block access without having to rewrite the message. Attachments can be secured in a similar way—namely, by replacing the attachment with one that points to a proxy website that, when loaded, presents the recipient with a warning and then the attachment. Text that is considered high-risk can be partially redacted or augmented with warnings, such as instructions to verify the validity of the message in person, by phone or by SMS before acting on it.
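The URL-to-proxy rewrite can be sketched in a few lines. The proxy hostname below is a placeholder I made up, and the regex handles only plain-text bodies; real messages (HTML, multipart MIME) would need a proper parser rather than a regular expression:

```python
import re
from urllib.parse import quote

PROXY = "https://mailproxy.example.com/check?url="  # hypothetical proxy endpoint
URL_RE = re.compile(r"https?://[^\s<>\"']+")

def proxy_urls(body: str) -> str:
    """Route every URL in a plain-text body through the checking proxy.

    The proxy can then warn the user or block access at click time,
    without the original message having to be rewritten again.
    """
    return URL_RE.sub(lambda m: PROXY + quote(m.group(0), safe=""), body)

print(proxy_urls("Invoice attached: http://payments.evil.example/inv.pdf"))
```

Because the original URL travels along as a percent-encoded parameter, the proxy can make its allow/block decision later, after the in-depth analysis has finished.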
In addition, emails with an undetermined security posture can be augmented by controlling access to associated material – whether websites, attachments, or aspects of attachments (such as a macro in an Excel file). All emails with an undetermined security posture can also be visually modified, e.g., by changing the background color of the text. The modified message is delivered to the recipient and the system starts an in-depth analysis of the email. This analysis may take quite a long time – maybe half an hour in some cases. As soon as in-depth classification of an email results in a determination – whether it identifies the email as good or bad – any modifications can be undone and limitations lifted by replacing the modified message with an unmodified version in the recipient’s inbox.
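The visual cue for a pending verdict can be as simple as wrapping the HTML body in a tinted container; once the in-depth analysis completes, the original body simply replaces the wrapped one. The color and markup here are illustrative choices, not a prescribed design:

```python
PENDING_STYLE = "background-color:#fff8e1;padding:8px;"  # pale amber tint, illustrative

def tint_pending(html_body: str) -> str:
    """Wrap an HTML body so the recipient can see that analysis is still running.

    Restoring the message later is just delivering the unmodified
    original in place of this wrapped version.
    """
    return f'<div style="{PENDING_STYLE}">{html_body}</div>'

print(tint_pending("<p>Quarterly numbers attached.</p>"))
```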
If you’d like to learn more about what’s been discussed in this blog series, don’t miss my upcoming webinar: Email Identity Deception, Defined and Quantified.