Legacy email security systems are failing as more enterprises migrate their email to the cloud and cyberattacks become more professional and difficult to detect. Companies in every industry, along with government agencies, simply are not moving fast enough to counter increasingly sophisticated threats like business email compromise (BEC) and account takeover attacks.
The first part of this two-part series discussed how employee training can help the situation and should be incorporated into an email security strategy as an added layer of protection. The reality, though, is that training alone simply isn't enough to make a dent in the problem. Criminals have perfected their craft and tapped into social engineering tactics that prey on people's inherent desire to trust. At the end of the day, relying on an employee in HR, finance, sales, or any operational function outside of cybersecurity to decide which emails in their inbox to trust is a setup for costly losses.
Something has to give. And it has. Smart communities—groups that use technology and data to be more efficient, solve challenges, and save money—are forming. These communities rely on real-time data, AI, and an understanding of human behavior to advance their cause, making their output far more reliable than what individuals can do on their own. Here at Agari, we’re building a smart community that models the good to prevent the bad based on our insight into two trillion emails sent around the globe each year.
Cybercriminals are no longer lone wolves holed up in basements. They have moved beyond older types of email attacks with malicious links and attachments that legacy secure email gateways were developed to stop. Their tactics have evolved, and we must evolve to stop them.
Criminals apply the same best practices that legitimate businesses employ to succeed. They buy lead-generation lists, carefully target their prospects, and time their phishing attempts for optimal moments, like when a targeted executive is on the road. They've also expanded into offering "BEC as a Service" at a low price point for would-be scammers who lack the deep pockets or skills to build their own criminal enterprise. Some cybercriminals also run long-con romance scams, developing fake, exploitative relationships via email. And they bring it all together by running multiple types of scams at once to maximize their payout.
Identity deception is the name of the game. Fraudsters pretend to be a person or a company that they're not. Not only are traditional SEGs incapable of detecting these types of threats, but the transition of email to the cloud has created new impersonation options for bad actors. Cloud-based email servers that aren't yet protected by DMARC are vulnerable to domain spoofing—fraudulent emails sent using the company's real domain. Lookalike domain abuse and account takeover attacks, which are even more difficult to stop, are also on the rise.
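For context, DMARC protection comes down to a DNS TXT record published at `_dmarc.<domain>` that tells receiving servers how to handle unauthenticated mail claiming to come from that domain. The sketch below parses a minimal example record (per RFC 7489); the domain and report address are placeholders, and the helper function is hypothetical illustration, not part of any particular product:

```python
def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")  # split on the first "=" only
            tags[key.strip()] = value.strip()
    return tags

# A hypothetical record as it would appear at _dmarc.example.com.
example = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(example)

# p=reject instructs receivers to refuse mail that fails DMARC alignment,
# closing the exact-domain spoofing gap described above.
print(policy["p"])  # -> reject
```

A `p=none` policy, by contrast, only requests monitoring reports and leaves spoofed mail deliverable, which is why "not yet protected by DMARC" usually means no record or a none policy.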
How are individual organizations supposed to protect themselves from this level of threat? By banding together to share and access information to create a smarter system of companies and agencies that want to protect their email, revenue, reputation, and relationships.
Sharing in smart communities goes beyond person-to-person warnings about specific scams, which are hard to scale in real time. And because human nature inclines us to trust one another, even with warnings and high-quality anti-phishing training, most people are not good at spotting fraud. That's why 30% of people click on phishing messages even after training, and why half of spam reports to SOCs are false positives.
A smart community understands that to truly protect people and organizations from malicious messages, those messages must never reach the inbox. Rather than put the burden of threat detection on the individual, smart communities learn to identify what’s authentic and make sure only those messages are seen. And when one email happens to get through these filters or activates after delivery, they must band together to take those messages out of the inbox—before more organizations can be harmed.
At Agari, we’re pioneering an approach that models authentic email communication so that identity deception attempts can be detected and eliminated. To do this, we’ve built the Agari Identity Graph, advanced threat-detection technology that no one else has. It combines machine learning, email telemetry at internet scale, and real-time data reporting to map authentic communication, with more than 300 million daily machine-learning model updates to keep the Agari customer community protected even as new threats emerge.
Messages move through three phases of the Agari Identity Graph—Identity Mapping, Behavioral Analytics, and Trust Modeling—to see who is sending the message, whether that behavior matches the norm, and how closely tied the recipient is to the sender. Based on the score determined through these three phases, the message is either accepted or denied. Those that are denied never reach the inbox, and the people to whom they were directed stay safe.
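To make the three-phase scoring idea concrete, here is a purely illustrative sketch of how per-phase signals might combine into an accept/deny decision. All names, weights, and thresholds are hypothetical; this is not Agari's actual model:

```python
from dataclasses import dataclass

@dataclass
class Message:
    identity_match: float         # phase 1: how well the sender maps to a known identity (0-1)
    behavior_typicality: float    # phase 2: how typical this behavior is for that identity (0-1)
    relationship_strength: float  # phase 3: how closely tied recipient and sender are (0-1)

def trust_score(msg: Message) -> float:
    # Combine the three phase outputs into one score.
    # Equal weights, chosen only for illustration.
    return (msg.identity_match
            + msg.behavior_typicality
            + msg.relationship_strength) / 3

def deliver(msg: Message, threshold: float = 0.5) -> bool:
    # Accept only messages that score above the trust threshold;
    # everything else never reaches the inbox.
    return trust_score(msg) > threshold

# An unknown sender exhibiting unusual behavior scores low and is denied.
suspicious = Message(identity_match=0.2, behavior_typicality=0.1,
                     relationship_strength=0.0)
print(deliver(suspicious))  # -> False
```

The key design point the sketch captures is that the burden of judgment sits in the scoring pipeline, not with the recipient: a message is delivered only when the model of authentic communication vouches for it.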
When only authenticated messages reach the inbox, everyone wins—except the cybercriminals. Data breaches, financial losses, brand damage, and stress can all decline dramatically. Employees can work through their inbox without worry. Customers feel confident opening messages from brands they trust. And SOC analysts are freed from investigating false-positive employee reports so that they can address real threats.
And as new threats emerge, rather than go into reactive mode, the Agari Identity Graph can spot them based on their markers of inauthenticity. Those signals then feed the data pipeline for updated real-time detection and response across the Agari customer community.
Cybercriminals are persistent and creative. Smart communities that share information for continuous learning and threat detection are the solution. We believe that by protecting email with smart communities, good data, and artificial intelligence, humanity can prevail over evil.
To learn more about how Agari is using machine learning, read our series on the Agari Identity Graph.