With a large percentage of the global workforce based remotely for the foreseeable future, more business than ever is being conducted over email. And while this modern convenience has been critical to the continued operation of many businesses in the current health crisis, it has also presented those businesses with new data security challenges.

The unfamiliar environment of remote work — not to mention its potential distractions, like children and pets — leaves employees more vulnerable to misdirected emails and other mistakes that can lead to accidental data breaches. Scams aimed at both individuals and organizations have also risen as criminals attempt to capitalize on the situation; even healthcare facilities on the front lines of the pandemic have not been immune.

Accidental breaches are notoriously difficult to combat because they can be caused by something as simple as a typo in an email address. Human error is a part of life — something businesses must seemingly accept as inevitable. But the advent of today’s machine learning technology has changed that. When it comes to accidental data breaches and even social engineering scams, human error is no longer an acceptable or understandable cause — it is a preventable one.

Email Remains A Primary Vector For Both Accidental And Malicious Breaches

Those who don’t work in IT are often surprised to learn that email remains one of the most popular attack vectors for cybercriminals. Movies like Swordfish or Hackers tend to feature hoodie-wearing hackers furiously typing to stay ahead of any network defenses, but a real-life criminal is far more likely to simply capitalize on a user’s mistake. Misdirected emails, mistaken attachments and other email mistakes are extremely common. In fact, employees mistyping the name of a contact and sending information to the wrong person was one of last year’s leading breach drivers.

As if these mistakes weren’t common enough on their own, cybercriminals often attempt to induce them artificially. Verizon’s 2020 Data Breach Investigations Report (DBIR) revealed that social engineering attacks are among the most common tactics used by cybercriminals. The report found that human-focused phishing scams remain a top social threat to businesses, involved in 32% of confirmed breaches, while “pretexting,” in which criminals misrepresent themselves or their intentions (often via email), is another leading concern.

These incidents are built on the simple premise that people make mistakes. If a scammer sends thousands of phishing emails, even a success rate of less than 1% can prove lucrative, whether directly (through financial channels) or indirectly (through theft of personal or confidential information).

Given that IBM’s Cost of a Data Breach Report indicates that the average incident costs more than $8 million in the US, putting a stop to these breaches — whether accidental or malicious in nature — is of paramount importance for businesses, many of which may not be able to absorb the cost of such an attack. But how does a business put a stop to everyday mistakes like misdirected emails or ensure that employees remain alert for spoofed email addresses? To many, it seems that a certain level of vulnerability to this sort of breach is unavoidable.

Today’s Technology Puts New Tools In The Hands Of Defenders

Unfortunately for would-be email scammers, these breaches are not, in fact, unavoidable. The advent of machine learning technology has not only improved the efficacy of many cybersecurity tools; it has effectively enabled the creation of an entirely new layer of cybersecurity. No longer is security technology limited to perimeter and in-network defenses, such as static rule-based data loss prevention (DLP). Today’s data security teams can implement human layer security capable of protecting its users from outside threats — and from themselves.

At its core, contextual machine learning is about understanding normal behavior within an organization. This might include who employees correspond with via email, what information those emails usually contain and what types of attachments are common. By creating a baseline for each individual, the technology can identify and flag abnormal behavior. This could be as simple as noticing a typo in the “to” field or as complex as identifying a potential breach of client confidentiality when files are sent from a financial services company. These everyday errors, once nearly impossible to stop, can now be identified and flagged to the sender, allowing them to be corrected before the email ever leaves the outbox and putting an end to many of the most common forms of accidental breach.
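
To make the "baseline and flag" idea concrete, here is a minimal, hypothetical sketch in Python. It is not any vendor's implementation: the RecipientBaseline class, the addresses and the similarity threshold are invented for illustration, and a production system would model far more signals (attachment types, message content, sending patterns) than recipient addresses alone.

```python
from collections import Counter
from difflib import SequenceMatcher

# Hypothetical illustration only: a per-sender baseline of known correspondents,
# built from historical email metadata, used to flag likely misdirected recipients.
class RecipientBaseline:
    def __init__(self):
        self.known = Counter()  # address -> how often this sender has emailed it

    def learn(self, history):
        """Build the baseline from a list of previously used recipient addresses."""
        self.known.update(addr.lower() for addr in history)

    def flag(self, recipient, similarity_threshold=0.85):
        """Return a warning if the recipient is unfamiliar or looks like a typo."""
        recipient = recipient.lower()
        if recipient in self.known:
            return None  # normal behavior, nothing to flag
        for addr in self.known:
            if SequenceMatcher(None, recipient, addr).ratio() >= similarity_threshold:
                return f"'{recipient}' looks unusually similar to known contact '{addr}'"
        return f"'{recipient}' has never been emailed by this user before."

baseline = RecipientBaseline()
baseline.learn(["jane.doe@clientco.com", "finance@clientco.com", "team@ourfirm.com"])
print(baseline.flag("jane.doe@clientc0.com"))  # likely typo of a known contact
print(baseline.flag("team@ourfirm.com"))       # known contact, returns None
```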

Of course, for truly comprehensive security, it is important to both prevent and protect. Even when an email is sent to the right person, today’s regulations mandate strict encryption to prevent interception or mishandling once received. Machine learning technology can identify potential risk factors and automatically apply or recommend the necessary level of security, from simply ensuring that transport layer security (TLS) is enabled to employing message-level encryption. This type of contextual security has become increasingly important as laws like GDPR and CCPA become more common.
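
As a rough illustration of what "apply or recommend the necessary level of security" could look like, the hypothetical sketch below scores a message on a handful of contextual risk signals and maps the score to a protection level. The markers, weights and thresholds are assumptions made for this example; real human layer security products use learned models rather than fixed rules like these.

```python
# Hypothetical sketch of risk-based protection: score an outgoing message on a few
# contextual signals and recommend a minimum level of protection. Signals, weights
# and levels are invented for illustration, not drawn from any real product.
SENSITIVE_MARKERS = ("ssn", "passport", "account number", "diagnosis", "salary")

def recommend_protection(body, recipient_domain, trusted_domains, has_attachment):
    risk = 0
    if any(marker in body.lower() for marker in SENSITIVE_MARKERS):
        risk += 2  # message appears to contain regulated personal data
    if recipient_domain not in trusted_domains:
        risk += 1  # leaving the organization or a known partner
    if has_attachment:
        risk += 1  # attachments often carry the bulk of sensitive data

    if risk >= 3:
        return "message-level encryption (recipient must authenticate to read)"
    if risk >= 1:
        return "enforce TLS for transport"
    return "standard delivery"

print(recommend_protection("Attached is the salary review for Q3.",
                           "partner.example", {"ourfirm.com"}, has_attachment=True))
```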

Fixing Mistakes Before They Happen

Risk analysis is not revolutionary on its own, of course — human users have been making these judgment calls for quite some time and can continue to do so by manually checking the points below (a simple rule-based version of these checks is sketched after the list):

• The correct recipients are added.

• The correct files are attached and “hidden” information in extra tabs and metadata is removed.

• They’re replying to legitimate senders.

• The necessary encryption is applied.
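
For illustration, the hypothetical sketch below expresses a few of these manual checks as a simple rule-based pre-send gate. The function, field names and rules are invented for this example; in practice, a contextual machine learning layer does the equivalent work from learned baselines rather than requiring users to declare their expectations in advance.

```python
# Hypothetical sketch: part of the manual checklist above expressed as a pre-send gate.
# All names and rules are invented for illustration.
def presend_checks(recipients, known_contacts, attachments, expected_keyword,
                   encryption_enabled):
    warnings = []
    for addr in recipients:
        if addr.lower() not in known_contacts:
            warnings.append(f"Unfamiliar recipient: {addr}")
    for name in attachments:
        if expected_keyword and expected_keyword.lower() not in name.lower():
            warnings.append(f"Attachment '{name}' does not match the expected document")
    if not encryption_enabled:
        warnings.append("No encryption selected for this message")
    return warnings

issues = presend_checks(
    recipients=["jane.doe@clientc0.com"],
    known_contacts={"jane.doe@clientco.com"},
    attachments=["board_minutes_internal.xlsx"],
    expected_keyword="invoice",
    encryption_enabled=False,
)
for issue in issues:
    print("WARNING:", issue)
```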

The benefit of technologies like contextual machine learning is the safety net they provide when people inevitably fail to spot risks. With email-driven breaches continuing to dominate the cyberattack landscape, the need to implement strong human-layer protections is likely to grow in importance as we move into the future.
