

Microsoft’s 5 Latest Updates On Data Security

Kyle Alspach

The company announces updates to its Purview data protection platform, including a new machine learning-driven solution meant to help organizations combat insider threats.

Adaptive Protection

The new Adaptive Protection solution for Microsoft Purview aims to address several data security issues at once, including overly broad data loss prevention (DLP) controls and high levels of alert “noise” from risk detection tools. Ultimately, the goal is to help organizations better target their protections against insider threats such as data theft and tampering, according to Microsoft.

Adaptive Protection uses Purview’s Insider Risk Management machine learning (ML) technology, which learns what constitutes normal and abnormal behavior in how users interact with data. The ML-powered tool can then identify high-risk actions that could lead to a data security incident and automatically tailor DLP controls to the detected level of risk, Microsoft said. As a result, organizations can make their DLP policies far more dynamic, ensuring that only the riskiest users are blocked from sharing data while low-risk users continue sharing as normal, according to the company.
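
Microsoft has not published the underlying enforcement logic, but the risk-tiered behavior it describes can be sketched in a few lines of Python. Everything below, including the tiers, the thresholds and the dlp_action_for function, is an illustrative assumption, not Purview’s actual API:

    from enum import Enum

    class DlpAction(Enum):
        ALLOW = "allow"   # low risk: user keeps sharing data as normal
        WARN = "warn"     # elevated risk: surface a policy tip
        BLOCK = "block"   # high risk: block the share outright

    def dlp_action_for(risk_score: float) -> DlpAction:
        # Hypothetical thresholds; in Purview the tiers come from
        # Insider Risk Management's ML risk scoring, not fixed numbers.
        if risk_score >= 0.8:
            return DlpAction.BLOCK
        if risk_score >= 0.5:
            return DlpAction.WARN
        return DlpAction.ALLOW

    for user, score in [("low-risk user", 0.2),
                        ("elevated user", 0.6),
                        ("high-risk user", 0.9)]:
        print(user, "->", dlp_action_for(score).value)

The point of the tiering is that the blocking control applies only at the top of the scale, so low-risk users never encounter it.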

Adaptive Protection is a unique data security solution because it offers the “right protection for the right time” by adapting to user behavior, Rudra Mitra, Microsoft’s corporate vice president for data security, compliance and privacy, told CRN. The solution can interpret the context in which data is handled, and “the controls are informed by how the data is handled,” he said.

That includes tracking behavioral signals to determine what is normal and what is not, which is what enables the solution’s adaptability, Mitra said. “So you’re not trying to predict for an organization, ‘Do I protect the data at the highest level or the lowest level?’ You set it where you want, and then it changes as it learns more over time about what normal handling for the data is, and what abnormal starts to look like.”
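
A common way to implement this kind of “learn what normal looks like over time” behavior is a per-user rolling baseline that scores how far each new observation deviates from history. The sketch below uses a simple z-score; it is a generic anomaly-detection illustration, not a description of Microsoft’s model, and the window size and warm-up threshold are assumptions:

    from collections import deque
    from statistics import mean, stdev

    class BehaviorBaseline:
        # Rolling per-user baseline for one activity metric,
        # e.g., files shared per day; scores each new observation.
        def __init__(self, window=30):
            self.history = deque(maxlen=window)

        def score(self, value):
            if len(self.history) < 5:      # no baseline yet: treat as normal
                self.history.append(value)
                return 0.0
            mu, sigma = mean(self.history), stdev(self.history)
            z = 0.0 if sigma == 0 else (value - mu) / sigma
            self.history.append(value)
            return z                       # large z = abnormal for this user

    baseline = BehaviorBaseline()
    for files_shared in [4, 5, 6, 5, 4, 5, 60]:  # final day is a spike
        print(files_shared, round(baseline.score(files_shared), 2))

Because the baseline is per user, the same action can score as normal for one person and abnormal for another, which is the property that lets controls tighten only where risk actually rises.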

The user behavioral signals are anonymized in order to protect user privacy, Mitra noted.
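
Microsoft has not detailed the anonymization mechanism, but behavioral signals are commonly anonymized by pseudonymizing identifiers before analysis, for example with a keyed hash. A minimal sketch, assuming an HMAC-based approach with a hypothetical org-held key:

    import hmac, hashlib

    SECRET_KEY = b"org-held key, rotated regularly"  # hypothetical

    def pseudonymize(user_id: str) -> str:
        # Keyed hash: the same user always maps to the same token, so
        # behavior can still be correlated over time, but the identity
        # is hidden from anyone without the key.
        return hmac.new(SECRET_KEY, user_id.encode(),
                        hashlib.sha256).hexdigest()[:16]

    print(pseudonymize("alice@contoso.com"))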

Kyle Alspach

Kyle Alspach is a Senior Editor at CRN focused on cybersecurity. His coverage spans news, analysis and deep dives on the cybersecurity industry, with a focus on fast-growing segments such as cloud security, application security and identity security. He can be reached at kalspach@thechannelcompany.com.
