Video Transcript:
- Up next, with more and more organizations now supporting hybrid work scenarios and managing employee turnover, we’re going to kick off a new series on the latest privacy-enabled insider risk management solutions that are now part of Microsoft Purview. We’re going to show you how these use machine learning to automatically identify high-risk incidents such as data theft, and, importantly, how they provide the context and workflow you need for the right stakeholders to take appropriate measures. So I’m joined today by Talhah Mir from the Microsoft Purview engineering team. Welcome to Mechanics.
- Thanks Jeremy. Great to be here.
- And it’s great to have you on to kick off our series. You know, I mentioned that there are two major trends we’ve seen over the past two years: the move towards hybrid work and what’s been called the great resignation. For example, Microsoft’s latest Work Trend Index report shows that 43% of employees overall and 52% of Gen Z are looking to change employers over the next year. So what impact does this have on information protection?
- So these trends really highlight the need for more internal focus as you think about protecting your information. We’re used to thinking about data exposure caused by external threats from cyber attacks, or insider threats from rogue admins, but less so about insider risks introduced by everyday employees. And this is an expanding problem that most organizations are worried about. Just to give you an idea, in a survey we conducted with Carnegie Mellon’s Security and Privacy Institute, we found that almost two-thirds of organizations had experienced more than five malicious insider threat incidents over the past year. At the same time, around 60% had experienced more than 10 inadvertent data exposure incidents caused by employees. And it goes without saying that your data exposure risk increases the more departing employees you have, and the more people joining your organization who might not be familiar with the security and compliance requirements as they work with data and communicate.
- And just to be clear, when we say malicious insider threats, we’re mainly talking about people deliberately walking away with company data, which can cost organizations sometimes millions of dollars per incident. Now, you also touched earlier on another factor, which really expands on how we traditionally think about insider risk, which is employee communications.
- That’s right. When you look at things like burnout, which is an underlying reason why somebody leaves an organization, you find a direct correlation with somebody venting on a communication platform. That, in turn, leads to things like an unhealthy or toxic workplace, or worse, can lead to workplace harassment.
- And these topical examples of potential data theft, leakage, and also risky communications are some of the most prevalent insider risk scenarios we see today. You know, insider risk, though, can also span a broader spectrum of things, including inappropriate behavior, insider trading, and much more. So how do you even begin to address this area and really maintain that level of individual privacy required?
- Well, it’s not a trivial effort. Think about the number of people and teams accessing resources from anywhere in a hybrid work scenario and the volume of people that may be cycling in and out of an organization. Insider risk management should really be part of a zero-trust approach for the protection of your data and communications, where the main goal is to assume breach and always verify. When you look at data exposure, most people will be aware of data loss prevention, or DLP, that protects users from data leakage risks through policy tips and automates specific policy-based transactions like blocking the copying of sensitive information.
- And we’ve had data loss prevention for some time now. In fact, we’ve covered it a lot on Microsoft Mechanics, and it’s really something everyone watching should be taking advantage of.
- It’s a super important part of your content-focused defense-in-depth strategy. But here’s the thing: data doesn’t move itself. People move data. And this is an important distinction in our approach, because it’s not just about detecting that something bad has happened, but also about establishing context. For that, we have two main solutions: insider risk management, which helps you identify and act on malicious or inadvertent user activities, and communication compliance, for the detection of code-of-conduct violations or risky communications. Both are part of the Microsoft Purview family of solutions that help you govern, protect, and manage your data estate. Now, you asked earlier how we maintain individual privacy. Let’s take the example of insider risk management. We take a privacy-first approach, using machine learning to parse anonymized user activity in aggregate and at scale, across both Microsoft and non-Microsoft services. This reasons over activity signals to aggregate and sequence anomalous behavior patterns that accrue to risk, connecting the dots back to a pivotal event like a resignation recorded in your HR systems. Importantly, we provide a secure and compliant workflow that allows organizations to bring together the right teams with the right permissions, from SecOps, human resources, and legal, as necessary to take action. All actions taken are fully auditable, and we give you full control to establish what risk means for your organization and the types of activities that are important to flag. It’s this level of visibility and context that helps you establish a zero-trust stance, and you don’t have to run any agents locally on the user’s machine.
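To make the privacy-first idea concrete, here is a minimal sketch of how activity signals could be grouped under stable pseudonymous tokens so that reviewers reason over behavior patterns without seeing identities. This is an illustrative toy, not Purview’s actual implementation; the function names, the salting scheme, and the sample data are all hypothetical.

```python
import hashlib
from collections import defaultdict

def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a real user ID with a stable, anonymized token (hypothetical scheme)."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:12]

def aggregate_signals(events, salt="org-secret"):
    """Group raw activity signals per anonymized user.

    Each event is a (user_id, activity) tuple; the output maps
    anonymized tokens to the list of that user's activities.
    """
    buckets = defaultdict(list)
    for user_id, activity in events:
        buckets[pseudonymize(user_id, salt)].append(activity)
    return dict(buckets)

# Sample activity signals (made-up identities and events)
events = [("alice@contoso.com", "usb_copy"),
          ("alice@contoso.com", "cloud_upload"),
          ("bob@contoso.com", "print")]
agg = aggregate_signals(events)
```

The same salt always produces the same token, so anomalous patterns can still be correlated per user over time, and identities are only resolved later by authorized roles.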
- It’s super powerful, but you know, I’d love to see this in action and really walk through an example.
- Sure, so most organizations would probably want to start by understanding and quantifying the level of insider risk in their organization. For that, I’ll start with insider risk management, which you can get to from the Microsoft Purview portal. Under analytics, you’ll see we give you an aggregate view of anonymized user activities to help you quantify the level of risk inside your organization and see data exfiltration patterns, which can in turn help you decide on and prioritize the types of policies you want to put in place. We can see the percentage of data exfiltration activities by users. And if I scroll down, you can also see the top exfiltration activities in play, from files copied to USB, to emails being sent outside of the organization, and more. I can also dig into the next level of detail, such as here with potential data leak activities. Notice that the percentage of risky activity is low in this case, and that’s because typically only a small percentage of people are doing something they shouldn’t. That said, what analytics does is help you assess the risk profile of your entire user base and quickly see where you may have issues, so that you can get to managing those risks as soon as possible.
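The analytics roll-up described here (share of activities that are exfiltration, plus the top exfiltration activities) can be sketched in a few lines. This is a simplified illustration under assumed activity names, not the product’s real analytics pipeline.

```python
from collections import Counter

def exfiltration_summary(activities, exfil_types):
    """Compute what share of logged activities are exfiltration,
    and rank the exfiltration activities by frequency."""
    counts = Counter(activities)
    exfil = {a: n for a, n in counts.items() if a in exfil_types}
    total = sum(counts.values())
    pct = 100.0 * sum(exfil.values()) / total if total else 0.0
    top = sorted(exfil.items(), key=lambda kv: kv[1], reverse=True)
    return pct, top

# Hypothetical activity log: mostly benign opens, a few risky events
log = ["file_open"] * 90 + ["usb_copy"] * 6 + ["external_email"] * 4
pct, top = exfiltration_summary(log, {"usb_copy", "external_email", "cloud_upload"})
# pct -> 10.0; top -> [("usb_copy", 6), ("external_email", 4)]
```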
- Right, and without this, it can take months to be able to assemble all that information and really contain those risks. And compared to traditional user entity behavioral analytics solutions, this isn’t resource intensive, and you aren’t having to capture, then anonymize, then stitch together all the user activities and have to make sense of them yourself.
- That’s right. It’s all done for you. You can get started with literally one click, you’ll see reports like this for your organization within 48 hours, and those reports are refreshed daily. Now I’m going to take you to a summary view of your insider risk management dashboard. Here you can see the alerts needing review and the active cases. At the top, there are tabs for Alerts, Cases, Policies, Users, and Notice templates. I’ll click into Alerts, and here’s where I get an aggregate view of alerts and the severity of each one. Again, notice the display names of users are anonymized by default, not just to protect privacy, but also to prevent conflicts of interest or bias, which can arise if you see an alert for a friend or a colleague. If I click into one of these alerts, I can see the activities. On this view, I can clearly see the event that caused this user to come in scope for a policy. In this case, the user submitted their resignation. I can also clearly see the activity that generated this alert. In this case, it was the downloading of content from a website marked as unallowed. If I look down further, under the cumulative exfiltration activities card, I can see the aggregate detector, which is an assessment in aggregate of the level of risk. This exposes activities over time that accumulate risk, and here our risk score is 82/100. And this is important, because most insider risk solutions only catch what I call the convenience-store grab-and-go thief. That’s the person who comes in, grabs everything off the shelf, and tries to run out the door. They make enough noise in the process that they’re relatively easy to detect. What’s harder to detect is what we call the low-and-slow thief. It’s their gradual set of actions, where one file may be copied to USB, another printed, and another uploaded to cloud storage, all across many days. On their own, these activities may not raise an alert. But our aggregate detectors can assess the risks across time and exfiltration dimensions and present this aggregate risk so you can decide whether to investigate further.
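The low-and-slow idea can be sketched as a score that accumulates over a time window, so many individually unremarkable events add up to a reportable risk. The weights, window, and activity names below are invented for illustration; this is not how Purview actually scores risk.

```python
from datetime import datetime, timedelta

# Per-event weights for exfiltration channels (illustrative values only)
WEIGHTS = {"usb_copy": 15, "print": 10, "cloud_upload": 20}

def cumulative_risk(events, window_days=30, cap=100):
    """Score risk accumulated inside a trailing time window.

    events: list of (timestamp, activity). A single event rarely
    crosses an alert threshold, but many small ones inside the
    window add up -- catching the 'low and slow' pattern.
    """
    if not events:
        return 0
    cutoff = max(ts for ts, _ in events) - timedelta(days=window_days)
    score = sum(WEIGHTS.get(act, 0) for ts, act in events if ts >= cutoff)
    return min(score, cap)

# One small exfiltration event every three days -- quiet, but cumulative
t0 = datetime(2024, 1, 1)
drip = [(t0 + timedelta(days=i * 3), act)
        for i, act in enumerate(["usb_copy", "print", "cloud_upload",
                                 "usb_copy", "print", "cloud_upload"])]
score = cumulative_risk(drip)  # all six events fall in the 30-day window
```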
- And importantly, it’s filtering out all the unnecessary noise and avoiding false flags, so you can see what truly needs your attention.
- That’s the most important part. Insider risk management is all about accelerating time to action: presenting the right information in the right context so you can make an effective decision. Now, if I want to, I can convert this alert to a case, which essentially moves it into an active investigation. But to save time, I’ll show you what happens next with another case already in progress from the Cases tab. I’ll open this case and jump into the user activity view, and on the left is the detailed list of activities over the last three months with their associated risk scores. I can also click into the visual to see the details for each activity, and on the right is the sequence that was identified for this user right before they submitted their resignation. Here, we see they downloaded 45 files, renamed several of them, printed them, and then deleted those files as if to cover their tracks. Each of these activities, again, might not be concerning if we saw it happening independently. But with sequence detection, we have the full context on what was happening. Seeing that this user renamed files, exfiltrated them, and then deleted them right before submitting their resignation shows intent. Importantly, we have the processes and workflows built in. If I go over to Case notes, it looks like the HR team has already evaluated this. I’m not even going to bother adding any notes, but instead jump right into action. This is a case I want escalated to my legal team to investigate immediately. I do that by going to Case actions and clicking on Escalate for investigation, which will collate all the evidence collected and open a case in the eDiscovery solution, so the legal team can start to preserve the evidence to prepare for law enforcement escalation or litigation. This is important because the hardest part about eDiscovery is the discovery bit. So the end-to-end nature of this solution takes that into account and helps you start with all the necessary evidence.
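Sequence detection of this kind can be illustrated as an ordered-subsequence check over the activity stream: the suspicious steps must occur in order, but other events may happen in between. A minimal sketch, with hypothetical activity names, not the actual detector:

```python
def find_sequence(activities, pattern):
    """Return True if `pattern` occurs as an ordered subsequence
    of the activity stream (other events may interleave)."""
    it = iter(activities)
    return all(step in it for step in pattern)

# Hypothetical activity stream leading up to the resignation
stream = ["file_download", "browse", "file_rename", "email",
          "print", "file_delete", "resignation_submitted"]

# The intent-revealing order of events described above
EXFIL_PATTERN = ["file_download", "file_rename", "print",
                 "file_delete", "resignation_submitted"]

hit = find_sequence(stream, EXFIL_PATTERN)
```

The `step in it` idiom consumes the iterator as it searches, which is what enforces ordering: a stream with the same events in a different order would not match.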
- And I really like how this is bringing together the entire stakeholder team. But how do permissions work with something like this? It’s probably not right, for example, for the admin team to be able to see all the details, and maybe the files as well. So how does that work?
- You’re right, so access is governed by role-based access control. You can segment users and their level of access so that, for example, an investigator can see all of the activities as well as explore the actual content of the files. This role is typically reserved for somebody on the legal team of a given organization. An analyst, on the other hand, cannot explore the underlying content; they can only see the user activity, along with all the aggregate and sequence detection insights that the system generated.
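The investigator-versus-analyst split can be sketched as a simple role-to-permission mapping. The role names echo the ones mentioned above, but the permission model itself is an invented illustration, not Purview’s actual RBAC implementation.

```python
from enum import Enum, auto

class Permission(Enum):
    VIEW_ACTIVITY = auto()  # activity timeline and detection insights
    VIEW_CONTENT = auto()   # the underlying files and message bodies
    SEE_IDENTITY = auto()   # real usernames instead of pseudonyms

# Hypothetical role definitions mirroring the analyst/investigator split
ROLES = {
    "analyst": {Permission.VIEW_ACTIVITY},
    "investigator": {Permission.VIEW_ACTIVITY, Permission.VIEW_CONTENT,
                     Permission.SEE_IDENTITY},
}

def can(role: str, permission: Permission) -> bool:
    """Check whether a role grants a permission; unknown roles get nothing."""
    return permission in ROLES.get(role, set())

ok = can("investigator", Permission.VIEW_CONTENT)   # True
blocked = can("analyst", Permission.VIEW_CONTENT)   # False
```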
- So that’s insider risk management, but you also mentioned the importance of communication. So can we switch gears to communication compliance?
- Sure. Chances are that an employee who resigns may be quite disgruntled on the way out. Here in communication compliance, you get a similar view of the communication risks inside your organization that you can investigate further. Similar to analytics in insider risk management, we’ve built an aggregate assessment of potential violations. Here, we can see instances of targeted harassment and threats, as well as instances of sensitive information being shared. Using built-in policy templates, you can quickly create policies in just a few clicks to start detecting these risks. Let’s look at a case that shows early indications of employee disgruntlement, here with this Inappropriate Text policy. In this case, I’m logged in as an investigator, so I can see usernames; if I were an analyst, that role would have pseudonymization on by default to protect identities and privacy. I’ll look at the pending items, and you can see the two matches for Diego. This one is in a language I don’t understand, but luckily we’ve got built-in translation to automatically translate the message. In this case, French text has been translated into English. That’s a troubling message. And below that match, it looks like an image was also shared. Using built-in optical character recognition, or OCR, we can pick up violations that occur in text or handwritten notes contained in images, PDFs, and more. And if I go back to the first match and look at the Conversation tab, I can get even more context around what’s happening. Clearly this employee is showing signs of disgruntlement and might be a flight risk, which, as you saw earlier, may have other ramifications for your data risk.
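At its simplest, a policy like the Inappropriate Text one can be thought of as a set of patterns evaluated against each message, with every match surfaced for review. The patterns below are toy examples; real detection in the product is far more sophisticated (classifiers, translation, OCR), so treat this purely as a mental model.

```python
import re

# Toy "inappropriate text" policy: a list of illustrative patterns
PATTERNS = [re.compile(p, re.IGNORECASE) for p in [
    r"\bhate (this|my) (job|team)\b",
    r"\byou('ll| will) regret\b",
]]

def evaluate_message(text):
    """Return the policy patterns this message matches (empty list if clean)."""
    return [p.pattern for p in PATTERNS if p.search(text)]

matches = evaluate_message("Honestly, I hate this job and everyone here.")
```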
- Right, and the impact of this can go beyond that disgruntled employee. It can also hurt the morale of everyone they’re working with, everyone around them. In our recent Workplace Health Index report, we saw that 46% of people cite positive workplace culture as a key reason for staying in their current job. It ranks even higher than benefits, time off, or flexible work hours.
- Right, this could play a key part in your overall strategy to maintain a healthy work environment. And really, this is just the tip of the iceberg for what communication compliance can detect, which is something we’ll explore in another episode.
- So between the two solutions that you showed today, you can really take a broad and holistic approach to insider risk management. And if you’re wondering how to configure these solutions and the underlying workflows and policies for your organization, we’re going to go hands-on with the implementation in upcoming episodes on our playlist that you can find at aka.ms/InsiderRiskMechanics. So for anyone who’s watching right now, what do you recommend they do to get started?
- Well, both of these solutions are available today and you can learn more by going to aka.ms/insiderriskdocs. Also, if you’re using Microsoft 365 E3, you can sign up for a trial at aka.ms/PurviewTrial to get started.
- Thanks so much, Talhah, for the overview on how we can help solve for insider risk management. Of course, keep watching Microsoft Mechanics for more in the series, and subscribe if you haven’t already. And as always, thank you so much for watching.