Securing Data Governance with Microsoft Purview

AI tools are brilliant for productivity, but are they your business’s biggest blind spot? 

You’ve probably heard of shadow IT—that’s when people use systems, devices, software, or services without the approval of the IT department. Now we’re facing a new challenge: shadow AI. 

This occurs when employees use AI tools like ChatGPT, Gemini, or GitHub Copilot without proper oversight, potentially exposing sensitive company data to third-party systems with zero visibility or control. 

This trend exposes some pretty severe risks that many companies aren’t equipped to handle—but Microsoft Purview is one potential solution. 

Our Principal Architect, Elliott, has created a short walkthrough video about how Microsoft Purview data governance works. If you want to know how to keep your company data safe when using AI, watch the video below, or read on to learn more. 

The hidden dangers of unmonitored AI

AI-powered work tools are fantastic. Almost everyone wants to use them—not just your IT teams. 

Whether that’s generating sales emails based on customer data, making sense of massive spreadsheets in Excel, or sharing Studio Ghibli memes through Teams—everyone’s at it (around 75% of knowledge workers currently use some sort of AI each day). 

But the problem is that if they’re using unapproved tools like ChatGPT, Gemini, or GitHub Copilot, you might be handing your data to someone else’s model with zero visibility. 

Think about it. When was the last time you copied work data into an AI tool? Now imagine that’s: 

  • A junior engineer copying config files through Cursor 
  • An HR manager replying to a complaint with Gemini 
  • A lawyer reviewing contracts in ChatGPT 

That’s not just a data leak—that could be a GDPR breach, IP loss, or a customer trust disaster waiting to happen. What’s worse, most organisations don’t even know this is happening and can’t answer the most basic question: Where is our sensitive data going, and who’s using AI to move it? 

It’s pretty unlikely that any policy or training session is going to stop this completely. Let’s be honest—people will always choose the quickest path to get work done. 

So what can we do about it? 

Microsoft Purview shows how your data is being fed to AI 

This is where Microsoft Purview DSPM (Data Security Posture Management) comes in. It’s a suite of tools designed to help businesses protect their data—brilliant for those running cloud infrastructure through Azure. 

Purview DSPM for AI tackles this data problem head-on by giving your security teams insight into: 

  • Which AI tools are being used across your organisation 
  • Where sensitive data is flowing 
  • What your level of risk exposure is across your systems 

It’s not trying to stop people using AI. The point is to give you visibility and control, so you can enable your teams to do really great work (and have a bit of fun along the way) without letting security fall through the cracks. 

Let’s take a quick tour around the dashboard to see what it can do for you.  

Microsoft Purview Portal

How the Microsoft Purview Portal works for data governance

To get started with Purview DSPM, you’ll need a Microsoft 365 E5 or E5 Compliance licence. Once that’s sorted, you’ll have access to a dashboard with several key areas. 

Overview 

When you first log into the Microsoft Purview DSPM dashboard, you’ll be greeted with a neat overview that gives you immediate insight into AI usage across your organisation. 

Here you’ll find: 

  • A ‘getting started’ section that guides you through initial setup steps, which might include deploying browser extensions or onboarding devices into Purview 
  • Some key metrics showing AI usage across your organisation 
  • Indicators of potential risk areas that need your attention 
  • Quick links to the most common actions you’ll need to take for strong data governance 

Microsoft Purview DSPM for AI

From here, you can use the menu on the left to visit any of the following sections. 

Recommendations 

In the Recommendations section, you’ll find insights into the best actions you can take right now. They’re automatically generated based on your organisation’s current security posture, and the system tracks what you’ve completed and what you’re yet to tackle. 

Things you might find here are: 

  • Suggestions for fortifying data security where vulnerabilities have been detected 
  • Recommendations to bring in specific policies that prevent users from accessing unapproved AI tools 
  • Alerts highlighting departments or teams that are showing particularly high-risk behaviour 
  • Step-by-step guidance for addressing identified security gaps 

One example shown on Microsoft’s Purview blog is how the system can recommend controls for unethical behaviour in AI. It can detect someone using Copilot in a way that matches a certain classifier tag (“regulatory collusion, stock manipulation, unauthorised disclosure, money laundering, corporate sabotage, sexual, violence, hate”, etc.) and notify you, so you can decide how to proceed. 
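To make that idea concrete, here’s a minimal Python sketch of how a prompt-classification rule might flag risky AI usage. The categories echo the classifier tags above, but the keyword matching, function names, and sample data are invented for illustration; Purview’s real classifiers are machine-learning models, not keyword lists.

    # Toy prompt classifier: simple keyword spotting, for illustration only.
    # (Purview's real classifiers are ML models, not keyword lists.)
    RISK_CATEGORIES = {
        "unauthorised disclosure": ["confidential", "internal only", "do not share"],
        "stock manipulation": ["pump the stock", "inflate the share price"],
    }

    def classify_prompt(prompt: str) -> list[str]:
        """Return every risk category whose keywords appear in the prompt."""
        text = prompt.lower()
        return [
            category
            for category, keywords in RISK_CATEGORIES.items()
            if any(keyword in text for keyword in keywords)
        ]

    # Example: a prompt pasted into an AI tool triggers a notification.
    matches = classify_prompt("Summarise this internal only board report")
    if matches:
        print(f"Notify security team: prompt matched {matches}")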

Microsoft Purview DSPM for AI Recommendations

 

Reports 

The Reports section gives you detailed analytics on AI adoption and usage within your business. You’ll find some handy graphs and visualisations on things like which AI tools are being used most frequently (for example, you might see increasing use of Copilot, ChatGPT or other services). 

You can also see user adoption rates over time, letting you track how quickly AI tools are spreading through your organisation on a departmental basis. 

You can also get data on unprotected sensitive assets—those that aren’t covered by a DLP (data loss prevention) policy that stops their exfiltration, or those that don’t have a sensitivity label that controls access to them. 
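As a rough illustration of that “unprotected sensitive assets” idea, here’s a small Python sketch that flags assets missing either a DLP policy or a sensitivity label. The Asset structure and its fields are assumptions made up for this example; they aren’t Purview’s actual data model.

    from dataclasses import dataclass

    # Hypothetical asset record; not Purview's real data model.
    @dataclass
    class Asset:
        name: str
        is_sensitive: bool
        has_dlp_policy: bool
        sensitivity_label: str | None

    def unprotected_sensitive_assets(assets: list[Asset]) -> list[Asset]:
        """Sensitive assets missing a DLP policy or a sensitivity label."""
        return [
            a for a in assets
            if a.is_sensitive and (not a.has_dlp_policy or a.sensitivity_label is None)
        ]

    inventory = [
        Asset("payroll.xlsx", True, False, None),
        Asset("brand-guidelines.pdf", False, False, None),
        Asset("contracts.docx", True, True, "Confidential"),
    ]
    for asset in unprotected_sensitive_assets(inventory):
        print(f"Needs attention: {asset.name}")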

Policies 

Policies are at the heart of Microsoft Purview’s data protection capabilities. In this section, you can create and apply various policies that restrict users from copying and pasting data into AI tools (or simply alert you when it happens). 

Here, you can manage: 

  • Data leak prevention policies that automatically detect when sensitive information (like credit card numbers, passport details, or proprietary code) is being copied into AI tools and can either block the action or provide a warning to users 
  • Tool usage policies that allow you to specify which AI tools are approved for business use and alert or block when employees attempt to use non-sanctioned services 
  • Conditional access policies that enable different levels of AI tool access based on user roles, departments, or data sensitivity levels 
  • Data classification policies that automatically identify and tag sensitive information across your organisation, making it easier to control what can be shared with external AI services 
  • Notification policies that alert security teams in real-time when high-risk activities are detected, allowing for swift intervention 

These policies can be customised with different severity levels, from simply logging activities so you can review them later, to actively preventing data from leaving your secure environment. You can potentially start with looser monitoring policies to understand usage patterns before implementing stricter controls. 
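To show the kind of check a DLP rule performs behind the scenes, here’s a short Python sketch in the spirit of the credit-card example in the first bullet above: it finds candidate card numbers with a regular expression, then validates them with the Luhn checksum to cut false positives. Purview DLP ships with built-in sensitive information types for this; the code below is purely illustrative.

    import re

    # Candidate card numbers: 13-16 digits, optionally separated by spaces/dashes.
    CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

    def luhn_valid(number: str) -> bool:
        """Luhn checksum: filters out most random digit strings."""
        digits = [int(ch) for ch in number if ch.isdigit()]
        total = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:  # double every second digit from the right
                d *= 2
                if d > 9:
                    d -= 9
            total += d
        return total % 10 == 0

    def contains_card_number(text: str) -> bool:
        return any(luhn_valid(m.group()) for m in CARD_PATTERN.finditer(text))

    # Example: what a "block and warn" rule might check before allowing a paste.
    if contains_card_number("Card: 4539 1488 0343 6467"):
        print("Sensitive info detected: block the paste and warn the user")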

Activity explorer 

Here you’ll find similar information to the above sections but drilled down into more granular detail about specific actions taken by specific users. 

You can find: 

  • Detailed logs showing where users have used AI tools and what kind of information they’ve inputted 
  • Filters that let you drill down by user, department, AI service, or time period 
  • The ability to see exactly which information was shared with external AI services 
  • Indicators highlighting when sensitive information has potentially been exposed 
  • Timeline views that help identify unusual patterns or potential security incidents 
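If you’re curious what that drill-down looks like in principle, here’s a small Python sketch that filters a list of AI-activity events by department, service, time, and sensitivity. The event shape and field names are invented for the example; real events come from Purview’s audit pipeline.

    from datetime import datetime

    # Hypothetical event records; real data comes from Purview's audit pipeline.
    events = [
        {"user": "j.smith", "dept": "Engineering", "service": "ChatGPT",
         "time": datetime(2025, 5, 1, 9, 30), "sensitive": True},
        {"user": "a.patel", "dept": "HR", "service": "Copilot",
         "time": datetime(2025, 5, 1, 11, 5), "sensitive": False},
    ]

    def filter_events(events, dept=None, service=None, since=None, sensitive_only=False):
        """Yield events matching every filter that was supplied."""
        for e in events:
            if dept and e["dept"] != dept:
                continue
            if service and e["service"] != service:
                continue
            if since and e["time"] < since:
                continue
            if sensitive_only and not e["sensitive"]:
                continue
            yield e

    # Example drill-down: sensitive AI usage within Engineering.
    for e in filter_events(events, dept="Engineering", sensitive_only=True):
        print(f'{e["time"]:%d %b %H:%M}  {e["user"]} -> {e["service"]}')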

Data assessments 

This section is currently in preview (beta), but you can still make use of it today. The Data Assessments section uses AI-based algorithms to identify when oversharing is present in your organisation. 

This feature automatically scans your SharePoint sites and other document repositories to identify potential security risks. It then conducts thorough risk assessments of documents and applications that might have been inappropriately shared outside your secure environment. 

Based on these assessments, you’ll receive tailored recommendations for improving your overall data security posture. Over time, you can track trending data to see if your security measures are becoming more or less effective—meaning you can refine your approach continuously. 
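To give a flavour of how an oversharing check might reason, here’s a toy Python sketch that flags documents whose sharing scope is broader than their sensitivity label should allow. The labels, scopes, and ranking are assumptions for illustration, not Purview’s actual assessment logic.

    # Hypothetical labels and sharing scopes, ranked from narrowest to widest.
    SCOPE_RANK = {"private": 0, "team": 1, "organisation": 2, "anyone-with-link": 3}
    MAX_SCOPE_FOR_LABEL = {
        "Highly Confidential": "private",
        "Confidential": "team",
        "General": "organisation",
        "Public": "anyone-with-link",
    }

    def is_overshared(label: str, scope: str) -> bool:
        """True if the sharing scope is wider than the label should allow."""
        allowed = MAX_SCOPE_FOR_LABEL.get(label, "private")  # unknown label: be strict
        return SCOPE_RANK[scope] > SCOPE_RANK[allowed]

    documents = [
        ("salaries.xlsx", "Highly Confidential", "anyone-with-link"),
        ("newsletter.docx", "Public", "anyone-with-link"),
    ]
    for name, label, scope in documents:
        if is_overshared(label, scope):
            print(f"Oversharing risk: {name} ({label}) shared at '{scope}' scope")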

Microsoft Purview DSPM Data Assessments (preview)

Stay safe with your superpowers 

Are AI tools your productivity miracle or your biggest security blind spot? With the right visibility and controls, they can be one of your greatest assets—don’t let them be a liability. Microsoft Purview DSPM gives you the tools to know what’s happening with your data as it interacts with AI. 

So what do you need to get up and running? 

To roll out the complete DSPM solution, you’ll need: 

  • Intune-enrolled or hybrid-joined devices already in place within your environment 
  • Browser extensions deployed for Edge and Chrome to monitor AI usage 
  • Optionally, integration with Purview DLP (Data Loss Prevention) for more comprehensive protection 

If you’d like more guidance on setting up Purview and using AI for safe data management, get in touch. Synextra’s elite cloud experts will be happy to show you the way. 

 

Article By: Elliott Leighton-Woodruff, Principal Architect