
Shadow AI is the quiet problem sitting behind most AI conversations: staff using AI tools without approval or oversight from IT or security teams.
With generative AI everywhere, staff copy and paste emails, contracts, customer data and financials into whatever AI tool feels easiest at the time. Many employees are already using AI at work without formal approval, and some are uploading sensitive information to unmanaged tools. (Check out our free AI Policy here.)
Buying Microsoft 365 Copilot or ChatGPT on its own does not solve this. In fact, if your tenant permissions are loose, Copilot can make the impact of shadow AI worse by surfacing data that was already overshared.
For New Zealand organisations running on Microsoft 365, the answer is not to ban AI, but to put proper guardrails in place and use the security features you already own.
At Layer3, we reduce shadow AI risk through Microsoft 365 policies and tooling, so we can roll out Copilot with control rather than hope. These controls are included as standard in every managed services agreement.
Common shadow AI behaviours inside a Microsoft 365 environment include:
Each of these creates real risk: data leakage, regulatory issues, loss of IP and a larger attack surface for threat actors to exploit.
Microsoft 365 Copilot does not create new permissions. It inherits whatever your identity, device and data policies already allow.
If any of the following are true, Copilot will happily work inside that reality:
That is why Copilot readiness is really about shadow AI readiness. If you do not tighten the tenant first, Copilot simply makes it easier for people to find content that should never have been accessible.
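To make that concrete, here is a toy sketch of the inheritance rule. This is not a Microsoft API; every name and document here is invented purely to show why Copilot only ever surfaces what a user could already open:

```python
# Toy model (not a Microsoft API): Copilot-style access reuses whatever
# permissions the user already has. All names and data are invented.

def user_can_read(user: str, item_acl: set) -> bool:
    """A document is readable if the user (or 'Everyone') is in its ACL."""
    return user in item_acl or "Everyone" in item_acl

def copilot_visible_items(user: str, tenant: dict) -> list:
    """Copilot surfaces exactly the items the user could already open."""
    return [name for name, acl in tenant.items() if user_can_read(user, acl)]

tenant = {
    "Board-minutes.docx": {"ceo", "cfo"},
    "Salaries-2024.xlsx": {"Everyone"},   # overshared by accident
    "Team-roster.xlsx": {"alex", "sam"},
}
print(copilot_visible_items("alex", tenant))
# → ['Salaries-2024.xlsx', 'Team-roster.xlsx']
```

Note that "alex" sees the overshared salary file without any new permission being granted: the exposure existed before Copilot, which just makes it easy to find.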
The good news is that most of the controls you need to manage shadow AI are already in Microsoft 365.
Microsoft Purview DLP policies can target Microsoft 365 Copilot and Copilot Chat. This means you can:
This is central to reducing shadow AI. You are not only controlling where data can be stored and shared, you are actively managing how AI can interact with that data.
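As a rough illustration of the kind of check a DLP rule performs, here is a simplified Python screen for two example sensitive information types. The patterns are simplified stand-ins we wrote for this sketch, not Microsoft's real detectors, and real Purview DLP works from its built-in sensitive information types rather than hand-rolled regexes:

```python
import re

# Illustrative only: a regex screen in the spirit of a DLP rule.
# These patterns are simplified stand-ins, not Purview's real detectors.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "nz_ird_number": re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b"),  # common IRD format
}

def dlp_screen(text: str) -> list:
    """Return the names of sensitive info types detected in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

print(dlp_screen("Summarise this: card 4111 1111 1111 1111"))
# → ['credit_card']
```

The point of the sketch is the shape of the control: content is inspected before an AI tool is allowed to touch it, and a match changes what the tool is permitted to do.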
Sensitivity labels in Microsoft Purview let you classify and protect data across emails, documents, Teams, SharePoint and OneDrive. They can enforce encryption, access controls, external sharing limits and visual markings.
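One way labels feed into AI governance is as an ordered hierarchy with a ceiling on what AI may process. The sketch below assumes a hypothetical four-level taxonomy and an invented "ceiling" policy; your tenant's label names and rules will differ:

```python
from enum import IntEnum

# Illustrative label hierarchy; the names are examples, not your tenant's labels.
class Label(IntEnum):
    PUBLIC = 0
    GENERAL = 1
    CONFIDENTIAL = 2
    HIGHLY_CONFIDENTIAL = 3

# Hypothetical policy: the highest label an AI tool may process.
AI_PROCESSING_CEILING = Label.CONFIDENTIAL

def ai_may_process(doc_label: Label) -> bool:
    """Allow AI access only up to the configured label ceiling."""
    return doc_label <= AI_PROCESSING_CEILING

print(ai_may_process(Label.GENERAL))              # → True
print(ai_may_process(Label.HIGHLY_CONFIDENTIAL))  # → False
```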
With the right label strategy you can:
Modern SharePoint and OneDrive include oversharing controls and access review capabilities that reduce the amount of “open by accident” content in your tenant.
Cleaning up years of open folders is one of the fastest ways to reduce shadow AI risk, because it reduces the pool of data that any AI tool can potentially surface.
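To show what that clean-up looks for, here is a small sketch that flags broad sharing links. The dict shape loosely mirrors the Microsoft Graph permission resource (whose link.scope can be "anonymous", "organization" or "users"), but the sample data is invented for illustration:

```python
# Sketch: flag risky sharing links. The dict shapes loosely mirror the
# Microsoft Graph "permission" resource; the sample data is invented.

RISKY_SCOPES = {"anonymous", "organization"}  # "anyone" links and org-wide links

def risky_permissions(perms: list) -> list:
    """Return sharing-link permissions that expose content broadly."""
    return [p for p in perms if p.get("link", {}).get("scope") in RISKY_SCOPES]

sample = [
    {"id": "1", "link": {"scope": "anonymous", "type": "view"}},
    {"id": "2", "link": {"scope": "users", "type": "edit"}},
    {"id": "3", "roles": ["read"]},  # direct grant, no sharing link
]
print([p["id"] for p in risky_permissions(sample)])
# → ['1']
```

Anything the sweep flags is content that any AI tool, sanctioned or not, could reach through the over-broad link.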
Conditional Access policies allow you to control:
Combined with app governance, you can block or monitor AI tools that try to connect to Microsoft 365 without going through your approval process.
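For illustration, here is a sketch of the JSON body such a policy takes when created via Microsoft Graph's Conditional Access endpoint (POST /identity/conditionalAccess/policies). The app ID is a placeholder, and you would verify the exact schema against the Graph documentation before using it:

```python
import json

# Sketch of a Conditional Access policy body in the shape used by
# Microsoft Graph (POST /identity/conditionalAccess/policies).
# The app ID passed in below is a placeholder, not a real application.

def block_unapproved_app_policy(display_name: str, app_ids: list) -> dict:
    """Build a report-only policy that blocks sign-in to the listed apps."""
    return {
        "displayName": display_name,
        # Start in report-only mode to observe impact before enforcing.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "users": {"includeUsers": ["All"]},
            "applications": {"includeApplications": app_ids},
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    }

policy = block_unapproved_app_policy(
    "Block unapproved AI tools",
    ["00000000-0000-0000-0000-000000000000"],  # placeholder app ID
)
print(json.dumps(policy, indent=2))
```

Starting in report-only mode is the important design choice: you see which users and apps the policy would have blocked before you switch it to enforced.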
Intune lets you enforce device compliance for access to Microsoft 365, and apply app protection policies to control what users can do with data on mobile and desktop.
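As a toy model (not the Intune API) of how a compliance bar combines with a "require compliant device" grant, where every field and rule below is invented for illustration:

```python
from dataclasses import dataclass

# Toy model (not the Intune API): access is granted only from devices that
# meet the compliance bar. Fields and rules here are invented examples.

@dataclass
class Device:
    encrypted: bool
    os_up_to_date: bool
    managed: bool  # enrolled in management

def is_compliant(d: Device) -> bool:
    """Example compliance bar: managed, encrypted, patched."""
    return d.managed and d.encrypted and d.os_up_to_date

def can_access_m365(d: Device) -> bool:
    """Mirrors a 'require compliant device' style access grant."""
    return is_compliant(d)

print(can_access_m365(Device(encrypted=True, os_up_to_date=True, managed=True)))   # → True
print(can_access_m365(Device(encrypted=True, os_up_to_date=True, managed=False)))  # → False
```

In other words, an unmanaged personal laptop never reaches tenant data in the first place, which shrinks the surface shadow AI tools can pull from. This matters for shadow AI because it helps you: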
This matters for shadow AI because it helps you:
Microsoft 365’s unified audit log, Purview eDiscovery and Copilot-specific logging give you visibility into how AI is being used. You can monitor:
Doing all of this manually for every Microsoft 365 tenant does not scale, especially once you add Copilot and shadow AI into the mix. That is why we have built a standardised Copilot readiness and governance service.

First, we run a Copilot readiness assessment that analyses:
From there, we map the findings into clear recommendations and a practical rollout plan.

Behind the scenes we use automation and policy templates to:
For our customers, that means:
If you are worried about shadow AI in your Microsoft 365 environment, here is a simple path forward:
Shadow AI will not disappear, but with the right Microsoft 365 policies and an MSP partner like Layer3, you can keep it in the light, reduce the risk, and still get the benefits of AI for your business.