Employees are now using more than 3,400 AI apps at work, most of them outside IT visibility. According to Zscaler, that surge is creating a serious shadow AI compliance and security problem, as sensitive company data flows into tools that many IT teams cannot fully monitor.
As Jay Chaudhry, CEO and Founder of Zscaler, said on the company's recent earnings call:
"Organizations are rapidly adopting AI to drive productivity and innovation, but doing so is creating new vulnerabilities, significantly expanding the attack surface and increasing cyber threats in scale, sophistication, and velocity, recasting AI from a productivity engine into a dangerous security threat."
The scale behind that warning can't be ignored. Zscaler said AI application usage across its customers has expanded to more than 3,400 apps, quadrupling over the last 12 months. Meanwhile, data transfers to AI applications exceeded 18,000 terabytes in 2025.
The company also reported that enterprise AI usage rose 91% year over year, while data transfers to AI and machine learning applications climbed 93%.
Securing Uncontrolled AI Usage at Scale
Zscaler is positioning its new AI Shield tools as a response to this shift, arguing that enterprise AI security must now focus as much on employee behavior and governance as on traditional cyber defense.
In its recent financial results, the company highlighted Zscaler AI Shield as a necessity for securing enterprise AI usage at scale. Zscaler is trying to give enterprises something most currently lack: visibility into how AI is actually being used. That means knowing which AI tools employees are using, controlling access, and monitoring how data flows into them.
This allows enterprises to move from reactive policy enforcement to proactive governance.
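The discovery step described above can be illustrated with a minimal sketch: classify outbound web traffic against a list of known AI app domains to separate sanctioned from shadow usage. The domain list, log format, and "sanctioned" set here are hypothetical examples for illustration, not Zscaler's actual data model.

```python
# Minimal sketch of shadow-AI discovery from web proxy logs.
# KNOWN_AI_DOMAINS, SANCTIONED, and the log format are hypothetical.

KNOWN_AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "gemini.google.com": "Gemini",
}
SANCTIONED = {"ChatGPT"}  # apps IT has formally approved

def classify_ai_traffic(log_entries):
    """Bucket proxy log entries into sanctioned vs. shadow AI usage."""
    report = {"sanctioned": [], "shadow": []}
    for entry in log_entries:
        app = KNOWN_AI_DOMAINS.get(entry["domain"])
        if app is None:
            continue  # not a known AI app, ignore
        bucket = "sanctioned" if app in SANCTIONED else "shadow"
        report[bucket].append((entry["user"], app))
    return report

logs = [
    {"user": "alice", "domain": "chat.openai.com"},
    {"user": "bob", "domain": "claude.ai"},
    {"user": "carol", "domain": "intranet.example.com"},
]
print(classify_ai_traffic(logs))
```

In practice this inventory step is what turns invisible usage into something governable: once traffic is bucketed, access policies can be applied per app rather than per user.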
The AI Shield package is no longer being framed as a niche add-on for experimental AI initiatives. Instead, it's being positioned as a control layer for AI compliance and broader enterprise AI security.
This is already showing up in customer deals. Zscaler said a Fortune 500 semiconductor manufacturer signed an eight-figure new logo deal that included Zscaler AI Shield and data protection products. Their purpose? To block unsanctioned AI applications, prevent data leakage into public large language models, and provide visibility into prompts.
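The prompt-inspection idea behind blocking data leakage can be sketched simply: scan each prompt for sensitive patterns before it is forwarded to a public LLM. The patterns below are toy examples, not Zscaler's detection logic.

```python
import re

# Illustrative prompt-inspection sketch: block prompts that appear to
# contain sensitive data before they reach a public LLM.
# SENSITIVE_PATTERNS is a hypothetical, deliberately tiny rule set.

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # US SSN-like number
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # payment-card-like digit run
    re.compile(r"(?i)\bconfidential\b"),    # labeled internal documents
]

def allow_prompt(prompt: str) -> bool:
    """Return True if the prompt matches no sensitive pattern."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(allow_prompt("Summarize this public blog post"))        # True
print(allow_prompt("Review: CONFIDENTIAL merger terms ..."))  # False
```

Real DLP engines use far richer classifiers, but the control point is the same: inspection sits between the employee and the AI application.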
One of the most telling details from the quarter came from an entertainment customer. According to Chaudhry, a major entertainment company activated Zscaler's policy enforcement for AI traffic and discovered that 4 million AI prompts per week were now being secured. That kind of volume suggests companies may be much further into shadow AI usage than leadership teams realize.
Enterprise AI Security Gets Harder as AI Agents Enter the Workflow
Zscaler is also trying to widen the conversation beyond employees using AI tools manually. The company says the next challenge for enterprise AI security will come from AI agents operating autonomously across workflows, applications, and data environments. Chaudhry explained:
"AI agents shift the threat landscape and operate autonomously at speeds far exceeding humans, exponentially increasing agentic traffic while compressing the time to prevent, detect, and respond to threats."
That warning matters in the employee experience space because AI is increasingly being embedded into collaboration and workflow automation. Once AI agents begin acting across business systems at scale, shadow AI compliance becomes harder to manage. The challenge is no longer just what employees type into AI tools, but what connected AI systems can access, share, and trigger on their own.
Keep up to date on the latest UC security trends by following UC Today on LinkedIn.
Compliance Pressure Is Giving Zscaler Another Opening
The compliance dimension adds even more weight to Zscaler's argument. In its recent expansion of global compliance capabilities, the company emphasized the need for stronger local controls. Misha Kuperman, Chief Reliability Officer at Zscaler, said in the announcement:
"Effective data sovereignty requires customers to have verified authority over their data residency, telemetry, and control data plane data."
For enterprises dealing with shadow AI, this raises a critical concern. It isn't just about seeing how employees use AI, but ensuring that any data shared with these tools doesn't violate regional compliance requirements or data residency rules.
What This Signals for IT and Security Leaders
The bigger takeaway from Zscaler's quarter is that shadow AI compliance is no longer a side issue caused by a few curious employees testing new tools. It's becoming a mainstream enterprise governance problem, driven by widespread workplace adoption and the rapid growth of AI-powered workflows.
That's where Zscaler AI Shield is trying to land its message. The company is betting that customers will increasingly need a dedicated policy and visibility layer between employees, AI applications, and sensitive corporate data. If that thesis holds, enterprise AI security will become one of the most important budget conversations in the market over the next year.
For many enterprises, the uncomfortable reality is simple: AI adoption is speeding ahead, leaving governance by the wayside.
Want to enhance your enterprise security? Check out UC Today's Guide to Security & Compliance to kickstart your adoption journey and find all the guidance you'll need.
FAQs
What is Zscaler AI Shield?
Zscaler AI Shield is Zscaler's platform for discovering AI usage, managing access, and inspecting prompts. It also helps prevent sensitive data leakage across AI applications.
What does shadow AI compliance mean?
Shadow AI compliance refers to the challenge of governing employee use of AI tools, particularly in cases where usage is not approved, monitored, or covered by existing compliance controls.
Why is enterprise AI security becoming more urgent?
Enterprise AI security is becoming more urgent because employees are using more AI tools, sharing sensitive data with them, and beginning to interact with AI agents that can operate autonomously at scale.