Shadow IT
March 4, 2025

The Rise of Shadow AI: Understanding the Risks and Mitigating the Threats

Samuel Bismut
CTO and co-founder

A new and dangerous phenomenon known as Shadow AI is emerging in companies across the globe, and it poses a significant challenge for security leaders and CISOs. Unlike traditional Shadow IT, which involves unauthorized software or hardware, Shadow AI refers to the creation and use of AI applications without the oversight or approval of IT and security departments. This trend is driven by well-intentioned employees seeking to enhance productivity and efficiency, but it introduces substantial risks that organizations must address proactively.

What is Shadow AI and Why is it Growing?

A recent Software AG survey found that 75% of knowledge workers already use AI tools, and 46% say they would not give them up even if their employer prohibited them. The majority of shadow AI apps rely on OpenAI’s ChatGPT and Google Gemini, but newcomers such as DeepSeek have swept into organizations almost overnight.

Another survey found that more than half (55%) of employees worldwide regularly use unapproved AI tools at work. Moreover, 73.8% of ChatGPT accounts are non-corporate accounts, which lack the security and privacy controls of enterprise implementations. For Gemini, the figure is even higher: 94.4%.

Shadow AI encompasses a wide array of AI applications and tools developed by employees to automate tasks, streamline processes, and leverage generative AI for various business functions. These apps often train on proprietary company data, raising concerns about data breaches, compliance violations, and reputational damage. The allure of Shadow AI lies in its ability to accelerate workflows and meet tight deadlines, making it an attractive option for employees grappling with increasing workloads and time constraints.

The proliferation of Shadow AI is alarming. Reports indicate that thousands of new AI apps are created daily, many of which default to training on any data fed into them. This means that sensitive corporate information could inadvertently become part of public domain models, posing significant risks to organizations.

The Dangers of Shadow AI

The risks associated with Shadow AI are multifaceted:

1. Data Breaches: Unauthorized AI apps can expose proprietary data to potential leaks, as they often lack the necessary security controls.
2. Compliance Violations: The use of Shadow AI can lead to violations of regulatory requirements, particularly in industries with stringent data protection laws.
3. Reputational & Financial Damage: Data mishandling and breaches can severely impact an organization's reputation, eroding trust among customers and stakeholders.

Addressing the Shadow AI Challenge

To mitigate the risks posed by Shadow AI, organizations must adopt a holistic approach that balances innovation with security. Here are some key strategies:

1. Centralized AI Governance: Create an Office of Responsible AI to oversee policy-making, vendor reviews, and risk assessments. This centralized governance ensures that AI tools are vetted and compliant with security standards.
2. Employee Training: Educate employees on the risks of Shadow AI and provide them with secure, sanctioned AI tools. Training should emphasize the importance of data protection and responsible AI use, and it should point employees toward approved alternatives, since Shadow AI usually stems not from bad intentions but from an absence of sanctioned options.
3. AI-Aware Security Controls: Implement security measures designed to detect and mitigate text-based exploits and other AI-specific threats.
4. Continuous Monitoring: Regularly monitor software usage and data flows to identify and address unauthorized AI apps promptly.
5. Balanced Policies: Avoid blanket bans on AI tools, as this can drive usage underground. Instead, provide employees with secure AI options and clear guidelines for their use.
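As a concrete illustration of the continuous-monitoring step above, the sketch below scans proxy-log entries for known generative-AI domains and flags users of unsanctioned services. The log format, domain list, and sanctioned set are illustrative assumptions, not an exhaustive implementation; a production deployment would draw on a maintained catalog of AI services and your organization's actual approval list.

```python
# Minimal sketch of continuous monitoring for shadow AI usage.
# Assumptions (hypothetical): proxy logs arrive as "user domain" lines,
# and the domain lists below are illustrative, not exhaustive.

AI_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "chat.deepseek.com",
}
# Hypothetical: the organization has licensed Gemini for corporate use.
SANCTIONED = {"gemini.google.com"}


def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs that hit unsanctioned AI services."""
    findings = []
    for line in log_lines:
        user, _, domain = line.strip().partition(" ")
        if domain in AI_DOMAINS and domain not in SANCTIONED:
            findings.append((user, domain))
    return findings


logs = [
    "alice chat.openai.com",
    "bob gemini.google.com",
    "carol chat.deepseek.com",
]
print(flag_shadow_ai(logs))  # alice and carol are flagged; bob is not
```

In practice this kind of check would run continuously against DNS or proxy telemetry and feed an alerting or review workflow, rather than a one-off script.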

Embracing AI Securely with Automation

The rise of Shadow AI presents a complex challenge, but it also offers an opportunity for organizations to harness the power of AI securely. By implementing centralized governance, proactive monitoring, and robust employee training, companies can unlock the benefits of generative AI while safeguarding their data and compliance.

In conclusion, Shadow AI is a growing concern that requires immediate attention. Organizations must adopt a proactive approach to manage this phenomenon effectively. By doing so, they can foster innovation while maintaining the security and integrity of their data. At Corma, we understand the gravity of the Shadow AI problem and offer a tailored solution to this challenge. Our expertise lies in automatically identifying unauthorized apps before they become problematic. By partnering with us, you can ensure that your valuable data and infrastructure are protected, paving the way for a secure and cohesive technological future.
