Is shadow AI undermining your compliance? Here’s what you need to know:
Rapid AI adoption could lead to significant compliance and security challenges.


Reading Time: 6 minutes

Join us at the SHI & Stratascale Summit on June 17-18 in Somerset, NJ, where top cybersecurity leaders and industry experts will converge to transform cybersecurity into a business accelerator. This exclusive two-day event will equip you with practical security strategies and tactics to address AI, increase resiliency, and make cybersecurity your competitive advantage. Don’t miss this opportunity to enhance your cybersecurity posture and propel your business forward. Secure your seat through the registration link below.

The surge of artificial intelligence (AI) in the workplace is gaining momentum, and organizations must catch up.

Three out of four global knowledge workers already use AI at work today, according to the 2024 Work Trend Index Annual Report from Microsoft and LinkedIn. Interestingly, 78% of AI users bring their own AI tools to work (BYOAI), and this is even more commonplace in small- and medium-sized companies (80%).

“While [79% of] leaders agree AI is a necessity, the pressure to show immediate ROI is making leaders move slowly,” found the report, with 60% of leaders concerned they lack a plan to implement AI.

As employees embrace AI at a quickening rate, and organizations experiment, adopt, and transform to move forward, a new challenge has emerged: shadow AI.

Shadow AI is the unsanctioned or unmonitored use of generative AI tools by employees or departments outside of official IT governance. While often motivated by innovation and efficiency, shadow AI introduces significant compliance, security, and reputational risks.

We’ll shed light on the evolving regulatory landscape, the complications shadow AI introduces, and — most importantly — the practical steps you can take today to regain control.

The regulatory landscape: What’s changing in 2025

Governments and regulatory bodies worldwide are rapidly updating laws to address the unique risks posed by AI, focusing on transparency, accountability, and bias mitigation.

As AI changes the technology landscape, regulators and business leaders alike are enforcing practices to ensure responsible AI development and deployment. Many regulations now require audit trails for AI-driven decisions, commonly documented via explainability reports, which make AI models more understandable to humans by illuminating how a system reaches its conclusions.

The approach varies based on the industry, standard, and region of the world. Key regulations include:

  • General Data Protection Regulation (EU): While GDPR contains no AI-specific articles, its transparency, explainability, and data minimization requirements apply directly to AI systems that process personal data.
  • Health Insurance Portability and Accountability Act (U.S.): HIPAA reinforces the need for privacy and security in AI-driven healthcare applications, including audit trails and explainability for clinical decisions.
  • Sarbanes-Oxley Act (U.S.): SOX requires that AI models used for financial reporting ensure accuracy, transparency, and traceability of model outputs.
  • EU AI Act: The regulation categorizes AI systems by risk level, imposing stricter requirements on high-risk applications.
  • U.S. state-level laws: In 2025, 48 states and Puerto Rico introduced AI-related legislation, while 26 states adopted or enacted more than 75 new measures, according to the National Conference of State Legislatures.

The takeaway? Regulatory scrutiny is intensifying. Whether you’re in finance, healthcare, or retail, AI governance is no longer optional — it’s a board-level concern.

How shadow AI complicates compliance

Shadow AI can too easily circumvent regulatory requirements, bypassing the required controls and oversight needed to mitigate risks.

Shadow AI leaves organizations facing a compliance conundrum: unofficial AI tools often lack explainability reports or audit trails, and their security vulnerabilities can lead to data loss and manipulation. Consider open-source model poisoning on platforms like Hugging Face, where malicious actors embed harmful code into pre-trained models.
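One common mitigation for model poisoning is to verify downloaded artifacts against a vetted checksum allowlist before loading them. The sketch below is illustrative, not a complete defense: the allowlist contents and model names are placeholder assumptions, and in practice your security team would maintain the vetted digests.

```python
import hashlib

# Sketch: verify a downloaded model artifact against a checksum allowlist
# before loading it. The allowlist below is a placeholder assumption; in
# practice it would hold SHA-256 digests your security team has vetted.

APPROVED_SHA256 = {
    # "model-name": "vetted sha256 hex digest",
}

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a model file's bytes."""
    return hashlib.sha256(data).hexdigest()

def is_approved(name: str, data: bytes) -> bool:
    """Only load artifacts whose digest matches the vetted allowlist."""
    return APPROVED_SHA256.get(name) == sha256_of(data)
```

Checksum verification catches tampered files, but it does not replace sandboxed loading or code review of model repositories.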

Employees may use unauthorized tools to speed up work or gain a competitive edge. Beware: both of these scenarios can lead to detrimental consequences for organizations, such as unintentionally exposing sensitive data.

Policy enforcement, together with IT and end-user awareness, is critical in any regulated organization to help avoid the fines, business disruption, and reputational damage that regulatory violations incur.

Strategies for combating shadow AI

How is AI applicable to your organization? How is your organization using it today, and what capabilities does that known usage include?

To address the challenges of shadow AI, learn everything you can about your AI usage and where else it may be used within the organization, unbeknownst to you, your team, or your current monitoring capabilities. Identify departments or teams likely to adopt AI independently, and take inventory of all known and unknown AI tools in use.
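One lightweight way to start that inventory is to flag traffic to known generative AI endpoints in your web proxy or DNS logs. This is a minimal sketch under stated assumptions: the domain list is illustrative and far from exhaustive, and the log format (whitespace-separated timestamp, user, domain) is a placeholder you would adapt to your own tooling.

```python
# Sketch: flag requests to known generative AI domains in proxy/DNS logs.
# The domain list and log line format are illustrative assumptions; adapt
# both to your environment, and leave allow/deny decisions to your policy.

AI_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "claude.ai",
    "gemini.google.com",
    "huggingface.co",
}

def flag_shadow_ai(log_lines):
    """Return (user, domain) pairs for traffic to known AI services.

    Assumes whitespace-separated lines: <timestamp> <user> <domain> ...
    """
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 3 and parts[2] in AI_DOMAINS:
            hits.append((parts[1], parts[2]))
    return hits
```

Running this over a day of logs and grouping the results by department gives you a first map of which teams are adopting AI independently.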

Assess that usage by addressing the most common risks first, such as data exposure and access control, then take a deeper dive into the AI practices you’ve discovered using threat modeling techniques tailored to each use case.

Develop your AI roadmap

To establish an effective AI governance framework, it is essential to first organize roles and responsibilities, even if they are only loosely defined at the beginning. Develop an operating model to determine how AI will interact with data, applications, and other components within your organization. Focus on:

  1. Policy development
  • Define acceptable AI tools and usage.
  • Establish clear guidelines for procurement, deployment, and monitoring.

Sample policy language: “Employees must only use AI tools approved by IT. All AI-generated outputs used in decision-making must be documented and reviewed.”

  2. Audits and control practices
  • Create an audit plan that aligns with compliance requirements, ensuring all risks are identified and documented.
  • Conduct regular audits of AI usage.
  • Implement access controls and logging for all AI systems.

Tool tip: Consider using Microsoft Purview or OneTrust for AI data governance and auditing.
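At the application level, logging for AI systems can be as simple as wrapping each sanctioned AI call so that who used which tool, and when, is recorded. The sketch below is a minimal illustration: the field names, the in-memory log store, and the placeholder summarizer are all assumptions, and production logging would write to tamper-evident, access-controlled storage.

```python
import datetime

# Sketch: a minimal audit-trail wrapper for sanctioned AI tool calls.
# The in-memory list and field names are illustrative assumptions; real
# deployments would ship records to protected, append-only log storage.

AUDIT_LOG = []

def audited(tool_name):
    """Decorator that records who called which AI tool, and when."""
    def wrap(fn):
        def inner(user, *args, **kwargs):
            AUDIT_LOG.append({
                "timestamp": datetime.datetime.utcnow().isoformat(),
                "user": user,
                "tool": tool_name,
            })
            return fn(user, *args, **kwargs)
        return inner
    return wrap

@audited("summarizer")
def summarize(user, text):
    # Placeholder for a real AI service call.
    return text[:50]
```

An audit trail like this is what lets you answer a regulator’s question about which AI-generated outputs fed a given decision.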

  3. Threat modeling
  • Tailor threat models to different AI types, such as chatbots, agentic AI, and homegrown large language models (LLMs).
  • Identify potential misuse scenarios and mitigation strategies.

Framework tip: Use the NIST AI Risk Management Framework to guide your modeling.

  4. Monitoring and testing
  • Continuously monitor AI activity.
  • Perform regular security and accuracy testing.

Tool tip: Use endpoint detection tools like CrowdStrike or SentinelOne to monitor AI tool behavior.

  5. Training and awareness
  • Provide role-specific training based on AI literacy and exposure. For instance, IT staff will have different training needs compared to HR personnel.
  • Foster a culture of responsible AI use.

Quick win: Launch a 15-minute AI safety e-learning module for all employees.

Navigate your AI journey with SHI

To reduce shadow AI risks and transform challenges into capabilities, our experts can guide you through each step of your AI journey.

SHI offers a comprehensive and structured approach to help organizations navigate generative AI, from ideation to full-scale deployment. We can assist with strategic guidance and readiness, experimentation and validation, deployment, adoption, and technical infrastructure and expertise.

With our AI advisory services, our team provides vendor-neutral, expert guidance to help you select, integrate, and optimize generative AI platforms. Our readiness assessment evaluates your current systems and processes, identifying potential gaps and delivering actionable solutions to put you on a clear path to effective AI adoption.

You can also explore SHI’s AI & Cyber Labs: a cutting-edge environment where you can test AI solutions using your own data and workloads across leading platforms. With rapid prototyping, proofs of concept are developed in 2–6 weeks, reducing risks and accelerating your time to value.

Ready to take control of shadow AI? Connect with our AI experts to assess your current exposure, build a governance roadmap, and unlock the full potential of AI — safely and strategically.

Speak with an SHI expert