An extension of shadow IT, shadow AI involves employees using non-approved AI technology. Security teams can reduce shadow AI exposure by building a clear AI governance framework.

Is Shadow AI Worse Than Shadow IT?


A quiet office can look harmless. Monitors glow, headphones muffle conversations, and the buzz of work carries on with no sign that anything sinister lies underneath. Yet unsanctioned technologies keep creeping in: a personal cloud folder here, an unapproved AI chatbot there. Sooner or later, the organization has to manage all of these unanticipated risks. Shadow IT was only the first wave of hidden threats. Shadow AI has upped the ante.

What Shadow AI Is and Why It’s Growing

An extension of shadow IT, shadow AI involves employees using non-approved AI tools. Shadow IT typically refers to consumer technology, such as file-sharing apps or personal devices. Shadow AI usually involves fast-moving, data-hungry systems whose behavior can be erratic.

According to Gartner research, 80% of organizations experience gaps in data governance. These gaps make it easier to miss AI-driven activity, and many teams already fall short in cybersecurity readiness assessments. The risk is amplified when employees adopt new tools faster than their teams can review them. Since 30% of data breaches originate with vendors or suppliers, knowing which tools a team uses is a critical part of securing a company's digital assets.

Shadow AI has gained traction because employees see AI tools as a faster way to create content, summarize complex information, and troubleshoot technical issues. It reduces friction in daily work, but it introduces risks shadow IT never posed, including data exposure, compliance failures, and model-level threats.

Shadow AI Versus Shadow IT

Shadow IT has long been blamed for unknown vulnerabilities, and a large share of earlier breaches traced back to unsanctioned SaaS tools or personal storage. AI tools change the equation entirely. The scale and speed at which they operate, along with their opacity, create risks that are harder to detect and contain.

With 78% of organizations now using AI in production, some breaches already stem from unmanaged AI exposure. The broader shadow IT picture still matters, but AI adds a new dimension that widens the attack surface.

Key Differences Between Shadow AI and Shadow IT

Shadow AI is similar to shadow IT in that both stem from an employee's desire to be more productive, but they differ in where the risk resides.

  • Shadow IT tools have fixed logic, which makes behavior predictable. Forecasting the behavior of shadow AI tools is more complex because models can be continuously modified and retrained.
  • Shadow IT risks include data being stored or moved without authorization. Shadow AI risks include model inversion, data poisoning, and unsanctioned training on company data.
  • Shadow IT tools are deterministic, while AI tools may hallucinate, generalize poorly, and produce incorrect outputs with unwarranted confidence.

Shadow AI also emerges against a backdrop of new regulations, such as the EU Artificial Intelligence Act, which is likely to increase regulatory scrutiny.

Security Risks That Make Shadow AI More Urgent

Shadow AI can cause problems across engineering, marketing, and finance. As decisions increasingly rest on AI outputs, proprietary data can leak and internal business processes can be manipulated without anyone noticing.


  • Model manipulation: Attackers can craft data that skews outcomes.
  • Prompt injection exposure: A crafted prompt can be used to extract private information from a model.
  • Data lineage gaps: AI tools may generate and store data in ways security teams can't track.
  • Compliance drift: AI tools evolve quickly, and governance plans that lag behind can become irrelevant.

The concern grows with generative AI. A chatbot answering a vendor's question or a generative AI summary may seem harmless, but it can reveal sensitive usage data or valuable proprietary intellectual property. Carnegie Mellon University researchers found that large language models are far more vulnerable to adversarial prompts than rule-based systems, and the problem compounds when employees use these tools without supervision.
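For teams that want a concrete starting point, a lightweight screen on outbound prompts can catch the most obvious leaks before text ever reaches an external model. The sketch below is a minimal illustration in Python; the regular expressions, the pattern names, and the screen_prompt helper are hypothetical placeholders, and a production deployment would rely on a vetted DLP engine rather than hand-rolled rules.

```python
import re

# Hypothetical patterns a security team might screen for before a prompt
# leaves the corporate network; real rules would come from a vetted DLP engine.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact likely-sensitive substrings and report which rules fired."""
    findings = []
    redacted = prompt
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(redacted):
            findings.append(name)
            redacted = pattern.sub(f"[REDACTED-{name.upper()}]", redacted)
    return redacted, findings

if __name__ == "__main__":
    raw = "Summarize this contract for jane.doe@example.com, card 4111 1111 1111 1111."
    safe_prompt, hits = screen_prompt(raw)
    print(safe_prompt)  # sensitive fields replaced with placeholders
    print(hits)         # ['email', 'credit_card'] -> worth logging for the security team
```

Even a crude filter like this gives the security team a signal about which teams are pasting sensitive material into AI tools, which is often the first visibility they get.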

An AI-enabled decision tree can be more biased than a conventional one. Shadow AI often means incomplete or unvetted training data being fed into third-party tools. Structured oversight of AI systems helps ensure the integrity of updates; when teams overlook it, the model's data and behavior drift.

How Security Teams Can Reduce Shadow AI Exposure

Although shadow AI poses numerous risks, organizations can mitigate many of them by combining visibility with policy and technical controls, striking a balance that protects employee productivity without burdening staff with time-consuming check-ins or blocked sites. Security teams benefit from treating shadow AI as a governance issue rather than a disciplinary one. Mitigation strategies will need to evolve as employees keep adopting AI tools to improve productivity.

1. Build a Clear AI Governance Framework

A governance plan should specify which AI tools are approved, what types of data employees can feed them, how to review model outputs before high-stakes decisions, and what to do when unpredictable model behavior occurs. The last element includes who reviews the behavior, who investigates its causes, and what the consequences are.

With this oversight in place, organizations can treat AI like any other enterprise asset, subject to the same traceability, auditability, security, and compliance responsibilities as legacy enterprise systems.
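To make the framework actionable, the governance register can be expressed as data that other systems can query. The following Python sketch is illustrative only; the AIToolPolicy class, the tool names, and the data classifications are assumptions rather than a standard schema.

```python
from dataclasses import dataclass, field

# Minimal sketch of an AI governance register. Tool names, data classes,
# and review rules below are illustrative placeholders, not a standard schema.

@dataclass
class AIToolPolicy:
    name: str
    approved: bool
    allowed_data_classes: set[str] = field(default_factory=set)  # e.g. {"public", "internal"}
    requires_human_review: bool = True  # gate high-stakes outputs by default

REGISTER = {
    "vendor-chatbot": AIToolPolicy("vendor-chatbot", approved=False),
    "internal-copilot": AIToolPolicy(
        "internal-copilot",
        approved=True,
        allowed_data_classes={"public", "internal"},
    ),
}

def check_usage(tool: str, data_class: str) -> str:
    """Return the action a gateway or reviewer should take for this request."""
    policy = REGISTER.get(tool)
    if policy is None or not policy.approved:
        return "block"   # unknown or unapproved tool: treat as shadow AI
    if data_class not in policy.allowed_data_classes:
        return "block"   # approved tool, but this data classification is not permitted
    return "review" if policy.requires_human_review else "allow"

print(check_usage("vendor-chatbot", "internal"))    # block
print(check_usage("internal-copilot", "internal"))  # review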

2. Provide Approved AI Tools

Teams with access to vetted, centralized AI tools are less likely to turn to unapproved public AI services to get around blockers. As more work is automated, staff will lean on these models even more heavily. Workers already spend around 4.6 hours per week using AI on the job, exceeding the average of 3.6 hours of personal use per week. Unmonitored third-party AI may already be more common than vetted, approved enterprise tools, so companies should move quickly to enforce their policies.

In a managed environment, organizations can monitor usage, set permissions within databases, and enforce data governance across departments. This preserves employee productivity while protecting the business's data integrity and compliance posture.
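One way to back an approved-tools policy with a technical control is an egress allowlist at the proxy or gateway layer: requests to the sanctioned internal gateway pass through, while requests to known public AI endpoints are blocked and reported. The host names and the route_request helper below are hypothetical examples, not a recommendation of specific providers.

```python
from urllib.parse import urlparse

# Illustrative allowlist; the host names below are placeholders, not real services.
APPROVED_AI_HOSTS = {"ai-gateway.internal.example.com"}
KNOWN_PUBLIC_AI_HOSTS = {"chat.example-ai.com", "api.example-llm.com"}

def route_request(url: str) -> str:
    """Decide what an egress proxy should do with an outbound request."""
    host = urlparse(url).hostname or ""
    if host in APPROVED_AI_HOSTS:
        return "allow"            # managed tool: usage is logged and governed centrally
    if host in KNOWN_PUBLIC_AI_HOSTS:
        return "block-and-alert"  # likely shadow AI: block and notify the security team
    return "allow"                # unrelated traffic passes through untouched

print(route_request("https://ai-gateway.internal.example.com/v1/chat"))  # allow
print(route_request("https://api.example-llm.com/v1/chat"))              # block-and-alert
```

Routing everything through a single approved gateway also makes the next step, monitoring, far easier because all AI traffic shows up in one place.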

3. Monitor Data Movement and Model Usage

Visibility tools that flag abnormal behavior, such as sudden increases in AI usage, uploads of data to unusual endpoints, or bursts of queries containing sensitive data, can help security teams spot misuse and data leaks. Reports indicate that over the past year as many as 60% of employees used unapproved AI tools, and 93% admitted to entering company data without authorization.

Detecting these patterns early gives teams a chance to remediate, re-educate users, reconfigure permissions, or shut the activity down before it leads to data leakage or a compliance breach.
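As a rough illustration of what such monitoring might look like, the sketch below groups proxy-log entries for AI endpoints by user and hour and flags counts above a baseline. The log format, the flag_ai_usage_spikes helper, and the threshold are assumptions for the example; real deployments would lean on existing SIEM or CASB tooling.

```python
from collections import Counter
from datetime import datetime

# Rough sketch: count AI-endpoint requests per user per hour from proxy logs
# and flag anything above a baseline. Format and threshold are assumptions.
BASELINE_REQUESTS_PER_HOUR = 20  # tune against historical per-user traffic

def flag_ai_usage_spikes(log_entries):
    """log_entries: iterable of (timestamp, user, host) tuples for AI endpoints."""
    counts = Counter()
    for ts, user, host in log_entries:
        hour_bucket = ts.replace(minute=0, second=0, microsecond=0)
        counts[(user, hour_bucket)] += 1
    return [
        {"user": user, "hour": bucket, "requests": n}
        for (user, bucket), n in counts.items()
        if n > BASELINE_REQUESTS_PER_HOUR
    ]

# Example: one analyst makes 45 requests to a public LLM API inside a single hour.
logs = [
    (datetime(2025, 6, 2, 9, minute), "analyst7", "api.example-llm.com")
    for minute in range(45)
]
for alert in flag_ai_usage_spikes(logs):
    print(alert)  # {'user': 'analyst7', ..., 'requests': 45} -> review for shadow AI use
```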

4. Train Employees on AI-Specific Risks

General cybersecurity training is not enough. AI can misinterpret the intent behind a prompt and hallucinate, generating authoritative-sounding content that is false or biased. Workers must also understand that using AI differs from using conventional software or services: secure use requires new mental models, an understanding of prompt risks, and care in handling personal data.

Users with basic AI literacy will fact-check outputs and be less likely to over-share personal data. They will treat the tools as valuable copilots that still require human supervision.

Protecting Organizations Against Shadow AI

Shadow AI is growing faster and is harder to identify than shadow IT. Although the scale and complexity of the risks differ, enlisting employees' help makes both easier to identify, and governance policies can help companies strike the right balance. Security teams should reassess their exposure, stay alert for emerging threats, and act promptly before unseen AI tools start making pivotal decisions in business applications.
