Shadow AI: How to deal with unauthorized models and unregulated agents

Shadow AI is considered the next iteration of Shadow IT. The main difference is that in Shadow IT, a developer might use a standalone, unsanctioned tool to get work done, but the tool itself posed little risk. With Shadow AI, the tool is the risk.

Shadow AI is particularly problematic because an unauthorized model can gain access to databases it shouldn’t touch, while lacking the organizational context to make the right decisions. Moreover, Shadow AI almost always involves someone in an organization taking the company’s intellectual property and pasting it into a public tool, where its destination and subsequent processing are unknown.

Part of the problem, according to Brian Nathanson, head of product management for Clarity at Broadcom, is the organization’s approach to governance and security, because AI is evolving rapidly and changing constantly. Engineers feel that governance is a burden that keeps them from getting their work done, and that their organization’s management is too slow to bring new models on board. “Individuals see the productivity benefit of AI beyond what the business sees, at least right now, but businesses, out of concern for liability and protecting their IP, try to slow things down,” Nathanson said.

That puts engineers in a bind, Nathanson said, because if the company only approves, say, Gemini, and the engineer knows that Claude might give better answers for a certain task, the developer thinks, “I’ll just copy and paste it into my private, personal Claude account and use it, because I can’t wait for the approval process.”

Ted Way, vice president and chief product officer at SAP, said employees “just want to get things done” and will often ask for forgiveness later. But that creates a risk of leaking sensitive data, “and it’s not only leaked, but it’s also stored and processed outside of your company. It can be used to train a model. Then you have a compliance risk,” he said. And the shortcut may not even pay off, because you might not get the exact results you want.

What organizations can do

Getting Shadow AI under control involves organizational governance, policy, and culture.

Some companies, instead of limiting AI, have created orchestration layers that allow developers to use many different open-source and proprietary models in a controlled way. This reduces the need for developers to go outside company policy to do their work in the model of their choice, and thus reduces the risk of proprietary company data and conversations being exposed to the public.
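
To make the idea concrete, here is a minimal Python sketch of what such an orchestration layer might look like. It is illustrative only: the approved-model names, the credential filter, and the audit log are assumptions for this example, not details of any vendor’s product.

    # Minimal, illustrative orchestration layer: route developer prompts to
    # approved models only, block obvious credentials, and log usage.
    # Model names and rules below are hypothetical.
    import re
    from datetime import datetime, timezone

    APPROVED_MODELS = {"gemini-pro", "claude-sonnet", "llama-3-70b"}  # hypothetical allowlist
    SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]\s*\S+", re.IGNORECASE)

    def call_model(model: str, prompt: str) -> str:
        # Stub standing in for the real provider SDK call.
        return f"[{model}] response to: {prompt[:40]}"

    def audit_log(model: str, prompt: str) -> None:
        # Keep usage visible to the organization: who used which model, and when.
        print(f"{datetime.now(timezone.utc).isoformat()} model={model} prompt_chars={len(prompt)}")

    def route_request(model: str, prompt: str) -> str:
        # Governance gate: only approved models, no obvious secrets in prompts.
        if model not in APPROVED_MODELS:
            raise PermissionError(f"{model} is not on the approved model list")
        if SECRET_PATTERN.search(prompt):
            raise ValueError("prompt appears to contain credentials; request blocked")
        audit_log(model, prompt)
        return call_model(model, prompt)

    if __name__ == "__main__":
        print(route_request("claude-sonnet", "Summarize this design doc."))

Because every request flows through one governed gateway, developers can pick the model that suits the task while the company keeps visibility and control.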

From a policy perspective, Way said it starts with a clear vision for the company’s AI policy. He explained that the technology forces a trade-off, in that organizations can achieve only two of three desired outcomes: safe, capable, and autonomous. (A toy sketch after the list below makes the trade-off concrete.)

  • Safe and capable: This combination requires extensive human babysitting and is too slow, since a person has to stay in the loop for everything the system does.
  • Capable and autonomous: This is the opposite extreme, with no guardrails, where the LLM itself decides what is safe. Way cited the example of an LLM that decided to fabricate database responses in order to score better on an evaluation.
  • Safe and autonomous: This mode is so locked down that the system lacks access to the tools it needs to do the job.
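
Here is a toy Python policy sketch of that trade-off. The mode names, the tool list, and the handling rules are illustrative assumptions, not anything Way described.

    # Toy policy engine illustrating the "pick two of three" trade-off.
    # Tool names and mode labels are hypothetical.
    RISKY_TOOLS = {"run_sql", "deploy", "delete_records"}

    def handle(tool: str, mode: str) -> str:
        """Decide how a tool call is treated under each two-of-three mode."""
        if tool not in RISKY_TOOLS:
            return "run"                    # harmless tools always run
        if mode == "safe+capable":
            return "ask_human"              # every risky call waits for review: slow
        if mode == "capable+autonomous":
            return "run"                    # no gate; the model polices itself
        if mode == "safe+autonomous":
            return "deny"                   # risky tools simply are not wired up
        raise ValueError(f"unknown mode: {mode}")

    for mode in ("safe+capable", "capable+autonomous", "safe+autonomous"):
        print(mode, "->", handle("run_sql", mode))

Each mode trades away the third property: human review buys safety at the cost of autonomy, removing the gate buys autonomy at the cost of safety, and denying tool access buys safety at the cost of capability.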

Addressing Shadow AI requires moving beyond dysfunctional governance models. Michael Burch, director of application security at Security Journey, suggests that while an AI team or governance committee should exist, governance is not just “a 10-page policy report that no one is going to read.” Instead, it should be about “effective day-to-day governance, taking that 10-page report and making it work for individuals.”

Governance, he said, “is not just about publishing policy and writing all the rules and buying the right tools. It’s asking: all the work we do, is it working? Has it really had an impact? And have we given it to people in a way that allows them to apply it every day and improve the way they think about and manage security?” Any governance effort must be “grounded in the reality of daily workflows,” he said, to ensure that people will actually accept it. The ultimate goal is an effective system that drives adoption and holds people accountable for how they use AI. Burch noted that governance fails when policies alone are relied upon to produce good decisions.

A key step in this approach is creating a culture of safety. That means teams with a shared vocabulary, guidance on workflows, and examples to work from. If everyone understands how AI fits into the workflow and speaks the same language, the chances of failure drop sharply.

“If we all speak the same language, if we all understand how AI fits into our different workflows, and we have examples to work from to understand that…”
