
The pitfall of outsourcing AI: Is your business doomed – or is there a way out?

Once, when ChatGPT was down for a few hours, a member of our software team asked the team leader, “How urgent is this project? ChatGPT isn’t working – maybe I’ll do it tomorrow?” You can probably imagine the team leader’s reaction. To put it mildly, he was not happy.

Today, according to a Stanford HAI report, one in eight companies uses AI services. Productivity has increased – but so have the risks. If AI tools are used without clear guidance, employees may inadvertently feed neural networks not only routine work but also confidential data. The 2023 Samsung case, in which the company found that developers had uploaded sensitive code to ChatGPT, is one of many examples.

So how do you strike the right balance between using AI for productivity and protecting your company’s security?

AI in business is no longer an “experimental project”

Today, developers use AI for much more than coding: it replaces individual stages of CI/CD pipelines, prepares deployments, runs tests – the list goes on.

For businesses, AI translates technical data into plain-language information. For example, in our industrial equipment monitoring system, we have an AI agent that processes operational data from IIoT sensors on the machines. It explains the state of the equipment, highlights the risk of failure, suggests possible solutions, and can answer customer questions.
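As an illustration only – the sensor names, metrics, thresholds, and prompt wording below are invented, not our actual system – the first step of such an agent can be sketched as a rule-based pass that flags over-limit readings and turns them into a plain-language prompt for the model:

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    sensor_id: str
    metric: str       # e.g. "vibration_mm_s", "bearing_temp_c"
    value: float
    threshold: float  # alert threshold for this metric

def summarize_readings(readings):
    """Flag over-limit readings and build a plain-language prompt for the LLM."""
    alerts = [r for r in readings if r.value > r.threshold]
    if not alerts:
        return alerts, "All monitored readings are within normal limits."
    lines = "\n".join(
        f"- {r.sensor_id} {r.metric}: {r.value} (limit {r.threshold})" for r in alerts
    )
    prompt = (
        "Explain the equipment state for an operator, highlight the failure "
        "risk, and suggest possible next steps.\nReadings above limit:\n" + lines
    )
    return alerts, prompt
```

Keeping this filtering step deterministic means the model only ever sees readings that actually crossed a limit, which narrows what it can misinterpret.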

The push for AI is accelerating. According to Menlo Ventures, companies will spend $37 billion on AI technology in 2025 – three times more than in 2024. AI is becoming an integral part of tech ecosystems. Gartner predicts that soon more than 80% of enterprise GenAI programs will be deployed within existing organizational data management platforms rather than as standalone experimental projects.

In this scenario, AI will not only affect human productivity but also the continuity of all business processes.

Where the dangers lie

When we started using LLMs to analyze mechanical data, it quickly became clear that the models often erred on the side of over-reporting – flagging problems where none existed. If we hadn’t trained them to recognize common operating situations, these false alarms would have led to unnecessary recommendations and unnecessary costs for customers.
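One cheap guard against that kind of over-reporting – sketched here with hypothetical `issue` and `confidence` fields, not our production schema – is to suppress model flags that match documented-benign conditions or fall below a confidence threshold before they reach a customer:

```python
def filter_flags(flags, known_benign, min_confidence=0.7):
    """Drop model flags that match known-benign patterns or are too uncertain.

    `flags` is a list of dicts with "issue" and "confidence" keys;
    `known_benign` is a set of issue names documented as normal operation.
    """
    kept = []
    for flag in flags:
        if flag["issue"] in known_benign:
            continue  # documented normal operating condition, not a fault
        if flag["confidence"] < min_confidence:
            continue  # too uncertain to surface as a recommendation
        kept.append(flag)
    return kept
```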

The risk associated with the accuracy of the model can be reduced in advance. But some threats appear only after a great deal of damage has been done.

Take leaks of confidential data through what is known as shadow AI – interacting with AI through personal accounts or browsers. According to LayerX Security, 77% of employees regularly share business data with public AI models. It is not surprising, then, that IBM reports one in five data breaches is linked to shadow AI.

If that number seems excessive, consider the incident in which the acting director of the US Cybersecurity and Infrastructure Security Agency uploaded confidential government contract documents to the public version of ChatGPT. I have personally seen cases where even system passwords end up exposed publicly.

This creates unprecedented opportunities for cyber fraud: a bad actor can ask a neural network what it knows about a certain company’s infrastructure – and if an employee has already uploaded that data, the model will provide answers.
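A partial mitigation is to redact obvious secrets before any text leaves the company. The sketch below uses a few illustrative regex patterns; a real deployment would rely on a dedicated DLP tool with far broader coverage (API keys, client names, internal hostnames, and so on):

```python
import re

# Illustrative patterns only -- not a complete secret-detection list.
PATTERNS = [
    re.compile(r"(?i)password\s*[:=]\s*\S+"),     # "password: ..." assignments
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID shape
    re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),   # IPv4 addresses
]

def redact(text: str) -> str:
    """Mask obviously sensitive tokens before text is sent to an external LLM."""
    for pattern in PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```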

What if people followed the rules?

External threats do not stop there either. In June 2025, researchers discovered the EchoLeak vulnerability in Microsoft 365 Copilot, which enabled zero-click attacks: an attacker could send an email containing hidden instructions, and Copilot would automatically process it and exfiltrate sensitive data – without the recipient ever opening the message.

Along with technical and security risks, there is a less obvious but equally dangerous threat: automation bias – the tendency to rely uncritically on the output of automated systems. We had a case where a client’s technical team, after we presented our proposal, actually asked for a week’s pause to “verify us with ChatGPT”.

So, are we doomed?

Reducing the risk of using external AI tools does not mean abandoning them. There are several procedures that can help:

  • Set up corporate subscriptions and centralize LLM access. This is a basic, straightforward step. In paid enterprise versions of AI services, data is not used to train models. Trust us: a subscription costs far less than a leak of private data.
  • Establish a control policy. A company needs a set of rules defining what can and cannot be sent to a model and which functions may be used. There should also be a designated owner who updates these policies as models and regulatory requirements change. Because models adapt to each user, the absence of unified standards can lead to a loss of control over the quality of the output.
  • Limit the actions of the AI agent. All LLM requests must be handled based on the user’s role, their access rights, and the type of data requested. To control the interaction between models and company systems, MCP servers can be used — an infrastructure layer that enforces access policies and restrictions regardless of the internal logic of the LLM.
  • Monitor where and how data is processed. For some clients, it is important that their data never leaves the EU, due to GDPR compliance, the EU AI Act, or internal security policies. In such cases, there are two options. The first is to work with a provider that can guarantee data processing and storage on European servers. The second is to use managed solutions such as Azure, which let you keep a single cloud environment and restrict the AI service’s access to the company’s internal network only.
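The access-control steps above can be sketched as a simple policy gate that sits between users and the model endpoint. The roles, data classes, and rules here are hypothetical illustrations, not a specific MCP server implementation:

```python
# Hypothetical role-to-data-class policy; a real deployment would load this
# from a centrally managed configuration with a designated owner.
ROLE_POLICY = {
    "engineer": {"code", "logs"},
    "analyst":  {"logs", "metrics"},
    "support":  {"metrics"},
}

def authorize(role: str, data_class: str) -> bool:
    """Return True only if this role may send this class of data to the model."""
    return data_class in ROLE_POLICY.get(role, set())

def gated_request(role: str, data_class: str, prompt: str) -> str:
    """Enforce the policy before any prompt reaches the LLM endpoint."""
    if not authorize(role, data_class):
        raise PermissionError(f"{role} may not send {data_class} to the LLM")
    # ... forward `prompt` to the model endpoint here ...
    return f"forwarded ({data_class})"
```

The point of putting the check in a separate layer is that it holds regardless of what the model itself decides to do with the prompt.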

At this year’s World Economic Forum in Davos, historian and author Yuval Noah Harari said: “A knife is a tool – you can use a knife to cut a salad or kill someone, but it’s your decision what to do with it. And that, I think, introduces a risk that we don’t fully understand.” So the question is not whether to use AI tools, but how to keep people in the loop.
