
Docker Isn’t Just About Containers Anymore

I have worked with Docker for over ten years. I shipped my first production container back when Docker Compose was still called Fig. At the time, people were arguing about whether containers would replace VMs. Time settled that argument: containers won. But something more interesting happened along the way: Docker stopped being just a container company.

If you’ve been heads-down shipping code, you might have missed how much the landscape has changed. Docker now runs local LLMs, orchestrates MCP servers, and spins up microVMs for AI agents. The container runtime that quietly changed the way we deploy software is becoming the infrastructure layer for how we build with AI.

I want to talk about what this actually means for development teams, because most coverage either overhypes these tools or dismisses them entirely.

The pieces on the board

Docker Model Runner lets you pull and run AI models locally behind an OpenAI-compatible API. You run docker model pull the same way you would pull an image, and the model loads into memory at request time. It supports llama.cpp, MLX on Apple Silicon, and Vulkan for GPU acceleration. For teams that want to test local models without sending data to a cloud provider, this is genuinely useful. It does not replace your production inference stack; it gives engineers a way to prototype against real models on their own machines.
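The workflow above can be sketched in a few commands. This is a hedged example: the model name is just one from Docker Hub's ai/ namespace, and the localhost port is the documented default for TCP access, which must be enabled in Docker Desktop first — verify both against your own setup.

```shell
# Pull a model the same way you'd pull an image (model name is an example)
docker model pull ai/smollm2

# One-off prompt straight from the CLI
docker model run ai/smollm2 "Summarize what a Dockerfile does."

# Or hit the OpenAI-compatible endpoint (requires host TCP access enabled;
# port 12434 is the documented default, but check your configuration)
curl http://localhost:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "ai/smollm2",
    "messages": [{"role": "user", "content": "Hello from a local model"}]
  }'
```

Because the API is OpenAI-compatible, any existing client library can point at the local endpoint by swapping the base URL.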

The MCP Gateway is where things get architecturally interesting. The Model Context Protocol has become the standard way AI systems connect to external tools and data. Docker’s gateway runs MCP servers in isolated containers, manages configuration in one place, and handles credential injection and validation. Rather than every developer configuring each AI tool individually, teams can set up the gateway once. For teams using several AI tools across their IDEs and workflows, this solves a real coordination problem.
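A rough sketch of the "configure once" idea, using Docker's MCP CLI. The server names and the secret syntax here are illustrative assumptions drawn from the MCP Toolkit's documented shape — check docker mcp --help for the exact commands in your version.

```shell
# Enable MCP servers from Docker's catalog (server names are examples)
docker mcp server enable duckduckgo
docker mcp server enable fetch

# Store a credential centrally so the gateway can inject it;
# individual AI clients never see the raw secret (exact syntax may differ)
docker mcp secret set github.personal_access_token=ghp_example

# Run the gateway; AI clients connect to this single endpoint
# instead of each client configuring every server on its own
docker mcp gateway run
```

The point is the shape, not the specific servers: one gateway process fronts all of the containerized MCP servers, so adding a tool for the whole team is one command instead of an edit in every client's config file.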

Docker Sandboxes is the piece I find most compelling. When you give an AI coding agent real autonomy, it needs to install packages, run scripts, build containers, and modify files. Giving it that freedom inside an ordinary container means it shares your host’s kernel: one bad decision from the agent, and your machine pays for it. Sandboxes solve this by running each agent in a lightweight microVM with its own kernel, its own Docker daemon, and its own network stack. The agent can do whatever it wants, and your host doesn’t notice. Notably, Docker built its own VMM instead of using Firecracker, because Firecracker targets Linux only and developers work on Mac and Windows as well.
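From the developer's side, the experience is meant to be a single command. The sketch below assumes the early-release docker sandbox plugin; the agent name and invocation are illustrative, so check docker sandbox --help for what your version actually supports.

```shell
# Launch a coding agent inside a microVM-backed sandbox
# (illustrative invocation; the sandbox plugin is in early release)
docker sandbox run claude

# Inside, the agent gets its own kernel, its own Docker daemon, and its
# own network stack, so a bad install or a stray rm -rf stays contained
```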

There is a security detail worth calling out: secrets never enter the sandbox. A host-side proxy intercepts outgoing requests and injects API keys on the way out, so the agent talks to the proxy while the actual secret stays on the host. If someone compromises the sandbox, there is nothing sensitive inside to steal.
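The injection pattern itself is simple to sketch. The Python below is a minimal, hypothetical illustration of the idea — not Docker's actual implementation: the sandbox only ever handles a placeholder, and the host-side hook swaps in the real key just before the request leaves.

```python
# Illustrative sketch of host-side secret injection (not Docker's code).
# The sandbox sees only "${...}" placeholders; real keys live host-side.

HOST_SECRETS = {"OPENAI_API_KEY": "sk-real-key-on-host"}

def inject_secrets(headers: dict) -> dict:
    """Replace ${NAME} placeholders in outgoing headers with real secrets."""
    injected = {}
    for name, value in headers.items():
        for secret_name, secret_value in HOST_SECRETS.items():
            value = value.replace("${" + secret_name + "}", secret_value)
        injected[name] = value
    return injected

# What the agent sends from inside the sandbox:
sandbox_request = {"Authorization": "Bearer ${OPENAI_API_KEY}"}
# What actually goes over the wire, rewritten on the host:
outgoing = inject_secrets(sandbox_request)
```

A compromised sandbox can replay the placeholder, but it can never read the key itself — which is exactly the property the article describes.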

What is the strategy here?

Docker has joined the Linux Foundation’s Agentic AI Foundation as a Gold member alongside Anthropic, Google, Microsoft, and OpenAI. That’s not a casual move. Docker is betting that the infrastructure layer for AI agents will look a lot like the infrastructure layer for applications: isolated environments, common interfaces, centralized management, and virtualization.

This is the same playbook Docker ran with containers a decade ago. Back then, the problem was “it works on my machine.” Docker solved it with a standard packaging format. Now the problem is “my AI agent trashed my machine” or “my agent can’t safely access the tools it needs.” Docker is positioning itself as the neutral platform that solves those problems without competing with the agents themselves.

There is a pattern worth noting: Docker keeps finding ways to become the layer between developers and whatever infrastructure complexity is making their lives difficult. In 2013, that complexity was environment inconsistency. In 2020, it was Kubernetes configuration. In 2026, it’s AI agent isolation and tool orchestration.

What should teams actually do?

If your team is using AI coding agents today, and many teams are whether they’ve sanctioned it officially or not, isolation is the first question to answer. Running agents with full permissions on your local machine was fine while they were autocompleting function names. It isn’t acceptable once they’re automating multi-step workflows.

Separately, the MCP Gateway deserves a serious look from any team using more than two AI-assisted tools. Configuration sprawl is real, and it will only get worse as the ecosystem grows.

For everything else, wait and see. Docker Model Runner is interesting for prototyping, not production. Sandboxes are promising, but they only launched recently and it’s still early days. If your team is investing in agentic workflows, keep an eye on how that feature matures.

The big takeaway is simple: the company that taught us how to ship software with containers is now trying to teach us how to ship software with AI agents. The patterns rhyme. Whether Docker executes this pivot as well as the first one remains to be seen, but the foundation they are building on is technically sound.
