People don’t belong in the loop – they belong at the center

Over the past year, I’ve watched teams roll out AI systems, tools, and agents, and then struggle to trust, adopt, or scale them. I would argue that today’s biggest problem with AI adoption starts with how we plan for change.

“Human-in-the-loop” (often abbreviated HITL) has become a buzzword. Companies and analysts repeat it faithfully to regulators, auditors, and risk teams as a signal of compliance and assurance. The message, in short: “don’t worry, this system isn’t running completely on its own; there is a responsible person who can monitor and intervene.” HITL has also become a message of reassurance to customers and employees: “If you depend on AI tools, don’t worry, ‘humans’ like you will stay in the loop!”

The phrase itself is not new. “Human in the loop” comes from engineering fields (aviation, nuclear systems, industrial control) where systems were becoming increasingly automated. In 1998, the US Department of Defense’s Modeling & Simulation Glossary used “human-in-the-loop” to describe “an interactive model that requires people to participate.”

The difference between that usage and today’s is subtle but important. In 1998, the DoD was describing prescriptive, automated systems designed to perform specific procedures under controlled conditions. In classical control systems and early automation, the “loop” was a cycle: sense, decide, act, observe, then adjust. Machines collected signals (radar, gauges, telemetry) and humans made sense of the data. In the systems of the 1980s, people didn’t just intervene; they defined the goals, the limits, and the failure modes. Today’s usage keeps the same label but leaves the human with far less authority.

With the rise of LLMs and agentic AI, the loop has become something more like: the model generates, the human reviews for errors, and the agent carries on.
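To make that contrast concrete, here is a minimal sketch with toy logic and hypothetical names of my own (a thermostat-style controller and a draft-approval loop), not anything from a real system: in the first loop the human defines the goal, the limit, and the failure mode before the machine ever runs; in the second, the human’s only lever is an approve/reject prompt.

    # Illustrative sketch only: toy logic and made-up names, not a real system.
    import random

    GOAL_TEMP = 21.0          # human-defined goal
    MAX_HEATER_POWER = 0.5    # human-defined limit
    FAILURE_TEMP = 30.0       # human-defined failure mode

    def classical_loop(temp: float, steps: int = 20) -> None:
        """1980s-style loop: sense, decide, act, observe, adjust -- inside
        boundaries that people set before the machine ever runs."""
        for _ in range(steps):
            reading = temp + random.uniform(-0.2, 0.2)                    # sense
            power = min(MAX_HEATER_POWER, max(0.0, GOAL_TEMP - reading))  # decide
            temp += power - 0.1                                           # act (heating minus heat loss)
            if temp > FAILURE_TEMP:                                       # observe the failure mode
                print("failure mode hit: handing control back to the operator")
                return
        print(f"classical loop settled near the human-set goal: {temp:.1f}C")

    def hitl_loop(drafts_needed: int = 3) -> None:
        """Today's framing: the model generates, the human approves, the agent continues."""
        approved = 0
        while approved < drafts_needed:
            draft = f"draft #{approved + 1}"                      # model generates
            if input(f"approve {draft}? [y/n] ").lower() == "y":  # human reviews
                approved += 1                                     # agent continues

    if __name__ == "__main__":
        classical_loop(temp=18.0)
        # hitl_loop()  # uncomment to try the approval-only loop interactively

The difference is visible in the signatures alone: the first loop takes human-defined boundaries as its inputs; the second only ever asks the human for a yes or a no.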

The Problem of Independence

Once you start turning the phrase over in your mind, the framing looks clearly wrong. Why do we call it “human-in-the-loop” in the first place? The wording paints a picture of AI doing the work, with humans invited in somewhere along the way.

This is a fundamental design problem: language that casts AI as the protagonist and relegates humans to a supporting role, as if they were a resource of the system rather than the driving force behind it. The structure of the phrase suggests that AI is the main actor running the job, with the “human” placed as a control or quality-assurance step at the end of the automated assembly line.

In manufacturing and engineering, responsibility without authority is a known failure mode. And yet that is exactly what the HITL framing encodes: people endorsing outcomes they didn’t design. Models are generated, systems are developed, and “people” are brought in to evaluate, approve, and ultimately take responsibility if something goes wrong. In any other context, we would immediately recognize this as a flawed system, one that separates decision-making from accountability.

Then there is the word “human” itself: cold, sterile, biological, and impersonal. No wonder people mistrust these systems; the phrasing sounds like something a model would have written.

If HITL is the story we are telling the market, the current state of AI adoption should come as no surprise. If we want to fix the adoption problem, we first have to fix the framing.

The point is this: well-designed systems don’t avoid automation, they make ownership transparent. People set direction, define purpose and boundaries, and decide where judgment is needed. Automation handles the rest. When that order is clear, AI is a powerful extension of human capacity. When it isn’t, when systems are built first and people are pulled in later to review and manage risk, trust inevitably erodes. Not because the automation is too powerful, but because authority and accountability have been pulled apart.

The Uncanny Valley of Work

In a culture that prides itself on individual agency, creativity, and innovation, we’ve taken an unconventional approach to defining how humans should interact with AI.

The narrative around “AI-enabled” tools is almost always the same: fewer human touchpoints plus more automation equals greater efficiency and speed. The promise is clear: progress means less human involvement, because you only need the odd person “in the loop” to keep things from going completely wrong.

I think this framing feeds directly into today’s distrust of these tools, not because it always plays out this way, but because of the story it tells. Specifically, people worry about three things:

    1. What if I train the very systems that may (at worst) end up replacing me, or (at best) relegate me to a new role that feels less impactful or meaningful?
    2. What would this new role look like for me? Will I be expected to review, catch mistakes quickly, and approve results I didn’t create? Will my work shift from creation to a tedious cycle of revision and rubber-stamping?
    3. If something goes wrong, will I be held responsible or accountable?

Together, these worries point to what I think of as the uncanny valley of work: the feeling that this job looks like my job, the decisions resemble my judgment, everything feels normal, and yet it still feels hollow because none of it is truly mine.

This framing also inverts the roles we are used to: traditionally, people create and technology supports. Here, AI generates and optimizes the work while humans merely make selections. In that situation it is easy to feel detached from the consequences: “I don’t know, the AI decided?”

People find meaning in effort and accomplishment, so casting them as reviewers in the “loop” strips away that sense of meaning and ownership: a perfect recipe for burnout. After all, most people only tolerate administrative work when it supports meaningful or creative work, is time-bound, and has a clear purpose.

This is where the term human-in-the-loop fails: it treats people’s judgment as one step in the process, when our judgment is the very foundation of success.

Reverse that framing, on the other hand, and suddenly people are the ones setting the goals, choosing when to put AI to work, and shaping the results. When we think about AI use and adoption, we should position AI as what it already is: a power tool that helps people break down information, surface patterns, and reduce administrative work, not something that replaces human ownership or judgment.

Language as Architecture

Well-designed AI systems make ownership transparent. Humans set direction, define boundaries, and decide where judgment is needed, while automation handles the rest. In this model, AI expands what experts can do: it reveals patterns, reduces administrative workload, and speeds up decisions without eroding ownership or accountability.
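One way to read that pattern is as a guardrail design. The sketch below is mine, with hypothetical names and toy rules rather than anything from a real product: a named owner sets the boundaries before the automation runs, routine work inside those boundaries is handled automatically, and anything outside them is escalated for judgment rather than silently approved.

    # Illustrative sketch only: hypothetical names and toy rules.
    from dataclasses import dataclass

    @dataclass
    class Boundaries:
        owner: str               # a named person stays accountable for outcomes
        max_auto_spend: float    # automation may act freely below this amount
        blocked_vendors: set     # options the human has ruled out entirely

    def handle_purchase(amount: float, vendor: str, rules: Boundaries) -> str:
        """Automation handles routine cases; judgment calls go back to the owner."""
        if vendor in rules.blocked_vendors:
            return f"rejected automatically (boundary set by {rules.owner})"
        if amount <= rules.max_auto_spend:
            return "approved automatically"                   # automation handles the rest
        return f"escalated to {rules.owner} for judgment"     # human decides

    rules = Boundaries(owner="Priya", max_auto_spend=500.0,
                       blocked_vendors={"unvetted-vendor"})
    print(handle_purchase(120.0, "office-supplies", rules))   # approved automatically
    print(handle_purchase(4800.0, "office-supplies", rules))  # escalated to Priya

The ordering is the point: the boundaries exist before the automation acts, and the accountable person is named inside the system rather than bolted on at the end to approve whatever it produced.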

When AI is anchored in human intent, it becomes genuinely powerful. I see this play out every day at Quickbase with our customers and our internal product teams. The organizations that succeed with AI adoption are not trying to remove people from the system; they give domain professionals better tools to work with their data, adapt in real time, and focus their energy where it has the most impact, especially in areas shaped by labor shortages, supply chain shifts, and tight project budgets.

The reality of work is messy. Context matters, and human judgment, knowledge, and creative problem-solving are not nice-to-haves; they are the core of how real work gets done.

If we want AI systems that people trust, adopt, and stand behind (which is ultimately the only measure of whether any of this works), we need to design them around a simple rule: people own the results and AI supports the work. Not the other way around.
