Anthropic temporarily blocked OpenClaw’s creator from accessing Claude

“Yes folks, it will be very difficult in the future to ensure that OpenClaw still works with Anthropic models,” OpenClaw creator Peter Steinberger posted on X early Friday morning, along with a photo of a message from Anthropic saying his account was suspended for “suspicious” activity.
The ban did not last long. A few hours later, after the post went viral, Steinberger said his account had been restored. Among the hundreds of replies, many of them veering into conspiracy theory (Steinberger is now employed by Anthropic's rival OpenAI), was one from an Anthropic engineer. That engineer told the developer that Anthropic had never intentionally blocked anyone from using OpenClaw and offered to help.
It is not clear whether that offer was what got the account restored. (We asked Anthropic about it.) But the whole thread was enlightening on several levels.
To recap recent history: the ban follows news last week that subscriptions to Anthropic's Claude will no longer cover "third-party harnesses, including OpenClaw," the AI model company said.
OpenClaw users now have to pay for that usage separately, metered through Claude's API. In effect, Anthropic, which offers its own agent, Cowork, is now charging what amounts to an "agent tax." Steinberger said he was following the new rule and paying through the API, but was banned anyway.
Anthropic said it made the pricing change because subscriptions aren't designed to handle the usage patterns of agents. Agents can be far more demanding than simple prompts or scripts because they can run continuous loops, automatically repeat or retry operations, and hook into many other third-party tools.
Steinberger, however, wasn't buying it. After Anthropic changed the pricing, he wrote: "Funny how the timing lines up. First they copy some popular features into their closed harness, then they lock out the open-source one." Although he did not specify, he may have been referring to features added to Anthropic's Cowork agent, such as Claude Dispatch, which lets users remotely control agents and assign them tasks. Dispatch launched a few weeks before Anthropic changed its OpenClaw pricing policy.
Steinberger’s frustration with Anthropic was on display again on Friday.
One person suggested that some of this is on him for taking a job at OpenAI rather than Anthropic, writing that "he had a choice, and he made a bad one." Steinberger replied: "One accepted me, the other sent legal threats."
Wow.
When several people asked why he was using Claude rather than his employer's models at all, he explained that he uses it only for testing, to make sure that updates to OpenClaw don't break things for Claude users.
He explained: “You need to separate two things. My work at the OpenClaw Foundation where we want to make OpenClaw work well for *any* model provider, and my work at OpenAI to help them with future product strategies.”
Several people also pointed out that the need to keep testing Claude suggests the model remains a more popular choice among OpenClaw users than ChatGPT. Steinberger heard that point too, replying: "Working on that." (Which is some indication of what his work at OpenAI entails.)
Steinberger did not respond to a request for comment.