Ban on lobster farming? Anthropic plans to charge extra for third-party tool calls, effectively blocking OpenClaw


AI (artificial intelligence) startup giant Anthropic has dealt a heavy blow to developers who use OpenClaw (“lobster”) with its flagship model Claude.

On April 3 local time, according to technology website The Verge, Anthropic is preparing to block third-party integration tools such as OpenClaw from accessing Claude’s subscription services. Users must enable a dedicated pay-as-you-go mode to continue using OpenClaw with Claude.

An email the company sent to users shows that, starting at 3:00 p.m. Eastern Daylight Time on April 4, users accessing Claude via OpenClaw will find that their existing subscription allotments no longer apply.

This means that Anthropic is cutting off access to its standard subscription plans for third-party call wrappers, including OpenClaw, forcing users onto a separate per-usage billing system. Under this model, the cost per token is typically higher than under the subscription quota, so developers will face costs that exceed their established budgets and are difficult to predict.

For most users who use OpenClaw and Claude together, this adjustment is, in essence, equivalent to a ban. Although Anthropic has not directly blocked OpenClaw access on the technical level, for teams that have built complete workflows around OpenClaw and rely on it to call Claude through commonly used interfaces, this change will immediately bring both financial and operational pressure.

Public information shows that Anthropic was founded in 2021 by former OpenAI employees. Its products include the Claude series of large language models. Since its founding, the company has successively received investments from tech giants such as Amazon, Google, Nvidia, and Salesforce.

In response to this billing-policy change, many developers have already voiced dissatisfaction on social media, complaining of platform instability and a crisis of trust. Some well-known AI developers pointed out that they chose the Claude platform because Anthropic seemed more willing than its competitors to build a third-party ecosystem, and that this policy change undermines that very advantage.

Behind this is increasingly fierce competition among AI companies. OpenClaw originally emerged from Claude, relying entirely on Claude to provide intelligence capabilities, and—under Anthropic’s requirements—changed its name from “Clawdbot” to the “lobster” that is widely known today. After that, OpenClaw’s founder Peter Steinberger joined OpenAI.

By raising the cost of using third-party tools, Anthropic is trying to steer users toward its own ecosystem. Earlier this year, Anthropic released a desktop agent application called Claude Cowork. Recently, Anthropic also rolled out its own “lobster,” announcing that users of its Claude Code and Claude Cowork can let Claude control their own computers—open files, use the browser, and run development tools.

Not long ago, Anthropic’s coding assistant Claude Code was also unexpectedly “open-sourced”: a leak exposed 510,000 lines of source code, including 4,756 source files, more than 40 tool modules, and multiple unreleased features. For a startup like Anthropic that emphasizes “safety” and is actively pursuing an IPO, a source-code leak is undoubtedly a major blow.

(Source: The Paper)
