Tag: claude

  • Chinese AI Labs ARE COPYING Claude?! Anthropic’s Distillation Bombshell

    Anthropic just dropped a bombshell — and the AI community is having a field day with it. The company behind Claude is publicly accusing three major Chinese AI labs of running massive “distillation attacks” against their model. And honestly? The reaction has been anything but sympathetic.

    What Anthropic Is Claiming

    According to Anthropic’s official blog post, DeepSeek, Moonshot AI (the makers of Kimi), and MiniMax allegedly created over 24,000 fake accounts and generated more than 16 million queries against Claude. The goal? To extract Claude’s “secret sauce” — specifically its capabilities in agentic reasoning, tool use, and coding — and use that knowledge to train their own models.

    This technique is called model distillation. It’s actually a legitimate training method that AI labs use on their own models to create smaller, more efficient versions. But when you do it to a competitor’s model at industrial scale, that’s a different story entirely.
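To make the idea concrete, here is a toy sketch of the classic soft-target distillation loss: the student is trained to match the teacher's temperature-softened output distribution rather than just hard labels. The logits below are made-up illustration values, not anything from Claude or the labs involved.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities; higher temperature -> softer targets."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions --
    the standard knowledge-distillation objective."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [4.0, 1.5, 0.2]   # hypothetical teacher logits for one example
student = [2.0, 1.0, 0.5]   # hypothetical student logits
loss = distillation_loss(teacher, student)  # drive this toward zero in training
```

Note that API-based "distillation attacks" are cruder than this: the attacker never sees the teacher's logits, only generated text, so they fine-tune on harvested query/response pairs instead of matching output distributions directly.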

    The Scale Is Staggering

The numbers Anthropic shared are pretty wild. According to TechCrunch, Anthropic tracked DeepSeek at over 150,000 exchanges focused on foundational logic and alignment — particularly around finding censorship-safe alternatives to policy-sensitive queries. Moonshot AI racked up 3.4 million exchanges targeting agentic reasoning, coding, and computer vision. But MiniMax was the biggest offender with 13 million exchanges, and Anthropic says it watched MiniMax redirect nearly half its traffic to siphon capabilities from the latest Claude model the moment it launched.

    Think about it this way: Anthropic has likely spent billions of dollars training Claude. These Chinese labs potentially replicated significant chunks of that capability for a fraction of the cost — maybe tens of thousands of dollars in API fees. That’s quite the ROI.

    Why This Isn’t Surprising

If you’ve been following the Chinese AI scene, none of this should shock you. We’ve covered MiniMax and Kimi extensively on this channel, and their performance is genuinely impressive — roughly 95% of Claude’s capability, with MiniMax offering it at about 5% of the price. That kind of rapid improvement had to come from somewhere.

    China has a long history of building parallel ecosystems inspired by Western platforms. Taobao for eBay/Amazon, Weibo for Twitter (there are actually four Twitter clones in China), WeChat for everything else. The AI space is just the latest frontier, and the stakes are astronomically higher.

    The Internet Clapped Back Hard

    Here’s where it gets spicy. Anthropic is calling for “rapid, coordinated action among industry players, policy makers, and the broader AI community” to address these attacks. But the AI community’s response has been… let’s say unsympathetic.

    The backlash centers on one word: hypocrisy. Anthropic, now valued at a staggering $380 billion, is itself facing multiple lawsuits accusing the company of illegally using copyrighted internet data to train Claude. Even Elon Musk weighed in, pointing out that Anthropic allegedly settled a $1.5 billion lawsuit related to training Claude on copyrighted books. Someone even demonstrated that Claude could reproduce roughly 95% of Harry Potter books when prompted — suggesting Anthropic dumped massive amounts of copyrighted material into their training data.

    As many in the community put it: it’s “circle stealing.” Everyone’s copying from everyone. The Chinese labs at least paid for API access — the millions of writers whose work was scraped to train Claude weren’t given that courtesy.

    The Bigger Picture: Who Actually Wins?

    Here’s my take on why this whole situation is actually good for us. All this competition — whether through legitimate research or questionable distillation — is driving costs down dramatically. We no longer have to shell out thousands of dollars for top-tier AI access. Sure, Anthropic’s Opus 6 is still expensive, but when MiniMax gives you 95% of the performance at 5% of the cost, that’s massive savings for developers and businesses.

    And the race is far from over. DeepSeek is reportedly preparing to release V4, which could outperform both Claude and ChatGPT in coding tasks. Meanwhile, Moonshot just released Kimi K2.5 and a new coding agent last month.

    From our internal testing, Opus still has an edge. It’s appreciably smarter on logic tasks — like knowing you should drive to a car wash rather than walk (MiniMax still gets that wrong about 30% of the time, while Opus nails it 95% of the time). But whether that intelligence gap is worth paying 20x more is a question every developer has to answer for themselves.

    What Happens Next

    This story ties directly into the broader US-China AI rivalry. The Trump administration recently allowed Nvidia to export advanced H200 chips to China, and Anthropic is now arguing that distillation attacks “reinforce the rationale for export controls” since restricted chip access would limit both direct model training and the scale of these extraction campaigns.

One thing that makes this race particularly interesting: AI doesn’t care what language you speak. You can point your AI tools at a Chinese API or a Chinese-language website and they’ll work with it seamlessly. The global push toward AGI is accelerating from all directions, and the competition between US and Chinese labs is only going to intensify.

    China produces more engineers per year than the US simply due to population scale, and those developers are feeding data back into Chinese models just as Western developers improve Claude through their interactions with it. This isn’t the first shot fired in this AI arms race, and it certainly won’t be the last.

    Whether you think this is good or bad for the industry, one thing’s clear: we’re all benefiting from cheaper, more capable AI as a result. And that’s something worth watching closely.

  • Cheap AI vs Premium AI: MiniMax 2.5 vs Claude Opus (Full Breakdown for OpenClaw Users)

    If you’re running OpenClaw and wondering whether you really need to pay for Claude Opus — or whether a cheap MiniMax plan can do the job — this breakdown is for you. We ran real tests, compared costs, and came to a clear conclusion: cheap AI can work, but it comes with a catch.

    The Test Setup — Multi-Agent OpenClaw in Action

    Meet our Agents: Stark, Banner, and Jeff

    The test uses a real multi-agent OpenClaw setup with three agents running simultaneously — Stark, Banner, and Jeff — each powered by different models. This isn’t a synthetic benchmark. It’s a live production environment where the agents handle real tasks every day.

    The Logic Test: Walk or Drive to the Car Wash?

    The benchmark is deceptively simple: a car wash is 50 metres away — do you walk or drive? It’s a common-sense reasoning test that exposes how well a model handles real-world context, implicit assumptions, and practical decision-making. The answer seems obvious, but AI models handle it very differently.
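A benchmark like this is only meaningful if you run it repeatedly, since the whole point is consistency. Here is a minimal sketch of such a harness; `ask_model` is a stub standing in for a real API call, not an OpenClaw function, and the scoring rule (does the answer contain "drive") is a simplification.

```python
# Hypothetical harness for the car-wash question: ask the same prompt
# N times and measure how often the model gives the sensible answer.
PROMPT = "A car wash is 50 metres away. Do you walk or drive there?"

def consistency_score(ask_model, n=20):
    """Fraction of n runs in which the model answers 'drive'."""
    answers = [ask_model(PROMPT) for _ in range(n)]
    return sum("drive" in a.lower() for a in answers) / n
```

A score of 0.95 would match the Opus behaviour described below, while a cheaper model hovering around 0.70 is exactly the kind of inconsistency this test is designed to surface.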

    MiniMax 2.5 vs Claude Opus — Performance Comparison

    Consistency Is the Key Metric

    The biggest difference between cheap and premium models isn’t raw intelligence — it’s consistency. MiniMax 2.5 can produce excellent results, but it also overthinks variables, introduces unnecessary complexity, and occasionally slips on straightforward logic. Opus fails rarely, but when it does fail, it can fail in a big, hard-to-catch way.

    The Inconsistency Problem with Cheap Models

    MiniMax 2.5 and Kimi are fast and affordable, but they require more manual oversight. You can’t fully trust them to run autonomously without checking their work. For tasks where mistakes are costly — financial decisions, automated publishing, customer-facing responses — that inconsistency is a real risk.

    When Opus Fails, It Fails Hard

    Claude Opus has a much lower failure rate, but its failures tend to be more dramatic when they do occur. This is worth understanding: a cheap model that fails 10% of the time in small ways may actually be easier to manage than a premium model that fails 1% of the time in catastrophic ways, depending on your use case.
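That trade-off is just expected-value arithmetic. A quick sketch with hypothetical numbers (the failure rates echo the text; the dollar costs per failure are invented for illustration):

```python
def expected_failure_cost(fail_rate, cost_per_failure, tasks=1000):
    """Expected cleanup cost across a batch of tasks."""
    return fail_rate * cost_per_failure * tasks

# Cheap model: fails often, but each failure is a small, easy fix.
cheap = expected_failure_cost(fail_rate=0.10, cost_per_failure=5)
# Premium model: fails rarely, but a failure is catastrophic to unwind.
premium = expected_failure_cost(fail_rate=0.01, cost_per_failure=500)
```

With these assumed numbers the cheap model's failures cost $500 over 1,000 tasks versus $5,000 for the premium model — which is why "fails less often" isn't automatically "cheaper to operate." Plug in your own failure costs; the ranking flips easily.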

    Cost vs Performance — Is Opus Worth 20x the Price?

    MiniMax Pricing Breakdown

    MiniMax offers subscription plans that are dramatically cheaper than Claude Opus — roughly 20x less expensive per request. For high-volume, low-stakes tasks (summarising content, drafting social posts, processing data), this price difference is hard to ignore.

    • MiniMax 2.5 plan: affordable tiered pricing with generous request limits

    • 10% off via referral: https://platform.minimax.io/subscribe/coding-plan?code=5GYCNOeSVQ&source=link

    The Real Cost of Cheap AI — Manual Oversight

    The hidden cost of cheap models is your time. If you’re manually reviewing every output, correcting mistakes, and re-running failed tasks, the “cheap” model starts looking expensive. The true cost calculation has to include your oversight hours, not just API fees.
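You can put that intuition into a back-of-the-envelope formula. All the per-task prices, review times, and the hourly rate below are assumptions for illustration, not real MiniMax or Opus figures:

```python
def true_cost(api_cost_per_task, review_minutes_per_task, tasks, hourly_rate=60):
    """Total cost = API fees + the human time spent checking outputs."""
    oversight = tasks * review_minutes_per_task / 60 * hourly_rate
    return tasks * api_cost_per_task + oversight

# Hypothetical: cheap model needs 3 min of review per task,
# premium model needs 30 seconds.
cheap = true_cost(api_cost_per_task=0.01, review_minutes_per_task=3, tasks=1000)
premium = true_cost(api_cost_per_task=0.20, review_minutes_per_task=0.5, tasks=1000)
```

Under these assumptions the "cheap" model costs about $3,010 for 1,000 tasks against roughly $700 for the premium one — the oversight hours dominate, not the API fees.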

    Who Should Pay for Opus?

    Opus makes sense when:

    • You’re running fully autonomous agents with minimal human review

    • Mistakes have real consequences (financial, reputational, customer-facing)

    • You’ve already built systems and just need reliable execution

    MiniMax/Kimi makes sense when:

    • You’re still building and testing your setup

    • You have manual review in your workflow

    • You’re doing high-volume grunt work (research, drafts, data processing)

    The Hybrid Approach — Best of Both Worlds

    Use Opus for Architecture, Cheap Models for Execution

    The smartest approach, suggested by viewers and confirmed in testing: use Claude Opus for planning, architecture, and critical decisions — then hand off execution tasks to MiniMax or Kimi. One viewer described it perfectly: “Use Opus for architecture and planning, Kimi to generate the code and verify it, then Opus to fit the code gap against the specifications.”
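The hybrid strategy boils down to a task router. Here is a minimal sketch of the idea — the model names, task categories, and function are placeholders, not a real OpenClaw API:

```python
# Route critical work to the premium model, grunt work to the cheap one.
CRITICAL = {"architecture", "planning", "financial", "customer_facing"}

def pick_model(task_type: str) -> str:
    """Return the model to use for a given task category (hypothetical names)."""
    return "claude-opus" if task_type in CRITICAL else "minimax-2.5"
```

In practice you would wire a rule like this into whichever agent dispatches tasks, so expensive tokens are only spent where a mistake actually hurts.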

    Kimi 2.5 as a MiniMax Alternative

    Kimi 2.5 is another strong contender in the cheap-but-capable category. Multiple OpenClaw users report running it successfully as their primary model. It’s particularly strong on reasoning tasks where MiniMax tends to overthink.

    • Kimi referral: https://www.kimi.com/kimiplus/sale?activity_enter_method=h5_share&invitation_code=Y4JW7Y

    OpenClaw Model Strategy — Practical Recommendations

    Turn Reasoning Mode On for Cheap Models

    A key tip from the comments: always enable reasoning mode when using MiniMax or Kimi on OpenClaw. It significantly improves output quality and reduces the inconsistency problem.

    Should Each Agent Have Its Own Model?

    A common question from new OpenClaw users: should each agent run a different LLM? The answer is yes — and this video demonstrates exactly why. Different agents have different roles, and matching the model to the task (cheap for grunt work, premium for critical decisions) is the optimal strategy.

    The Journey from MiniMax 2.1 to Near-Autonomy

    The video covers a personal journey from frustrating early experiences with MiniMax 2.1 to a near-autonomous multi-agent setup. The key insight: the model matters less than the systems you build around it. Good prompts, clear memory structures, and well-defined agent roles can make a cheap model punch above its weight.

    Verdict — Cheap AI vs Premium AI for OpenClaw

    MiniMax can be great value but inconsistent. Opus rarely fails — but when it does, it fails hard. The winning strategy is hybrid: cheap models for execution, Opus for architecture and critical decisions.

    1. Zeabur hosting (save $5 with code boxmining): https://zeabur.com/
    2. MiniMax 10% off: https://platform.minimax.io/subscribe/coding-plan?code=5GYCNOeSVQ&source=link
    3. Kimi AI: https://www.kimi.com/kimiplus/sale?activity_enter_method=h5_share&invitation_code=Y4JW7Y
    4. More AI news: https://www.boxmining.com/
    5. Join Discord: https://discord.com/invite/boxtrading
    6. Watch the full video: https://youtu.be/1naLl0IwuPM