OpenAI, Anthropic and Google are teaming up in a way that feels different from the usual rivalry. For years, the biggest AI companies have acted like they’d rather compete than collaborate. OpenAI pushes for the best chatbot. Google focuses on the smartest assistant. Anthropic aims for the safest model. So when these rivals begin sharing information, it usually means something has shifted from a company problem to an industry problem.
That appears to be the case now. A Bloomberg report says OpenAI, Anthropic and Google are working together through the Frontier Model Forum to detect and block what’s being called adversarial distillation, a technical term for the way someone may be using outputs from advanced US AI models to build cheaper copycat systems, reportedly in China. It marks an important moment because it shows how AI competition, national security and business protection are all starting to collide in a complicated way.
Quick Highlights
- OpenAI, Anthropic and Google are sharing data on model copying attempts
- The concern is adversarial distillation, not normal AI training
- DeepSeek’s rise helped trigger the alarm
- Cybersecurity-style cooperation is becoming the new playbook
Why this suddenly matters so much
Here’s the thing: AI model copying isn’t just a tech nerd argument about training methods. It’s about money, speed and power. Building frontier AI models costs a huge amount of compute, talent and time. If another company can quietly extract a model’s behavior and recreate something similar at a fraction of the cost, that’s not a small shortcut. That’s a serious competitive threat.
And the fear goes beyond losing business. If a model is copied without the original safety layers, the result could be a system that’s powerful but far less controlled. That’s why this isn’t being treated like ordinary software cloning. AI models are more like trained decision engines than simple apps. You can copy the output style, the reasoning patterns and sometimes even the quirks. That makes the whole problem much more slippery.
For the average reader, the easiest comparison is this: imagine a chef spends years perfecting a signature dish, only for another restaurant to taste it over and over, reverse-engineer the recipe and sell a near-identical version for half the price. The result may look similar from the outside, but the original creator has taken the hit.
What adversarial distillation actually means
Distillation itself isn’t shady. In fact, it’s a normal technique in machine learning. A large “teacher” model helps train a smaller “student” model so the smaller one can be cheaper and faster to run. AI labs do this all the time for legitimate reasons. It’s one of those technical processes that sounds more dramatic than it is.
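To make that concrete, here’s a minimal sketch of ordinary, legitimate distillation using PyTorch. The models are toy stand-ins, not anything a lab actually ships, but the core mechanic is visible: the student is trained to match the teacher’s softened output distribution instead of learning everything from raw labels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins: the "teacher" is bigger, the "student" is cheaper to run.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens both distributions so more signal transfers

for step in range(100):
    x = torch.randn(64, 32)  # stand-in for real training inputs

    with torch.no_grad():
        teacher_logits = teacher(x)  # the expensive model's behavior

    student_logits = student(x)

    # Classic distillation loss: KL divergence between softened
    # teacher and student distributions (Hinton et al. style).
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The temperature trick is the standard one from the original distillation literature: softening the distributions exposes more of the teacher’s knowledge about how answers relate to each other, which is exactly what makes the method so effective, for good uses and bad ones alike.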
But adversarial distillation is the bad version of that story. Instead of using the method with permission, a third party may repeatedly query a model, collect its responses and use that behavior to train a competing model. If done at scale, it can give the impression that the new system learned all that intelligence from scratch, when really it may have borrowed heavily from someone else’s expensive work.
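The adversarial version needs surprisingly little machinery, which is part of why it worries the labs. The sketch below is purely illustrative: query_model is a hypothetical stand-in for calls to someone else’s commercial API, and the prompts and filename are invented. The point is that the whole operation is really just systematic output harvesting followed by ordinary supervised fine-tuning.

```python
import json

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to another lab's model API."""
    return "detailed response to: " + prompt

# Probe the target broadly, favoring prompts that elicit full reasoning.
prompts = [f"Explain step by step: problem {i}" for i in range(1000)]

# Harvest the model's behavior as (prompt, response) pairs...
with open("distillation_corpus.jsonl", "w") as f:
    for p in prompts:
        f.write(json.dumps({"prompt": p, "response": query_model(p)}) + "\n")

# ...then fine-tune a cheaper model on that corpus, exactly as you would
# with any supervised dataset. No single query here looks malicious;
# it's the aggregate pattern the labs are trying to detect.
```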
That’s the core issue OpenAI, Anthropic and Google are now trying to tackle together. Not just “are people using our models?” but “are they extracting enough of our model’s behavior to recreate something close enough to compete with us?” It’s a subtle difference, but a huge one.
How DeepSeek changed the conversation
This whole topic got louder in January 2025 when Chinese startup DeepSeek released its R1 reasoning model. People in the AI world noticed quickly. Not because the model was bad, but because it was surprisingly capable. That kind of leap tends to make everyone nervous, especially when it appears to come out of nowhere.
Bloomberg reported that Microsoft and OpenAI looked into whether DeepSeek had extracted large amounts of data from OpenAI’s models. OpenAI later told US lawmakers that DeepSeek had used more advanced methods to pull results from its systems, even after safeguards were tightened. The company also accused DeepSeek of trying to free-ride on the work of OpenAI and other US frontier labs.
Now, to be fair, public accusations and actual proof are two different things. But in the AI industry, even the possibility of large-scale model extraction is enough to trigger defensive action. The stakes are that high. If one company’s model can be mirrored cheaply, the advantage of spending billions on research starts to look fragile. And that’s a pretty uncomfortable thought for the whole sector.
What the big three have already done
Anthropic seems to have moved first. Last year, it blocked Chinese-controlled companies from accessing its models. In February, it specifically named DeepSeek, Moonshot and MiniMax as having allegedly extracted model capabilities through distillation. Anthropic also warned that the threat goes beyond one region or one company, which is probably the most realistic part of the entire debate. Once a method becomes useful, people everywhere try it.
Google has also acknowledged the problem. In a blog post, it said it had seen a rise in model extraction attempts. OpenAI confirmed it’s participating in the Frontier Model Forum’s information-sharing effort, while Google and Anthropic didn’t comment directly to Bloomberg on the collaboration.
That silence doesn’t necessarily mean much. In fast-moving AI policy stories, companies often say less than you’d expect, especially when legal and regulatory boundaries are still fuzzy. But the pattern is clear enough: these firms are no longer treating this as an isolated issue. They’re treating it like a shared defense problem.
Why this kind of collaboration feels unusual, but not random
It does feel a bit strange at first. OpenAI, Anthropic and Google are competitors. They’re all chasing the same users, the same enterprise contracts and, let’s be honest, a lot of the same talent. Under normal circumstances, they don’t exactly sit around comparing notes on weaknesses.
But this situation is a lot like cybersecurity. When one company sees a new attack pattern, sharing that information can protect everyone else. If a phishing scam, malware strain or hacking tactic is spotted early, the whole industry benefits from knowing about it. The AI model copying problem is starting to follow the same logic.
Instead of waiting for each company to separately discover the same abuse pattern, the Frontier Model Forum lets them compare signals, identify suspicious extraction behavior and possibly spot which actors are involved. That means faster detection and, hopefully, better defenses. It’s practical. Maybe even overdue.
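Neither the Forum nor the companies have published what a shared signal actually looks like, so the sketch below is speculation about the shape such telemetry could take: a hashed account fingerprint plus a few behavioral features, and a crude volume-and-breadth heuristic. Every field name and threshold here is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class ExtractionSignal:
    """Hypothetical record a lab might share; no public schema exists."""
    account_fingerprint: str       # hashed, so labs never exchange raw identities
    daily_queries: int
    distinct_topics: int
    reasoning_prompt_ratio: float  # share of prompts requesting full reasoning traces

def looks_like_extraction(sig: ExtractionSignal) -> bool:
    # Crude heuristic with invented thresholds: extraction tends to be
    # high-volume, broad in coverage, and skewed toward reasoning-heavy prompts.
    return (
        sig.daily_queries > 50_000
        and sig.distinct_topics > 500
        and sig.reasoning_prompt_ratio > 0.8
    )
```

The pooling is the interesting part: one account tripping a check like this at a single provider is weak evidence, but the same fingerprint showing the same pattern at several labs is a much stronger signal, which is exactly the cybersecurity logic described above.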
Still, there’s a catch. Bloomberg says the collaboration is limited right now because companies aren’t fully sure what they’re allowed to share under existing antitrust guidance. That’s the kind of boring legal detail that ends up shaping major technology policy. In other words, everyone agrees the problem is real, but the rules around cooperation are still a bit hazy.
What the US government is watching
The policy angle here is getting stronger too. The Trump administration has signaled openness to helping companies coordinate on these issues, and its AI Action Plan reportedly calls for a dedicated information-sharing and analysis center. That sounds bureaucratic, sure, but the idea is simple: give companies a structured way to report threats and compare notes without constantly worrying they’re crossing a legal line.
That said, government support doesn’t automatically solve everything. AI moves fast. Regulations do not. And when the technology in question is being used globally, by companies in multiple countries with different rules, enforcement gets complicated very quickly.
Here’s where a lot of readers might underestimate the problem. This isn’t just about one model or one startup. It’s about whether frontier AI research can be protected in a world where the cost of imitation is falling. If copying becomes easy enough, the leaders have less incentive to keep spending at the same level. That could slow down innovation, or push labs into a more defensive, secretive posture. Neither outcome feels especially great.
| Term | Simple meaning | Why it matters |
|---|---|---|
| Distillation | Using a big model to train a smaller one | Normal and often useful when done with permission |
| Adversarial distillation | Using model outputs without permission to copy behavior | Can erode business advantage and safety controls |
| Model extraction | Trying to infer how a model works by probing it repeatedly | A key method behind suspected copying |
| Frontier Model Forum | Industry group formed by leading AI companies | Provides a place to share threat intelligence |
So what does this mean for the rest of us?
If you’re not building AI models yourself, it might be tempting to shrug and move on. But this story matters because the shape of AI products you use every day depends on what happens behind the scenes. If companies feel forced to lock things down harder, models could become more expensive, less open or more restricted. If they don’t, the next generation of AI might arrive with copied capabilities and weaker safety.
There’s also a bigger cultural shift here. For years, AI development has been sold as this almost mythic race: whoever builds the smartest model wins. But this story shows that the race is also about defense, leakage and trust. Not every breakthrough is visible from the outside. Some of the most important work is now about making sure others can’t quietly take what took years to build.
And honestly, that’s the part that feels most telling. When rivals start comparing notes, it usually means the problem is bigger than any one company wants to admit. OpenAI, Anthropic and Google aren’t teaming up because they suddenly got sentimental. They’re doing it because the incentives changed.
Maybe this becomes the new normal for AI: fierce competition on the product side, but a cautious alliance when it comes to security and model protection. Or maybe the legal and technical gray areas slow everything down. Either way, the fact that these companies are coordinating at all says a lot about where the industry is heading.
And if you’ve been following AI for even a little while, you probably know this already: the smartest moves in tech are often the ones made after the alarm bells start ringing. The interesting question now is whether this partnership can actually make copying harder before the next big model race heats up again.
That’s the real story here. Not just that AI labs are fighting back, but that they’re being forced to rethink what competition even means in a world where imitation is getting cheaper by the day. Will this kind of collaboration stay limited, or turn into a much broader defense strategy? That’s worth keeping an eye on.