What’s Happening? Popularity often comes with consequences, and in DeepSeek’s case its rapid rise has brought a share of allegations. First it was Italy’s Data Protection Authority (DPA); now it is OpenAI. OpenAI vs DeepSeek just got serious. Developing a capable AI model at a fraction of the usual cost is the hottest news in town right now, and DeepSeek’s free-to-use model shook up the entire AI industry. However, according to sources like The Indian Express, DeepSeek may have used a technique called distillation, which would have let it learn from the outputs of big AI players like OpenAI’s ChatGPT.
Let’s understand the term AI Distillation
Distillation is a process in which a newer AI model learns from the outputs of an existing, more powerful model. It gives developers a way to transfer knowledge without investing heavily in expensive computing resources. That may be how DeepSeek created such a competitive AI model at a fraction of the cost incurred by U.S. companies.
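To make the idea concrete, here is a minimal, hypothetical sketch of distillation: a small “student” model is trained to match the temperature-softened output probabilities of a fixed “teacher” model, rather than any ground-truth labels. The toy linear models, the synthetic data, and the temperature value are all illustrative assumptions, not anything reported about DeepSeek’s or OpenAI’s actual systems.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T yields softer probabilities,
    # which expose more of the teacher's "dark knowledge" about wrong classes.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# "Teacher": a fixed linear model standing in for a large, expensive model.
X = rng.normal(size=(200, 4))              # 200 queries, 4 features each
W_teacher = rng.normal(size=(4, 3))        # 3-class output head
teacher_probs = softmax(X @ W_teacher, T=2.0)  # soft targets at temperature 2

# "Student": trained only on the teacher's soft outputs, never on real labels.
W_student = np.zeros((4, 3))
lr = 0.5
for _ in range(500):
    student_probs = softmax(X @ W_student, T=2.0)
    # Gradient of the cross-entropy between teacher and student distributions
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= lr * grad

# How often the cheap student now picks the same class as the teacher
agreement = (softmax(X @ W_student).argmax(1) == teacher_probs.argmax(1)).mean()
print(f"student/teacher agreement: {agreement:.0%}")
```

The key point the sketch illustrates is the one in the controversy: the student never needs the teacher’s weights or training data, only its query responses, which is why distillation is hard to prevent once a model’s outputs are publicly accessible.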
Why is this even a problem?
- Violation of terms of service: OpenAI and other companies prohibit using their AI models’ outputs to train competing models; doing so breaches their terms of service.
- Regulatory challenges: Because model outputs are digitally accessible across the globe, preventing AI model distillation is next to impossible, especially when compared to enforcing semiconductor export controls.
The U.S. reaction:
“I think they [China] only care about themselves and seek to harm us. And so we need to protect ourselves,” Lutnick said Wednesday during his confirmation hearing, when asked how he would address competition threats from China. David Sacks, the White House’s AI and crypto czar, also raised these concerns in a recent Fox News interview.
Other than that:
- Technical barriers: U.S. firms like Groq have started blocking Chinese IP addresses to cut off access to their AI services.
- Investigation: OpenAI is reviewing its systems for suspicious activity that would show DeepSeek improperly used its models.
- Potential U.S. sanctions: OpenAI has pledged to collaborate with the U.S. government to protect American AI technology, a move that could pave the way for sanctions.
Why Stopping DeepSeek is Difficult:
- Small is enough: Even if the DeepSeek model shows signs of distillation, fewer than a million data points from a larger model may have been enough to boost its AI. Detecting that is like finding a needle in a haystack.
- Evasion tactics: Even if the U.S. puts up hurdles or imposes sanctions, China-based developers can find workarounds through third parties.
- Open-Source Models Exist: AI models like Meta’s Llama and Mistral can be freely downloaded, making it hard to track unauthorized use.
What are Netizens saying?
DeepSeek: doesn’t that sound a lot like DeepFake? People on the internet have mixed opinions. Nitika Sawhney listed three points in a recent LinkedIn post:
1. The platform has already faced a cyberattack, compelling it to cease new user registrations.
2. Analysts suspect the platform could pose national security risks.
3. Microsoft is already investigating whether DeepSeek improperly obtained OpenAI data.
What’s Next?
Will the U.S. opt for strict “know-your-customer” rules for AI companies? As with banking regulations, keeping a record of who uses American AI models would be vital. Whether that happens remains uncertain, especially with a potential shift in U.S. political leadership. Stay tuned to learn more!
Suggested Topic: Italy Demands Answers from DeepSeek: Is Your Data at Risk?