Meta's refusal to sign the European Union's voluntary code of practice for AI models marks a notable moment in the ongoing debate over technology, regulation, and business expansion in Europe. Meta's leadership argues that the EU's approach, and the new voluntary code in particular, imposes far more legal constraints, uncertainties, and demands than the AI Act originally envisioned. They contend that accepting these conditions could cripple the company's ability to develop and deploy powerful AI models in Europe, turning the continent into a chilling environment for new AI-based products.
The main point of contention is how binding and strict the EU's demands are. The new code would require companies to maintain up-to-date documentation, ensure that training data is not pirated or acquired through illicit means, and respect creators' rights by honoring requests to exclude their material from training datasets.
Meta and many other large tech firms argue that such sweeping compliance requirements raise costs, open legal gray areas, and could suppress the competitive drive needed to produce new advances in AI.
This opposition is not unique to Meta. Since development of the AI Act began, many European companies and global tech giants have cautioned the Commission against rushing to implement vague rules that could ultimately harm local start-ups and larger corporations alike.
Looking ahead, Europe's technology sector finds itself at a crossroads. Strict adherence to regulation may keep consumers safe and uphold important values, but it may also slow AI innovation or divert investment elsewhere.
The next two years will show whether the EU can strike this balance or whether its regulatory approach inadvertently sets Europe a step back in the global AI race.