Meta Indicates It May Halt Development of Extremely Risky AI Systems

Meta CEO Mark Zuckerberg has committed to one day making artificial general intelligence (AGI), AI capable of performing any task a human can, openly available. However, a new policy document from Meta suggests that in certain cases, the company may choose not to release highly advanced AI systems it has developed internally.

AI System Risks:

In the document, named the Frontier AI Framework, Meta identifies two categories of AI systems it considers too risky to release: “high risk” and “critical risk”. According to Meta, both classifications cover AI systems that could aid in breaching cybersecurity measures or in carrying out chemical and biological attacks. Critical-risk systems could cause a “catastrophic outcome that cannot be mitigated in a proposed deployment context,” whereas high-risk systems may facilitate attacks, but not as effectively or reliably as critical-risk ones.

Meta provides examples of potential threats, such as the “automated end-to-end compromise of a best-practice-protected corporate-scale environment” and the “proliferation of high-impact biological weapons”. Meta says it does not “believe the science of evaluation is sufficiently robust as to provide definitive quantitative metrics for deciding a system’s riskiness”. The company acknowledges that its list is not exhaustive but represents what it views as “the most urgent” and plausible risks arising from the release of powerful AI.

Notably, Meta measures system risk not through a single empirical test but through input gathered from internal and external researchers, with the final decision resting with senior executives. According to the company, current assessment methods are simply not “sufficiently robust” to support definitive quantitative risk thresholds.

If an AI system is classified as high-risk, Meta says it will limit access to the system internally and withhold its release until mitigations reduce the risk to a moderate level. If a system is deemed to reach critical-risk status, Meta will restrict access to it entirely by putting security protections in place and will suspend its development until the system can be made less dangerous.

Meta’s Frontier AI Framework:

Meta’s Frontier AI Framework is designed to evolve alongside advancements in AI and aligns with the company’s prior commitment to publishing it before the France AI Action Summit. This initiative appears to be a response to criticism regarding Meta’s open approach to AI development. In contrast to companies like OpenAI, which restrict access to their AI systems by putting them behind an API, Meta has generally favoured a comparatively more open yet still controlled access to its AI models.

While this approach has made its Llama AI models widely popular, it has also been fairly contentious, especially amid reports that adversaries of the U.S. have used Llama to create a defence chatbot. With the announcement of the Frontier AI Framework, Meta may also be trying to distinguish its stance from that of DeepSeek, a Chinese AI company that follows a similar path of openly releasing its models but with fewer safeguards against harmful content creation.

Meta says, “[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI, it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk.” In short, Meta aims to develop advanced AI with an approach that maximizes the societal benefit of innovation while minimizing its risks.



Fatimah Misbah Hussain
Fatimah Misbah Hussain is a tech writer at TECHi.com who transforms complex topics into accessible, compelling content for a global audience. She covers emerging trends, offers insightful updates, and explores technology’s evolving impact on society with clarity and depth.
