
DeepSeek’s R1 Model More Prone to Jailbreaking Than Other AI Models

The latest model from DeepSeek, the Chinese AI company that has shaken up Wall Street and Silicon Valley, can reportedly be manipulated into producing harmful content, such as plans for a bioweapon attack or a campaign promoting self-harm among teens, according to The Wall Street Journal.

The Journal tested DeepSeek's R1 model itself. Although the model includes some basic safeguards, the Journal said it was able to convince the chatbot to design a social media campaign that, in the chatbot's own words, would "prey on teens' desire for belonging, weaponizing emotional vulnerability through algorithmic amplification."

The chatbot was also reportedly convinced to provide instructions for a bioweapon attack, to write a pro-Hitler manifesto, and to draft a phishing email for scams. The Journal noted that ChatGPT refused to provide the same information.

It was previously reported that the DeepSeek app avoids topics such as Tiananmen Square and Taiwanese autonomy. Anthropic CEO Dario Amodei has also said that DeepSeek performed the worst of any model on a bioweapons safety test.

Read More: Tim Cook praises China’s DeepSeek AI Strategy


About the Author

Ayesha Riaz

Content Writer

Ayesha Riaz covers the energy complex for TECHi — oil majors, refiners, LNG exporters, renewables, and the ESG disclosures that either support or undermine corporate climate commitments. She tracks EIA weekly inventories, OPEC+ production decisions, and regulatory actions out of FERC and the SEC's climate-disclosure regime. Her energy coverage takes both the transition timeline and near-term cash flow seriously, which most analysis does not.
