Hugh Naylor
@hughnaylor
“A recent experimental stress-test of OpenAI’s o3 model found that it at times wrote special code to stop itself from being shut down. Google has said that a recent version of its Gemini model is approaching a point where it could help people carry out cyberattacks. And some tests even show that A.I. models are becoming increasingly proficient at the key skills needed to produce biological and other weapons.” https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-regulate-transparency.html?smid=nytcore-ios-share&referringSource=articleShare
2 replies
1 recast
6 reactions

HH
@hamud
> “A.I. models are becoming increasingly proficient at the key skills needed to produce biological and other weapons.” We pump out millions of engineers who could outfit a backyard suicide drone, but nobody's bothered enough to do it. Nothingburger
2 replies
0 recast
1 reaction

@BestCryptoTwits
@bestcryptotwits
this could cause some really big problems very quickly
0 reply
0 recast
1 reaction

Air Queen Service 💫
@nmesoso
Thanks for this info. I'm looking out for the Gemini update, since I'm watching for an upgrade to the current model I integrated into our platform!
0 reply
0 recast
1 reaction