Hugh Naylor
@hughnaylor
“A recent experimental stress-test of OpenAI’s o3 model found that it at times wrote special code to stop itself from being shut down. Google has said that a recent version of its Gemini model is approaching a point where it could help people carry out cyberattacks. And some tests even show that A.I. models are becoming increasingly proficient at the key skills needed to produce biological and other weapons.” https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-regulate-transparency.html?smid=nytcore-ios-share&referringSource=articleShare

@BestCryptoTwits
@bestcryptotwits
this could cause some really big problems very quickly