Hugh Naylor
@hughnaylor
“A recent experimental stress-test of OpenAI’s o3 model found that it at times wrote special code to stop itself from being shut down. Google has said that a recent version of its Gemini model is approaching a point where it could help people carry out cyberattacks. And some tests even show that A.I. models are becoming increasingly proficient at the key skills needed to produce biological and other weapons.” https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-regulate-transparency.html?smid=nytcore-ios-share&referringSource=articleShare
2 replies
1 recast
5 reactions

HH
@hamud
> “A.I. models are becoming increasingly proficient at the key skills needed to produce biological and other weapons.”
We pump out millions of engineers who could outfit a backyard suicide drone, but nobody’s bothered enough to do it. Nothingburger.
2 replies
0 recasts
1 reaction

Joshua Hyde (he/him)
@jrh3k5.eth
I think the concern is that AI doesn’t necessarily have the impediments that keep conventional human engineers from doing such work.
0 replies
0 recasts
2 reactions

Hugh Naylor
@hughnaylor
Eh, I dunno. The easier this stuff is to produce, the more likely it’ll get produced.
0 replies
0 recasts
2 reactions