Vitalik Buterin
@vitalik.eth
My response to AI 2027: https://vitalik.eth.limo/general/2025/07/10/2027.html
The AI 2027 post is high quality, and I encourage people to read it at https://ai-2027.com/
I argue that a misaligned AI will not be able to win nearly as easily as the AI 2027 scenario assumes, because it greatly underrates our ability to protect ourselves, especially given the (pretty magical) technologies that the authors admit will be available in 2029 in their scenario.

𒂭_𒂭
@m-j-r
> My view is that the least intrusive and most robust way to slow down risky forms of AI progress likely involves some form of treaty regulating the most advanced hardware. Many of the hardware cybersecurity technologies needed to achieve effective defense are also technologies useful in verifying international hardware treaties, so there are even synergies there. That said, it's worth noting that I consider the primary source of risk to be military-adjacent actors, and they will push hard to exempt themselves from such treaties; this must not be allowed, and if it ends up happening, then the resulting military-only AI progress may increase risks.

I think depending on diplomacy or politics is profoundly riskier than subsidizing demand for formal tamper-evident hardware standards in the open market. After all, we have historical precedents like Operation Merlin and the recent Fordow controversy: nuclear nonproliferation treaties are not absolute deterrents. In any case, the imminent acceleration of materials synthesis, manufacture, and packaging may be so widespread that no sphere of influence can reliably detect or economically quarantine a defecting nation-state. Also, imho, there isn't an ascendant global world order of mutual trust and cooperation, and 2027-2030 can easily be unrecognizable, especially if any regional conflict manifests and/or escalates.