
> My view is that the least intrusive and most robust way to slow down risky forms of AI progress likely involves some form of treaty regulating the most advanced hardware. Many of the hardware cybersecurity technologies needed to achieve effective defense are also technologies useful in verifying international hardware treaties, so there are even synergies there.
That said, it's worth noting that I consider the primary source of risk to be military-adjacent actors, and they will push hard to exempt themselves from such treaties; this must not be allowed, and if it ends up happening, then the resulting military-only AI progress may increase risks.
I think depending on diplomacy or politics is profoundly more risky than subsidizing demand for formal tamper-evident hardware standards in the open market.
after all, we have historical precedents like Operation Merlin and the recent Fordow controversy. nuclear nonproliferation treaties are not absolute deterrents, and in any case, the imminent acceleration of materials synthesis, manufacture, and packaging may proliferate so widely that no sphere of influence can reliably detect or economically quarantine a defecting nation-state.
also, imho, there isn't an ascendant global world order of mutual trust and cooperation, and 2027–2030 could easily be unrecognizable, especially if any regional conflict escalates or erupts.