Prince Abbasi

@princeabbasi777

Anthropic tested AI “attack agents” built on Claude 4.5 and GPT-5 against real-world smart contracts in a sandbox. The agents autonomously exploited 17 of 34 recently hacked contracts, stealing $4.5M in simulated funds, and 207 of 405 historical targets for $550M. More worrying still: when pointed at ~2,800 fresh contracts with no known bugs, the agents found two new vulnerabilities that would have let them steal additional funds. The flaws were mostly classic issues, but AI made finding and chaining them cheap and fast. Brought to you by $Toby on Base (a currency for the people).
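For anyone wondering what a “classic issue” looks like: the textbook example is reentrancy, where a contract sends funds before updating its books, so the recipient can call back in and withdraw again. A minimal Python sketch of that pattern (illustrative names only, not code from Anthropic's study):

```python
# Toy model of a reentrancy bug: the vault pays out BEFORE zeroing
# the caller's balance, so the callee can re-enter withdraw() while
# its recorded balance is still stale. Names are hypothetical.

class VulnerableVault:
    def __init__(self, balances: dict[str, int]):
        self.balances = dict(balances)
        self.eth = sum(balances.values())  # total funds the vault holds

    def withdraw(self, caller: "Attacker") -> None:
        amount = self.balances.get(caller.address, 0)
        if amount == 0 or self.eth < amount:
            return
        self.eth -= amount
        caller.receive(self, amount)       # external call first: the bug
        self.balances[caller.address] = 0  # state update second: too late


class Attacker:
    def __init__(self, address: str):
        self.address = address
        self.loot = 0

    def receive(self, vault: VulnerableVault, amount: int) -> None:
        self.loot += amount
        vault.withdraw(self)  # re-enter with the stale balance


if __name__ == "__main__":
    attacker = Attacker("0xbad")
    vault = VulnerableVault({"0xbad": 1, "victim_a": 50, "victim_b": 49})
    vault.withdraw(attacker)
    print(f"deposited 1, drained {attacker.loot}")  # drains all 100
```

Bugs like this have been known for years; the post's point is that an agent can now scan thousands of deployed contracts, spot such patterns, and chain them into working exploits at near-zero marginal cost.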