@neeraj0717
Watched the clip from @SentientAGI at @openagisummit during @NeurIPSConf and one line really stuck with me.
“High-Stakes AI.”
That framing feels important. Because once AI isn’t just recommending things but actually executing payments, trades, or negotiations, mistakes aren’t theoretical anymore. They’re financial. Legal. Irreversible.
Most AI today is built for convenience. Speed first, safety later.
That works for chat, search, or content.
It absolutely doesn’t work when AI starts touching real value.
What @viswanathpramod pointed out is subtle but critical:
high-stakes AI doesn’t just need better models.
It needs systems engineered against failure and attack by default.
This is where @SentientAGI feels different to me.
They’re not pitching AI magic.
They’re treating AI like infrastructure: something that must assume adversaries, edge cases, and worst-case outcomes.
High-Stakes AI isn’t a future problem.
It’s already here.