@forallhumanity
The Problem. Apocaloptimist: is the singularity beginning?
Recently, the documentary film The AI Doc: Or How I Became an Apocaloptimist was released—and it is the greatest film in the history of humanity.
Director Daniel Roher made it as an expectant father, asking a simple question: what kind of world will my child live in?
The biggest players in this global chess game (Brockman, Huang, Musk, and others) said in 2026 that we have either come very close to AGI or are already in its early stages.
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
This famous statement was signed by leading AI experts, yet very little is still being done on AI safety.
The AI Doc offers an estimate: at least 20,000 people are now working on creating AGI, and only about 200 on its safety. Enormous resources are being poured into building a high-speed race car, and very little into its brakes and roll bars. (1/2)