The Problem. Apocaloptimist: is the singularity beginning?

Recently, the documentary film The AI Doc: Or How I Became an Apocaloptimist was released, and it is the greatest film in the history of humanity. Director Daniel Roher filmed it as a future father, asking a simple question: what kind of world will my child live in?

The most important players in this global chess game, Brockman, Huang, Musk, and others, say that in 2026 we have already come close to AGI or are in its early stages.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." This famous statement was signed by leading AI experts, yet very little is still being done for AI safety. The AI Doc offers an estimate: at least 20,000 people are now working on building AGI, and only about 200 on its safety. Enormous resources are being poured into building a high-speed race car, and very little into its brakes and roll bars. (1/2)
We need more motivation for the people who stand guard over our future.

What the International AI Safety Report 2026 says:
— AI today already gives non-specialists instructions for creating biological weapons.
— The race between corporations and states forces safety to be sacrificed for speed.
— The behavior of these systems is becoming increasingly difficult to predict or interpret.

Research from DeepMind and Anthropic on their AI models, as well as documentation for OpenAI's models, directly confirms this. Geoffrey Hinton, one of the chief architects of modern AI, puts p(doom), the probability that AGI destroys humanity, at 10–20%. The average estimate among engineers is at least 5%. For comparison: in aviation, a plane with even a 1% crash risk would never take off. We are flying at full speed.

Right now, safety is entrusted to those who profit from it, or who are fighting for world domination. We need an independent voice.

What solution do you see? Let's discuss the options in the next post. (2/2)
Solution: d/acc

Humanity is facing a coordination failure that is pushing us into a Moloch trap. Everyone understands that safety matters, yet capabilities keep accelerating because no one wants to lose the race. In the end, the system itself drives us toward risk.

If AI progress is exponential, safety cannot remain linear. Today:

Capabilities ∝ e^t
Safety funding ∝ t

That gap is becoming existential.

d/acc is a path forward: accelerating defensive mechanisms specifically, so that the growth of safety keeps pace with the growth of AI models. A practical mechanism is quadratic funding. Its idea is simple: many small signals matter more than one large one. Under quadratic funding, a project's match is proportional to the square of the sum of the square roots of its individual contributions, so many small donations carry far more weight than one large donation of the same total. If 100,000 people donate $10 each, that raises $1 million. But more importantly, it sends a stronger signal than a single $1 million check. Quadratic funding measures the breadth of public support. It shows what truly matters to many people, not only to those who hold concentrated capital. (1/2)
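To make the breadth-over-size intuition concrete, here is a minimal Python sketch of the standard quadratic funding formula (Buterin, Hitzig, and Weyl). The function name and the example figures are illustrative assumptions, not something taken from the posts above:

```python
from math import sqrt

def qf_signal(contributions):
    """Ideal quadratic funding: a project's funding signal is the
    square of the sum of the square roots of its contributions."""
    return sum(sqrt(c) for c in contributions) ** 2

# 100,000 people donating $10 each...
broad_base = qf_signal([10.0] * 100_000)   # (100_000 * sqrt(10))**2 = 1e11
# ...versus one donor writing a single $1,000,000 check.
single_whale = qf_signal([1_000_000.0])    # (sqrt(1e6))**2 = 1e6

# Both raise the same $1 million directly, but the broad base
# produces a signal 100,000x stronger.
print(broad_base / single_whale)           # ~100000.0
```

In real deployments the raw signal is typically normalized so that total payouts equal a fixed matching pool, so the absolute numbers shrink; what survives is the relative ranking, which is exactly the broad public signal the post is arguing for.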
This is how we can:
– accelerate the growth of safety
– support open tools and research
– direct attention toward protection, not only speed

d/acc means accelerating what protects civilization. AI can be the engine. But humans must drive.

Behind all these formulas, manifestos, and quadratic functions lies more than dry calculation. This is about the future of our children. We love humanity, and we believe that a child walking to school should be able to count on a safe future.

And the people contributing to solving this problem are real heroes. They should not merely be heard. They should be rewarded. Because they are not protecting code. They are protecting life. (2/2)