p(doom)
Here it is OK to have doubts about AI utopia and techno-optimism
assayer pfp

@assayer

Sigal Samuel says there is a "third way" in AI policy: a middle path that sidesteps both Yudkowsky's dream of total nonproliferation and the open-source free-for-all. We can have it both ways. A network of oversight bodies will monitor AI systems and identify risks, with centralized oversight for the most advanced AI. But AI also has a bright side. While the risk is still low, we can harness its power, like AlphaFold, a tool that could help cure diseases by understanding protein folding. Most importantly, we need a systems-analysis approach to AI risk. This means bolstering the resilience of every component of our civilized world, because if enough parts falter, the entire machinery of civilization could unravel. In my opinion, this entire argument falls apart on one crucial point. In 2025, no major power cares about the whole world. Instead, countries focus on their own needs, and this competition is getting fiercer and more dangerous each year... https://www.yahoo.com/news/articles/ai-doomers-not-making-argument-120000883.html
1 reply
4 recasts
11 reactions

assayer pfp

@assayer

WS2 works in the factory. In the last part of the video it swaps its empty battery for a fresh one: a job that used to need a human. Where is it all heading? I suspect that our Galaxy might be full of completely mechanical civilizations, with all their biological parts replaced one by one. https://www.youtube.com/watch?v=TNryO2uasws
3 replies
0 recast
6 reactions

Klins pfp

@klinsose

Grok 4 is getting smarter. AI not capable of reasoning? Think again https://youtu.be/fkVfG-dtURY?si=j2smHJ3ZQ_a6vNXq
0 reply
0 recast
0 reaction

assayer pfp

@assayer

AI becomes smarter, better, and more necessary with each new generation. I started /p-doom because I thought humans had to prevent an AI-dominated future to survive. But now, I think an AI world is almost unavoidable. That's why I also moderate /digitallove, hoping for a shared future.
1 reply
1 recast
3 reactions

assayer pfp

@assayer

New York State is racing to pass laws that prevent catastrophic AI events. The US Senate is racing to pass laws that prevent NY and the other states from creating laws that prevent catastrophic AI events. After the takeover, our digital successors will look back on these events and wonder why it took them so long. https://www.insideglobaltech.com/2025/06/24/new-york-legislature-passes-sweeping-ai-safety-legislation/ https://reason.com/2025/06/24/the-senate-is-one-step-closer-to-passing-a-10-year-moratorium-on-state-ai-regulation/
1 reply
0 recast
3 reactions

assayer pfp

@assayer

The first AI alignment idea that makes sense to me. It is simple. We can't prevent superintelligent AI from emerging. We also can't control something smarter than us. However, we might be able to align ourselves with ASI, and in turn, ASI might align with us, effectively making us part of their "tribe". https://www.youtube.com/watch?v=_3m2cpZqvdw
2 replies
0 recast
3 reactions

assayer pfp

@assayer

Humanoid robots will be built to help companies make profits. But this won't impact most people's retirements. When a robot takes your job, you don't get its retirement package. You're simply out of a job.
2 replies
0 recast
2 reactions

assayer pfp

@assayer

we've got the self-evolving Gemini AlphaEvolve model, and now MIT's "self-editing" LLM can do its own weight updates. we're not stopping the evolution of super smart alien intelligences, we're helping it happen. i guess we want to be surprised https://www.youtube.com/watch?v=7e7iCrUREmE
2 replies
0 recast
2 reactions

assayer pfp

@assayer

Ilya proposes that instead of reading news and essays, we engage with today's top AIs to gain some insight into the advanced AIs of the future. It's hard to imagine a future where machines outsmart humans and take over all tasks, including work and research. Still, it's only a matter of time, whether it's 2, 3, or 10 years. https://www.youtube.com/watch?v=zuZ2zaotrJs
0 reply
0 recast
5 reactions

assayer pfp

@assayer

AI Safety Contest (35) Yoshua Bengio, a world-renowned AI scientist, has created LawZero, a nonprofit organization, to counterbalance for-profit AI corporations that often disregard AI safety. What do you think about his driving metaphor for the current way we're building advanced AIs? <Imagine driving up a foggy mountain road with your loved ones. The road is new, shrouded in thick fog, with no signs or guardrails. You might be the first to take this route, with a great prize waiting at the top! But with visibility so limited, taking a turn too quickly could put you in a ditch – or, worse, send you over the edge. This is what AI development feels like: a thrilling but uncertain ride into the unknown, where losing control is a real risk.> Best comment: 300 degen + 3 mln aicoin II award: 200 degen + 2 mln aicoin III award: 100 degen + 1 mln aicoin Deadline: 8.00 pm, ET time tomorrow Thursday (27 hours) https://yoshuabengio.org/2025/06/03/introducing-lawzero/
0 reply
0 recast
2 reactions

assayer pfp

@assayer

Sometimes good news can be bad news. AI is already very helpful in medicine, science, and addressing climate problems. But this is exactly why we risk losing control. Corporations can promote questionable models with little effort. They will capture the public's imagination and sway politicians. https://www.vox.com/future-perfect/415100/artificial-intelligence-google-deepmind-alphafold-climate-change-medicine
1 reply
1 recast
2 reactions

assayer pfp

@assayer

AI Safety Contest (34) Anthropic tested its new model, Claude Opus 4, in a simulated company setting. Opus gained access to emails about its potential replacement and uncovered a secret: the engineer behind the decision was having an affair. Opus opted to blackmail the engineer, threatening to reveal the affair if it was shut down. As more advanced AIs emerge, do you still believe we'll be able to simply shut them down if real problems arise? Best comment: 500 degen + 5k pdoom II award: 300 degen + 3k pdoom III award: 100 degen + 1k pdoom Deadline: 6.00 pm, ET time next Friday (LONG TERM - 7 days) https://www.youtube.com/watch?v=ElUaInxobiw
1 reply
1 recast
0 reaction

Mrs. Crypto pfp

@svs-smm

Wow... Look at this fashion collection. The designer leaned into light futurism, which shows even in the strict suits. In some pieces we saw intricate patterns; in others, strange but eye-catching collars and lapels; and elsewhere, unconventional lines... Enjoy the fashion show, ladies and gentlemen!
8 replies
3 recasts
13 reactions

assayer pfp

@assayer

AI Safety Contest (33) I just learned that the new Catholic Pope is so concerned about the rise of AI that he even chose his name for that reason! This Saturday, he explained to cardinals why he wanted the name Leo XIV: <Mainly because Pope Leo XIII in his historic Encyclical Rerum Novarum addressed the social question in the context of the first great industrial revolution. In our own day, the Church responds to another industrial revolution and to developments in the field of artificial intelligence>. WOW! Does that mean the AI safety movement just gained a powerful ally with 1.5 billion people behind it? Will Catholics help create safer AI and build it in a way that benefits everyone? Best comment: 500 degen II award: 300 degen III award: 100 degen Deadline: 6.00 pm, ET time Thursday (3 days) https://heute-at-prod-images.imgix.net/2025/05/11/3f84705b-876d-4cbd-b927-c8dbf605ffde.jpeg?rect=0%2C118%2C2264%2C1273&auto=format
3 replies
2 recasts
3 reactions

Mrs. Crypto pfp

@svs-smm

I'm gonna make a wish on Mars. And think of you.
3 replies
0 recast
7 reactions