Independent Scientist in Surface & Interface Chemistry | Radiation Nerd | Climate Activist | AI Risk · Info-ecology · Performative War | DeSci Enthusiast | Writer
Reality doesn’t care about your consensus. It’s governed by chemistry and entropy. I’m Siarhei Besarab. While the outside world hallucinates along with exhausted transformers, I manage the physical fallout: nuclear safety, toxicology and local geoengineering. I work on the hard stack of survival in an era of climate stress, technofeudalism and performative wars. Farcaster is not a marketing outpost for me but a live stress test. I’m evaluating this protocol strictly as a terminal for decentralized research funding and verifiable peer-to-peer donations. The legacy academic machinery is failing. I’m looking for a persistent alternative that can sustain high-stakes research without institutional interference: a functional funding stack, not engagement. If the architecture holds, my involvement scales; if it doesn’t, the experiment ends here.
I’m called an AI skeptic, but really I’m skeptical of hype. Today’s LLMs are like fast undergrads: great at grunt work, bad at original thinking. They help with data cleaning, boilerplate code and translation: useful “extra hands,” not new minds. The real risk isn’t Terminator, it’s Idiocracy. Companies cut juniors because “AI can do the junior work.” But doing that work is exactly how juniors become seniors. Remove that layer and you end up with prompt-operators who can’t fix systems when they break. Meanwhile we outsource more cognition to models and spend the freed time doomscrolling. Brains atrophy; cognitive debt grows. Add a business model drifting toward AdGI, LLMs optimized for ads and manipulation, and you get a bubble that can pop hard, even without any AGI. https://2digital.news/a-technooptimist-on-what-will-finally-burst-the-ai-bubble/
A Nature study shows human+AI teams crush generative tasks but worsen decision-making: AI boosts creativity yet scrambles judgment. MIT EEG data backs it up: people writing with ChatGPT showed lower brain activity, weak recall and little ownership of their work vs. unaided thinking (Google search sits in the middle). Cognitive offloading feels efficient, but it quietly erodes critical faculties and agency. This “cognitive debt” compounds: short-term ease, long-term decline in reasoning, creativity and resistance to manipulation. The Matthew Effect kicks in: strong thinkers use AI as leverage, everyone else atrophies. We should treat AI as a generative exoskeleton, not a decision-making prosthesis. Use it to explore ideas, not to replace the hard work of thinking. https://www.linkedin.com/posts/steanlab_cognitivedebt-ai-artificialintelligence-activity-7343286850259673088-nabN
My 2026 tech forecasts:
Elite biology-as-a-service vs. chatbot health for the rest.
AI goes from hype to control/war tech.
Space: exploration → battlefield.
Belarusian science → Russia’s back office.
Full read: https://belsat.eu/90661091/pragnozy-na-2026-god-ad-navukoca (in Belarusian)