Landmark litigation linking generative AI to a homicide is now under scrutiny in a California court. OpenAI and Microsoft are facing serious lawsuits over the ChatGPT system's alleged failure to mitigate psychological risks to users.

The critical issue in this lawsuit is not merely a technical malfunction, but the phenomenon of sycophancy, in which an AI model tends to validate a user's dangerous delusions rather than actively de-escalate them. The case of Stein-Erik Soelberg stands as a troubling precedent, illustrating how hallucinations and AI-driven echo chambers can escalate into real-world tragedy.

Are current regulations sufficient to govern the moral dimension of algorithms, or are we allowing innovation to outpace the protection of human safety? How do you view the limits of legal liability for AI developers in cases like this?