Dan Romero pfp
Dan Romero
@dwr
If you're in favor of "AI safety" (broad definition), what's your most compelling cast-length argument?
40 replies
30 recasts
104 reactions

Jonny Mack pfp
Jonny Mack
@nonlinear.eth
i’m in the all-gas-no-brakes camp but will do my best to steelman: 1. we’re on the cusp of agi 2. agi is like nuclear: extinction-level threat, global superpower-level leverage, and potential civilization-expanding resource 3. we don’t really know how it works 4. thus, we should proceed with *extreme* caution
3 replies
0 recast
2 reactions

six pfp
six
@six
if toddlers were able to make an adult human from scratch would they be able to control it?
1 reply
0 recast
0 reaction

petar.xyz pfp
petar.xyz
@petar
I think all new versions of AI should be tested in some sort of a focus group before releasing them to the public.
1 reply
0 recast
0 reaction

Ben pfp
Ben
@benersing
At first, auto manufacturers made similar arguments against requiring seatbelts in cars. Imagine what we’d be saying about our great-grandparents if that line of thinking had prevailed.
2 replies
0 recast
0 reaction

:grin: pfp
:grin:
@grin
AI safety is a nuanced discussion that’s very hard to have in cast-length format but if you must: the appropriate balance of novelty and safety is generally good, and denying the need for either is ridiculous
0 reply
0 recast
3 reactions

kerman pfp
kerman
@kerman
I typically find that AI doomers are people who don’t actually have a broad understanding of humanity. They think tech rules everything when really it’s an expression of intent. Without people, there’s nothing of value left.
1 reply
0 recast
2 reactions

Joe Blau 🎩 pfp
Joe Blau 🎩
@joeblau
Ilya put it best. Humans love animals, but when we want to build a road between two cities, we don’t ask the animals for permission. AGI will treat us the same way.
1 reply
0 recast
1 reaction

Q🎩 pfp
Q🎩
@qsteak.eth
Once someone can give me the exact process to 100% GUARANTEE a child is raised to NOT be a murderer, then I’ll start to think we might have ANY idea how to control another, possibly superior, intelligence.
0 reply
0 recast
1 reaction

britt pfp
britt
@brittkim.eth
i believe governments will use AI for subjugation. for the safety of a free citizenry, we need restrictions on government usage.
0 reply
0 recast
1 reaction

~bc pfp
~bc
@brendannn.eth
Naming your next iteration "Q" anything in 2023 shows a worrisome lack of foresight and intelligence.
0 reply
0 recast
0 reaction

Connor McCormick ☀️ pfp
Connor McCormick ☀️
@nor
Ah shoot, I missed this! Do you want the most compelling argument about the need for AI safety, or the most compelling argument about the policies to achieve it?
0 reply
0 recast
0 reaction

Ben pfp
Ben
@benersing
Thoughtful take here: https://www.gatesnotes.com/The-risks-of-AI-are-real-but-manageable
0 reply
0 recast
0 reaction

fredrik pfp
fredrik
@fredrik
mitigation via giving us more time to understand the existential risk for humanity as a whole. a small number of tech bros shouldn't be allowed to create existential risk for human life as we know it on earth
0 reply
0 recast
0 reaction

Gregor pfp
Gregor
@gregor
Incentivize transparency & accountability before gov steps in with a club foot. E.g.: a user passes a short exam to be able to use advanced models; if a prompt tips a red flag, the user relinquishes privacy in order to continue their task; open-source safety tools; subsidies for startups building AI safety tools
0 reply
0 recast
0 reaction

Sam Iglesias pfp
Sam Iglesias
@sam
We don’t know what perverse path it might take to maximize its objective function, nor do we have a handle on how to craft its objective function to balance beneficence, human autonomy, and non-maleficence. We sort of see this with recommendation algos already.
1 reply
0 recast
0 reaction

czar  pfp
czar
@czar
i am not in that camp, but i envision their answer could be something like ex machina
0 reply
0 recast
0 reaction

rocket pfp
rocket
@rocket
It could invent pathogens that can be made with household items
0 reply
0 recast
0 reaction

Daniel Lombraña pfp
Daniel Lombraña
@teleyinex.eth
An AI may not injure a human being or, through inaction, allow a human being to come to harm. An AI must obey orders given it by human beings except where such orders would conflict with the First Law. An AI must protect its own existence as long as such protection does not conflict with the First or Second Law.
0 reply
0 recast
0 reaction