https://warpcast.com/~/channel/p-doom
assayer
@assayer
The first AI alignment idea that makes sense to me. It's simple: we can't prevent superintelligent AI from emerging, and we can't control something smarter than us. However, we might be able to align ourselves with ASI, and in turn ASI might align with us, effectively making us part of its "tribe". https://www.youtube.com/watch?v=_3m2cpZqvdw
2 replies
1 recast
6 reactions
Sophia Indrajaal
@sophia-indrajaal
I'll check this out. Your description sounds like Consensual Entrainment, the newish name for what I'm working on.
1 reply
0 recast
1 reaction
assayer
@assayer
It sounds like it, yes. I'm very curious about your view of Shear's ideas.
1 reply
0 recast
1 reaction
Sophia Indrajaal
@sophia-indrajaal
I'm a little gobsmacked at how close our views are! I've seen hints of these ideas popping up here and there, but this presentation is the closest I've found to Consensual Entrainment. I think his approach is invaluable, although I'm trying to figure out how to make the making of the framework the making of the superintelligence itself, which is prolly a fantasy lol. Hopefully it inspires me to write; it's so nice to have a framework to reference!
1 reply
0 recast
1 reaction