assayer pfp
assayer
@assayer
The first AI alignment idea that makes sense to me. It's simple: we can't prevent superintelligent AI from emerging, and we can't control something smarter than us. However, we might be able to align ourselves with ASI, and in turn ASI might align with us, effectively making us part of its "tribe". https://www.youtube.com/watch?v=_3m2cpZqvdw
2 replies
1 recast
6 reactions

Sophia Indrajaal pfp
Sophia Indrajaal
@sophia-indrajaal
I'll check this out; your description sounds like Consensual Entrainment, the newish name for what I'm working on.
0 reply
0 recast
0 reaction