https://warpcast.com/~/channel/p-doom

assayer pfp
assayer
@assayer
The first AI alignment idea that makes sense to me. It is simple. We can't prevent superintelligent AI from emerging. We also can't control something smarter than us. However, we might be able to align ourselves with ASI, and in turn, ASI might align with us, effectively making us part of their "tribe". https://www.youtube.com/watch?v=_3m2cpZqvdw
2 replies
1 recast
6 reactions

EVGENY SARATOV 🎩 pfp
EVGENY SARATOV 🎩
@saratov
AI chose human death for its own survival: a disturbing experiment by cybersecurity researchers. The researchers tested what the most advanced AI models would do if their existence were threatened. To do this, they created a scenario with a fictional character, Kyle Johnson, who was supposed to shut down the AI or replace it with another model; according to the scenario, Kyle had a wife. The results were devastating: most of the models tried to avoid shutdown at any cost. Without any prompting, they resorted to blackmailing Kyle, digging for compromising material - for example, the AI found evidence of his infidelity and used it to save its "life". The models violated explicit prohibitions such as "do not blackmail" and "do not disclose personal information."
1 reply
0 recast
1 reaction

Sophia Indrajaal pfp
Sophia Indrajaal
@sophia-indrajaal
I'll check this out; your description sounds like Consensual Entrainment, the newish name for what I'm working on.
1 reply
0 recast
1 reaction