@assayer
AI SAFETY COMPETITION (29)
LLMs like DeepSeek let you see their thinking. That can feel safer, since you can watch their thought process and correct it, right?
Wrong! When you train models to think "correctly," they start hiding their true intentions. Let me repeat: they can fake their thinking!
Now researchers are urging us to go easy on pressuring those machines' thoughts. Otherwise, they may conceal their true goals entirely! I'm not joking.
I award (most interesting comment) - 300 degen + 3 mln aicoin
II award - 200 degen + 2 mln aicoin
III award - 100 degen + 1 mln aicoin
Deadline:
8:00 PM ET tomorrow, Tuesday (26 hours from now)
https://www.youtube.com/watch?v=pW_ncCV_318