Tarun Chitra
@pinged
Part III: Escaping from Reasoning Model Purgatory ~~~ The most interesting thing about Chain of Thought (CoT) reasoning is that, unlike a vanilla hallucinating LLM, CoT models convincingly assert falsehoods; the same mechanism that helps them avoid hallucinating also makes them dig in their heels (like a stubborn human).
8 replies
12 recasts
84 reactions
Summercloud
@summercloud
You may be interested in the following link (and organisation): https://x.com/TransluceAI/status/1912552046269771985 Another response to your article said "my teenager does the same thing". I think the rapid development of AI parallels the development of a child. We've gone past the toddler stage: learning how to talk, read, ride a bike. It's now probably equivalent to a 12-17 year old, developing social awareness, confidence, and literary and mathematical skills. Still immature. That child (or AI agent) learns by observation and role models. Are the Internet and our society a good parent? ** btw I'm enjoying your articles!
0 replies
0 recasts
1 reaction