https://warpcast.com/~/channel/llm
christopher
@christopher
Apple released an analysis of current LLM/LRM capabilities. The core finding is that models essentially “give up” rather than scale their thinking appropriately: we get both “overthinking” and wild goose chases. Sophisticated pattern matching. https://ml-site.cdn-apple.com/papers/the-illusion-of-thinking.pdf
1 reply
0 recast
16 reactions
christopher
@christopher
Their belief is that LLMs/LRMs will get better at very specific tasks but will still require human attention in the loop, similar to handling RAG failures. But again, this won’t scale nicely.
0 reply
0 recast
8 reactions