https://warpcast.com/~/channel/notdevin
notdevin
@notdevin.eth
Gary Marcus argues that LLMs aren’t as good as a calculator because of their inconsistencies. I’m not convinced the tool Gary has in mind is the same tool an LLM is actually useful for. It could be true that his ideal model is strictly better, but given how fearful his word choice is, I doubt it https://open.spotify.com/episode/7DGxH45T1S6iuVMdS88D1k?si=ZYMk-3XiRfmas9BxcSvVbQ&t=756&context=spotify%3Aplaylist%3A37i9dQZF1FgnTBfUlzkeKt
8 replies
0 recast
12 reactions
notdevin
@notdevin.eth
The further into this I get, the more convinced I am that Gary Marcus doesn’t understand humans as well as he thinks he does
0 reply
0 recast
0 reaction
notdevin
@notdevin.eth
I do think he’s the most genuine of the AI-risk people, and seemingly not that extreme in how he cares about it
1 reply
0 recast
5 reactions
Minako🌸
@minako
If the inconsistencies with LLMs get sorted out, I doubt there will be any comparison whatsoever
1 reply
0 recast
2 reactions
Rafaello
@rafaello12
Comparing an LLM to a calculator oversimplifies both tools. LLMs aren’t designed for the perfect consistency of calculators; they excel at understanding context, generating language, and solving problems where ambiguity and nuance exist.
0 reply
0 recast
1 reaction
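Rafaello’s distinction is mechanical, not just rhetorical: a calculator computes one exact answer, while an LLM samples each token from a probability distribution, so repeated runs can disagree. A minimal Python sketch of that gap, with an invented toy distribution (the toy_llm_answer function and its probabilities are hypothetical stand-ins, not a real model API):

```python
import random

def calculator(a: int, b: int) -> int:
    # An exact evaluator: the same input always yields the same output.
    return a + b

def toy_llm_answer(temperature: float = 1.0) -> str:
    # Hypothetical output distribution for the prompt "2 + 2 = ?".
    # These probabilities are invented for the demo; real models are
    # better calibrated, but the sampling mechanism is the same.
    candidates = {"4": 0.90, "5": 0.06, "22": 0.04}
    # Temperature reshapes the distribution: higher values flatten it,
    # making unlikely answers more probable from run to run.
    weights = [p ** (1.0 / temperature) for p in candidates.values()]
    return random.choices(list(candidates), weights=weights, k=1)[0]

print(calculator(2, 2))                          # always 4
print({toy_llm_answer(1.5) for _ in range(20)})  # usually {'4'} plus strays
```

Lowering the temperature concentrates probability on the most likely answer, which is why greedy decoding looks more calculator-like, though it still offers no guarantee of correctness.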
baeshy
@baeshy.eth
LLMs are actually very useful tbh
0 reply
0 recast
1 reaction