https://warpcast.com/~/channel/notdevin
0 reply
0 recast
0 reaction
notdevin
@notdevin.eth
Gary Marcus argues that LLMs aren’t as good as a calculator because of their inconsistencies. I’m not convinced the tool Gary has in mind is the same tool an LLM is actually useful as. It could be true that his ideal model is absolutely better, but given how fearful his language is, I doubt it https://open.spotify.com/episode/7DGxH45T1S6iuVMdS88D1k?si=ZYMk-3XiRfmas9BxcSvVbQ&t=756&context=spotify%3Aplaylist%3A37i9dQZF1FgnTBfUlzkeKt
8 replies
0 recast
12 reactions
Minako🌸
@minako
What if the inconsistencies with LLMs are sorted out? I doubt there will be any comparison whatsoever
1 reply
0 recast
2 reactions
notdevin
@notdevin.eth
I don’t think you can sort out the inconsistencies and still have a useful model. What do you mean by “sort out” here?
1 reply
0 recast
0 reaction
Minako🌸
@minako
Oh, I thought the inconsistencies could be resolved… I didn’t know the usefulness of the model relies on its inconsistencies
1 reply
0 recast
1 reaction
notdevin
@notdevin.eth
Models are trying to predict the next tokens for a prompt, and almost by definition, prompts are missing some amount of data. Take “take that to the table over there” in the context of a restaurant: any of N humans at the table could be the one getting the food (that’s the data missing in my example).

Our spoken language isn’t a formal language in the technical sense, so there is no absolute mapping of words to meaning -> non-deterministic answers (see the sketch after this thread). RAG and chain of thought clearly help smooth over these issues.

This is why I don’t understand why Gary Marcus takes this view. It also relates to why I think AGI is a poor conjecture
1 reply
0 recast
0 reaction
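
A minimal sketch of the non-determinism point in the last cast, not code from the thread: the prompt, the candidate continuations, and their probabilities are invented for illustration and don’t come from any real model.

import random

# Toy next-token distribution for the prompt
# "take that to the table over there" in a restaurant context.
# The model can only guess what "that" refers to and who gets it,
# so probability mass is spread over several plausible continuations.
NEXT_TOKEN_PROBS = {
    "the food": 0.45,
    "the bill": 0.30,
    "the drinks": 0.15,
    "the menu": 0.10,
}

def sample_next_token(probs):
    """Sample one continuation from the distribution (temperature 1)."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Running the same prompt several times gives different answers,
# because the prompt underspecifies the missing context, not because
# the model is failing at arithmetic the way a broken calculator would.
for _ in range(5):
    print(sample_next_token(NEXT_TOKEN_PROBS))

Greedy decoding (always taking the highest-probability token) would make the output deterministic, but it wouldn’t supply the missing context, which is the point of the restaurant example.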