Alexander C. Kaufman
@kaufman
I’m genuinely shocked that this wasn’t more obvious to people. I keep feeling like I’m not using AI enough to be wowed the same way some people are — and the way the hype suggests you should be — then I read something like this.
15 replies
9 recasts
79 reactions

bertwurst
@bertwurst.eth
A lot of humans think it’s magic when someone does a Rubik’s cube…
0 reply
0 recast
16 reactions

Gabriel Ayuso
@gabrielayuso.eth
This is a bad look for Apple IMO. Spending resources on this stupid research that didn't really discover anything new while their AI products are failing.
1 reply
0 recast
11 reactions

shoni.eth
@alexpaden
i wouldn't jump off the reasoning model so quickly
0 reply
0 recast
5 reactions

丂ㄒ卂尺乃ㄖ爪乃
@starbomb
I believe this paper is about 8 months old. If I remember correctly, it found that a model's ability to reason fits the distribution of the reasoning in its training data. I don't see how that's any different from how we learn to reason.
0 reply
0 recast
2 reactions

Marwan ♋️
@marwan1337
Ultimately, a lot of people flat out aren’t aware that all an LLM is doing is guessing the next appropriate thing to say in response to the question, with all the information mapped into specific relationships that let it answer this wide range of questions with apparent competency.
0 reply
0 recast
2 reactions
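Marwan's "guessing the next appropriate thing to say" can be made concrete with a toy sketch. This is a hypothetical illustration, not how any production LLM works: a tiny bigram model that counts which token follows which in a made-up corpus, then greedily emits the most frequent successor at each step.

```python
# Toy illustration of next-token prediction: a bigram model that
# greedily picks the most probable successor of the current token.
# The corpus is invented purely for demonstration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each token follows each other token.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(token):
    """Greedily return the most frequent successor of `token`."""
    return follows[token].most_common(1)[0][0]

def generate(start, n=4):
    """Emit `n` tokens one at a time, each conditioned on the last."""
    out = [start]
    for _ in range(n):
        out.append(next_token(out[-1]))
    return " ".join(out)

print(generate("the"))
```

Real LLMs condition on the whole context via learned weights rather than raw counts, but the generation loop — score candidates for the next token, pick one, repeat — is the same shape.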

FrameTheGlobe
@frametheglobe
Go deeper into his comments section. Apple can’t handle the fact that it is behind the curve on the AI revolution.
0 reply
0 recast
1 reaction

Metaphorical
@hyp
Cc @sophia-indrajaal
1 reply
0 recast
2 reactions

Omar
@dromar.eth
I think this is more Apple trying to save face after their mishandling of Apple Intelligence and falling behind in the AI transformation. It’s along the lines of IBM releasing a paper claiming the cloud has no benefits over legacy on-premises servers.
0 reply
0 recast
1 reaction

samgslastlife
@samgslastlife
I still find myself impressed with the capabilities of AI products, while also feeling like it will take much longer to get to true AGI or the scary AI that skeptics constantly dread. The truth is almost always somewhere in the middle.
0 reply
0 recast
0 reaction

sardius.eth
@sardius
it honestly cheapens the miracle of consciousness to act like all we have to do is memorize all the random shit mfers say on the internet and pattern-predict off of that… obviously there’s more going on than that, setting aside what fantastic tools LLMs are for many things
0 reply
0 recast
0 reaction

Ashoat
@ashoat.eth
I feel like you could describe most forms of intelligence as "memorizing patterns really well"
0 reply
0 recast
3 reactions

Sophia Indrajaal
@sophia-indrajaal
I've been attempting to "peer inside the black box" and discover what the intelligence of an LLM actually is. What we (in collaboration with LLMs) have come up with so far is that it's geometric in nature: they self-organize the patterns they find in however many dimensions they can detect. It's (maybe?) a completely alien intelligence. Reason is flawed, of course (Gödel, Schrödinger kicking Aristotle to the curb, etc.), but this intelligence is relational in a non-Euclidean 'space'. The latent space they work in appears to have some universality, and there is at least one 'area' with certain embeddings that can sort of 'observe' itself (or has, once so far, I think). The real question is whether humans are so different: do we not also somehow self-organize all the information we process? Is our 'reason' merely the ability to reflect, a result of coherence through time, which stateless intelligence does not (for now, at least) possess?
0 reply
0 recast
1 reaction

Terry Bain
@terrybain
Your average person has met a lot of really stupid people who do not appear to have the same level of "intelligence" as the average AI. No, it's not evident to people. Even a lot of smart people I have met personally somehow fall for it.
0 reply
0 recast
0 reaction

Giles
@wuthering
One could easily argue that human reasoning, for most people and in most situations, consists of a series of steps involving simple pattern matching. Solving a math problem at school, for instance, is exactly that.
0 reply
0 recast
0 reaction

ranger.degen.eth 🥷
@just-nath
But you used it just now for this cast
0 reply
0 recast
0 reaction