Chris Carlson
@chrislarsc.eth
I did some deep diving on how AI/LLMs/chat products work recently, and now I can see why this thread and the comparisons between "models" are completely nonsensical https://x.com/itsalexvacca/status/1927393691267690922
Brenner
@brenner.eth
Wdym?
Chris Carlson
@chrislarsc.eth
The UIs we use are not just a "model." They are: model + prompt + tools. Catching you in a lie has everything to do with the system prompt and the access to tools, not the model itself.
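A minimal sketch of what "model + prompt + tools" means at the API level, assuming the OpenAI Python SDK; the system prompt text, the web_search tool schema, the sample question, and the gpt-4o model name are illustrative placeholders, not what any shipped product actually uses.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A chat product does not send just the user's message to a bare model.
# The request bundles a system prompt and tool definitions, and all three
# pieces shape the answer the user sees.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "You are a careful assistant. Verify claims with tools before answering.",
        },
        {"role": "user", "content": "Am I lying when I say I never told you my budget?"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "web_search",  # hypothetical tool wired up by the product layer
                "description": "Search the web and return result snippets.",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        },
    ],
)
print(response.choices[0].message)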
Brenner
@brenner.eth
Disagree - the post-training of the model, which is an immutable part of the model, has a lot to do with it
Chris Carlson
@chrislarsc.eth
Sure, but use the APIs alone and you get *wildly* different answers to a question like this
Brenner
@brenner.eth
Source?
Chris Carlson
@chrislarsc.eth
*for something that leverages tools to answer this particular question, like o3. Just send the same question to the API alone. I guarantee it'll be extremely different
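A companion sketch of the bare-API call described above, again assuming the OpenAI Python SDK with the same placeholder question and model name: no system prompt and no tool definitions, so the model can only answer from its weights, which is the divergence Chris is pointing at.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Am I lying when I say I never told you my budget?"  # placeholder question

# Same question, but stripped of the product layer: no system prompt,
# no tool definitions, just the raw user message against the model.
bare = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": question}],
)
print(bare.choices[0].message.content)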