ʞɔɐſ
@farcasterjack.eth
An AI model that runs as an LLM should also spit out a number from 0-100 representing how certain it is that the data it's giving is accurate. Surely at least some of the current models can identify queries that might make them prone to hallucinations? Right? Gemini seems to speak with such certainty when I search on Google, but it's often just flat-out wrong, and verifiably so just by scrolling to the top results below the AI summary. So weird to me that it presents all its answers with equal confidence
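(For what it's worth, the crude version of this is easy to sketch: prompt the model to append a self-rated score and parse it back out. A minimal Python sketch follows, where `call_llm` is a hypothetical stand-in for whatever model API you're using, with the caveat that self-reported confidence is known to be poorly calibrated, which is arguably the thread's whole point.)

```python
import re

CONFIDENCE_PROMPT = (
    "Answer the question, then on a new final line write "
    "'CONFIDENCE: <0-100>' for how certain you are the answer is factually accurate.\n\n"
    "Question: {question}"
)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever chat model/API is actually in use."""
    raise NotImplementedError

def answer_with_confidence(question: str) -> tuple[str, int | None]:
    """Ask for an answer plus a self-reported 0-100 confidence score and parse both."""
    raw = call_llm(CONFIDENCE_PROMPT.format(question=question))
    match = re.search(r"CONFIDENCE:\s*(\d{1,3})", raw)
    confidence = min(int(match.group(1)), 100) if match else None
    # Strip the trailing confidence line so only the answer text remains.
    answer = re.sub(r"CONFIDENCE:\s*\d{1,3}\s*$", "", raw).strip()
    return answer, confidence
```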
3 replies
0 recast
15 reactions
SQX
@sqx
If it seems like an important, simple solution, the only logical conclusion is that it's missing on purpose? Like sponsored top posts. Hmmm. GIGO world. And it's garbage all the way down.
1 reply
0 recast
1 reaction
ʞɔɐſ
@farcasterjack.eth
I see that being plausible, but I'm talking about when Google's AI summary is just wrong for no good reason. For example, yesterday it told me there were four locations of a restaurant in my state, when in reality it has no locations here
1 reply
0 recast
0 reaction
SQX
@sqx
Oh yeah. I've had some hallucinations. Which is why I find it so ironic that a program created with logic and math can't do math. And then when you tell it, they're like, "Oh yeah. You're right." Bitch. You should know! You're the AI! 🤖
0 reply
0 recast
0 reaction