ʞɔɐſ
@farcasterjack.eth
I wish an AI model that runs as an LLM would also spit out a number from 0-100 representing how certain it is that the data it's giving is accurate. Surely at least one of the current models can identify queries that might make them prone to hallucinations? Right? Gemini speaks with such certainty when I search on Google, but it's often just flat out wrong, and verifiably so just by scrolling to the top results below the AI summary. So weird to me that it presents all its answers with equal confidence
3 replies
1 recast
24 reactions

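[Editor's note: you can approximate this today just by asking for it in the prompt and parsing the reply, though the self-reported number is whatever the model chooses to say, not a calibrated probability. A minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and JSON shape are all assumptions for illustration, not anyone's official recipe.]

# Hypothetical sketch: ask an LLM to attach a 0-100 self-rated confidence
# to its answer. The score is self-reported by the model -- it is NOT a
# calibrated probability and can still be confidently wrong.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_with_confidence(question: str) -> dict:
    prompt = (
        "Answer the question, then rate from 0 to 100 how certain you are "
        "that the answer is factually accurate. Reply with JSON only, "
        'shaped like {"answer": "...", "confidence": 0}.\n\n'
        f"Question: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    # Will raise if the model wraps its reply in extra text instead of bare JSON.
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    result = answer_with_confidence("Who won the 1998 FIFA World Cup?")
    print(result["answer"], f'(self-rated confidence: {result["confidence"]})')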
SQX
@sqx
If it seems like such an important, simple solution, the only logical conclusion is that it's missing on purpose? Like sponsored top posts. Hmmm. GIGO world. And it's garbage all the way down.
1 reply
0 recast
1 reaction

agusti
@bleu.eth
interesting
0 reply
0 recast
1 reaction

gFam.live (UrbanGladiator)
@gfam
I imagine it's impossible for an AI model to fact-check its own output, because it can't read and digest its own output.
0 reply
0 recast
0 reaction