ʞɔɐſ
@farcasterjack.eth
I wish an AI model running as an LLM would also spit out a number from 0-100 representing how certain it is that the answer it's giving is accurate. Surely at least one of the current models can identify queries that might make them prone to hallucinations? Right? Gemini speaks with such certainty when I search on Google, but it's often just flat-out wrong, and verifiably so just by scrolling to the top results below the AI summary. So weird to me that it presents all its answers with equal confidence
3 replies
1 recast
24 reactions
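
A rough version of this is already possible today: many LLM APIs expose per-token log-probabilities, which can be collapsed into a single 0-100 score. Below is a minimal Python sketch of that idea; `generate_with_logprobs` is a hypothetical stand-in for whatever API you use (stubbed here with canned data so the sketch runs standalone), and the score is simply the geometric mean of the token probabilities.

```python
import math
from typing import List, Tuple

def generate_with_logprobs(prompt: str) -> Tuple[str, List[float]]:
    """Hypothetical wrapper around an LLM API that returns the answer
    text plus the log-probability of each generated token.
    Stubbed with canned data so this sketch runs standalone."""
    return (
        "Paris is the capital of France.",
        [-0.02, -0.10, -0.01, -0.05, -0.30, -0.04, -0.02],
    )

def confidence_0_100(logprobs: List[float]) -> float:
    """Geometric mean of the token probabilities, scaled to 0-100."""
    if not logprobs:
        return 0.0
    mean_logprob = sum(logprobs) / len(logprobs)
    return 100.0 * math.exp(mean_logprob)

answer, lps = generate_with_logprobs("What is the capital of France?")
print(f"{answer}  [model confidence ~{confidence_0_100(lps):.0f}/100]")
```

The catch, and plausibly why nobody ships a number like this next to every answer: token probabilities measure how fluent the text is, not how true it is, so in practice such scores tend to stay high on exactly the hallucination-prone queries described above.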

agusti
@bleu.eth
interesting
0 replies
0 recasts
1 reaction