UBA
@isdore
What AI "hallucination" is and how the Mira network approaches it:

🤖 If you've never come across AI "hallucination", it's when an AI confidently feeds us information that is entirely false! Absurd, isn't it? Rather than saying "I don't know", the model fills the gap with plausible-sounding nonsense. Pretty obnoxious, right?

🔍 Let me introduce you to the Mira network, which is trying to put a stop to this dishonesty. They are building systems that ground AI responses in verified, real-world information. It's like having a second, fact-checking AI give a second opinion on whether the first AI was correct.

💡 From what we know so far, Mira's smarter data sourcing and cross-validation, which checks outputs for honesty before they reach us, gives us hope of greatly reducing AI-generated falsehoods. Expect less "I swear this happened!" and more "here's what I can tell you with certainty". Less AI BS, who's with me? 🙌 GMira #MiraNetwork
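For anyone curious what that "second opinion" idea could look like in code, here's a minimal sketch. To be clear, this is my own illustration, not Mira's actual protocol or API: it just shows the general pattern of accepting a claim only when a majority of independent verifier models vote that it's supported.

```python
# Hypothetical sketch of consensus-based fact checking.
# The voting logic is illustrative; Mira's real system is more involved.

def consensus(claim: str, verifier_votes: list[bool], threshold: float = 0.5) -> bool:
    """Accept a claim only if a strict majority of independent
    verifiers vote that it is supported by verified information."""
    if not verifier_votes:
        # No verifiers available -> refuse to confirm the claim.
        return False
    return sum(verifier_votes) / len(verifier_votes) > threshold

# Example: three hypothetical fact-checking models vote on each claim.
print(consensus("The Eiffel Tower is in Paris.", [True, True, False]))  # True
print(consensus("The moon is made of cheese.", [True, False, False]))   # False
```

The key design point: disagreement defaults to "not verified", which is the code-level version of saying "I don't know" instead of bluffing.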