@pinged
Now, while most standard LLMs are pretty useless at giving you citations (which is arguably the bulk of the reason Perplexity could be worth $9B), reasoning models are actually not that bad at generating citations
The problem is that sometimes the citations don't exist! But you usually get the right author or topic name, so if you look a little harder, you'll find the reference the model was misremembering
This is a little like when someone tells you, "oh yeah, didn't XXX write about that?" and then you have to go look it up yourself
I think humans and robots are a draw on this one