@monteluna
Karpathy released this interesting microgpt code, which gives a look at how LLMs work under the hood.
The interesting part: "What’s the deal with “hallucinations”? The model generates tokens by sampling from a probability distribution. It has no concept of truth, it only knows what sequences are statistically plausible given the training data."
https://karpathy.github.io/2026/02/12/microgpt/
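The quoted point about sampling is easy to see in code. Here's a minimal sketch (my own illustration, not from the microgpt post): a softmax turns raw logits into a probability distribution, and the next token is picked by sampling from it, so the model outputs whatever is statistically plausible, with no notion of truth.

```python
import math
import random

def softmax(logits):
    # Turn raw scores into a probability distribution that sums to 1.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, rng=random.random):
    # Sample an index in proportion to its probability.
    # Note: nothing here checks correctness -- only plausibility.
    probs = softmax(logits)
    r = rng()
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# A token with a much higher logit is almost always chosen,
# but low-probability (possibly false) tokens can still be sampled.
print(sample_token([0.1, 5.0, 0.3]))
```

Run it a few times and you'll occasionally get index 0 or 2 even though index 1 dominates; that randomness is exactly where "hallucinations" come from in this framing.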