dreski
@dreski
LLMs reflect human understanding because they’re trained on human-generated text, essentially compressing our collective experience into digital form. They cannot represent knowledge beyond human perception unless we change how we train them, perhaps with synthetic data or simulations. Their apparent ‘thinking’ is simply the result of a changing context, not a dynamic consciousness. Errors and hallucinations arise because they’re trained on conflicting human ideas. Future improvements in LLMs depend on better context management and on expanding training data beyond human limitations.
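A minimal sketch of the "thinking is just changing context" point, with an illustrative stand-in for a real model (the function and vocabulary here are made up for the example): the weights never change during a run, and the only thing that evolves is the context the next prediction is conditioned on.

```python
# Toy sketch: generation as a loop that repeatedly conditions on a growing
# context. The "model" below is a placeholder, not any real LLM API; the
# point is that all apparent state lives in the context, not in the model.

import random

def next_token(context: list[str]) -> str:
    """Stand-in for a frozen LLM: maps a context to a next token.
    A real model would compute a probability distribution here."""
    vocab = ["the", "model", "reads", "its", "own", "output", "."]
    random.seed(" ".join(context))      # deterministic given the same context
    return random.choice(vocab)

def generate(prompt: list[str], steps: int = 8) -> list[str]:
    context = list(prompt)              # the only "memory" that exists
    for _ in range(steps):
        token = next_token(context)     # weights never change mid-run
        context.append(token)           # new state = old context + one token
    return context

print(" ".join(generate(["LLMs", "reflect"])))
```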