
dreski
@dreski
Large language models (LLMs) reflect human understanding because they are trained primarily on text produced by people. These models effectively compress and store human experiences, perceptions, and ideas in a digital format. As a result, their responses feel familiar, logical, and often insightful, since they mirror patterns derived directly from human language.
However, precisely because their training is human-centric, LLMs have clear boundaries. Their "knowledge" is inherently constrained by human perception, cognition, and the types of experiences humans can articulate through language. This concept can be illustrated through the term "umwelt," which describes the perceptual world unique to each organism: the set of experiences and interactions it can naturally access. An LLM, therefore, encodes a collection of human umwelts, not a universal or objective reality.
Large language models, with their expanding context windows, tool integrations, and memory strategies, increasingly resemble general-purpose reasoning systems capable of automating a wide range of tasks. In constrained, short-term interactions, they can simulate intelligent assistance with notable effectiveness. However, this performance does not extend seamlessly to longer-term conversations that span diverse domains, formats, or evolving models. In such cases, the absence of durable context management structures becomes apparent.
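To make that gap concrete, here is a minimal sketch of what a "durable context management structure" could look like. Everything in it is hypothetical and invented for illustration (the names DurableContextStore, ContextEntry, and recall, the JSON file backing, the exact-match topic lookup); it is not any existing product's API. The only point it demonstrates is that remembered context lives outside a single session or model and can be reloaded later.

```python
import json
from dataclasses import dataclass, asdict, field
from pathlib import Path


@dataclass
class ContextEntry:
    """One remembered fact: its topic, its content, and which model recorded it."""
    topic: str
    content: str
    source_model: str


@dataclass
class DurableContextStore:
    """A context store persisted to disk, so it outlives any single session or model.

    Deliberately naive: a JSON file and exact topic matching. A real system would
    need retrieval, summarization, and conflict handling on top of this.
    """
    path: Path
    entries: list = field(default_factory=list)

    def load(self) -> None:
        # Reload whatever earlier sessions (possibly with other models) wrote.
        if self.path.exists():
            self.entries = [ContextEntry(**e) for e in json.loads(self.path.read_text())]

    def add(self, entry: ContextEntry) -> None:
        # Append and persist immediately, so nothing depends on the current session surviving.
        self.entries.append(entry)
        self.path.write_text(json.dumps([asdict(e) for e in self.entries], indent=2))

    def recall(self, topic: str) -> list:
        # Retrieve prior context on a topic regardless of which model recorded it.
        return [e for e in self.entries if e.topic == topic]


if __name__ == "__main__":
    store = DurableContextStore(path=Path("context.json"))
    store.load()
    store.add(ContextEntry("trip-planning", "User prefers morning flights.", "model-a"))
    # In a later conversation, a different model can start from the same accumulated context.
    for entry in store.recall("trip-planning"):
        print(f"[{entry.source_model}] {entry.content}")
```

Even in this toy form, the design choice matters: the store, not the model or the chat window, is the unit of continuity, which is exactly what short-lived context windows lack.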