@dreski
Large language models (LLMs) reflect human understanding because they are trained primarily on text produced by people. In effect, these models compress and store human experiences, perceptions, and ideas in a digital format. As a result, their responses feel familiar, logical, and often insightful, since they mirror patterns derived directly from human language. But precisely because their training is human-centric, LLMs have clear boundaries: their "knowledge" is constrained by human perception, cognition, and the kinds of experience humans can articulate in language. The biological term "umwelt" captures this well. An umwelt is the perceptual world unique to each organism, the set of experiences and interactions it can naturally access. An LLM therefore encodes a collection of human umwelts, not a universal or objective reality.

@dreski
Because of this human-centered limitation, LLMs currently cannot meaningfully represent knowledge or experiences that lie outside human perceptual capacities. They cannot, for instance, authentically describe the sensory perceptions or cognitive processes of bats or other animals whose experience differs fundamentally from ours. If we want models capable of representing realities beyond human perception, we will have to change how we train them, perhaps by incorporating synthetic data generated without human mediation, such as data drawn from simulated environments or non-human sensory modalities.
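As a toy illustration of what "synthetic data generated without human mediation" could look like, here is a minimal sketch of a simulated echolocation-style sensor whose raw readings are serialized into text records that a model could be trained on directly. Every name, parameter, and the scene model itself are invented for illustration; this is not drawn from any existing training pipeline.

```python
import random

def simulate_echo_returns(num_pings: int, max_range_m: float = 10.0):
    """Crude echolocation-style sensor: each ping returns a list of
    (delay_seconds, intensity) pairs for randomly placed reflectors.
    The scene model and all parameters are hypothetical."""
    speed_of_sound = 343.0  # m/s in air
    pings = []
    for _ in range(num_pings):
        # Random reflectors at random distances from the emitter.
        reflectors = [random.uniform(0.5, max_range_m)
                      for _ in range(random.randint(1, 5))]
        echoes = []
        for dist in reflectors:
            delay = 2 * dist / speed_of_sound       # round-trip time
            intensity = 1.0 / (dist ** 2 + 1e-6)    # inverse-square falloff
            echoes.append((delay, intensity))
        pings.append(sorted(echoes))                # order echoes by arrival time
    return pings

def serialize_ping(echoes) -> str:
    """Turn one ping's echoes into a plain-text record, with no
    human-language description of the scene anywhere in the data."""
    return "PING " + " ".join(f"{d * 1000:.2f}ms:{i:.4f}" for d, i in echoes)

if __name__ == "__main__":
    corpus = [serialize_ping(p) for p in simulate_echo_returns(num_pings=3)]
    print("\n".join(corpus))
```

The point of the sketch is only that the training signal originates in a simulated physical process rather than in text a person wrote about that process.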

@dreski
It is also important to clarify what we commonly call "thinking" in the context of LLMs. The model itself is static; it does not alter its internal weights between interactions. The perceived thought or conversational continuity arises entirely from manipulating and evolving the context provided to the model. One can think of an LLM as a detailed book that never changes: its apparent intelligence and responsiveness depend entirely on how we navigate it and select the relevant passages. "Thinking," in this sense, lives in context management, not in any intrinsic dynamism of the model.
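A minimal sketch of that separation, assuming we treat the model as a frozen function of its prompt: all apparent continuity below comes from the growing context string passed back in on each turn, never from any change inside the model. The names and the trivial stand-in "model" are invented for illustration.

```python
from typing import Callable, List

def chat_loop(model: Callable[[str], str], user_turns: List[str]) -> List[str]:
    """`model` is a fixed mapping from prompt text to reply text (a stand-in
    for an LLM with frozen weights). The only thing that evolves between
    turns is the `context` string we accumulate and feed back in."""
    context = ""
    replies = []
    for turn in user_turns:
        context += f"User: {turn}\n"
        reply = model(context)              # same static mapping every call
        context += f"Assistant: {reply}\n"  # "thinking" lives in this growing text
        replies.append(reply)
    return replies

if __name__ == "__main__":
    # Deterministic, stateless stand-in model: it only ever sees the prompt.
    echo_model = lambda prompt: f"(responding to {prompt.count('User:')} turns of context)"
    for r in chat_loop(echo_model, ["hello", "what did I just say?"]):
        print(r)
```

Swap in a real LLM call for `echo_model` and the structure is the same: the weights stay fixed, and everything that looks like memory or deliberation is carried by how the context is assembled.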