dreski
@dreski
Large language models (LLMs) reflect human understanding because they are trained primarily on text produced by people. In effect, these models compress and store human experiences, perceptions, and ideas in digital form, which is why their responses feel familiar, logical, and often insightful: they mirror patterns drawn directly from human language. But precisely because their training is human-centric, LLMs have clear boundaries. Their "knowledge" is constrained by human perception, cognition, and the kinds of experience humans can articulate in language. The term "umwelt" captures this idea: it describes the perceptual world unique to each organism, the set of experiences and interactions it can naturally access. An LLM therefore encodes a collection of human umwelts, not a universal or objective reality.

dreski
@dreski
Because of this human-centered limitation, LLMs cannot currently represent knowledge or experiences that lie outside human perceptual capacities in any meaningful way. For instance, they cannot authentically describe the sensory perceptions or cognitive processes of bats or other animals whose experience differs fundamentally from ours. If we want models capable of representing realities beyond human perception, we will have to change how we train them, perhaps by incorporating synthetic data generated without human mediation, such as data drawn from simulated environments or alternative sensory modalities. A rough illustration of what that could look like is sketched below.
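To make that idea concrete, here is a minimal and entirely hypothetical Python sketch of one such pipeline: a simulated environment emits echolocation-style echo delays, and those continuous signals are discretized into token sequences a model could be trained on, with no human language in the loop. The function names (simulate_echo_returns, tokenize_delays) and every parameter are illustrative assumptions, not an existing tool or dataset.

import random

def simulate_echo_returns(num_pings: int, num_targets: int = 3, max_range: float = 10.0):
    """Simulate round-trip echo delays from a few drifting targets, one ping at a time."""
    speed = 343.0  # speed of sound in air, m/s
    targets = [random.uniform(0.5, max_range) for _ in range(num_targets)]
    pings = []
    for _ in range(num_pings):
        # Targets drift slightly between pings; the delays are all the "sensor" ever perceives.
        targets = [max(0.5, d + random.gauss(0.0, 0.05)) for d in targets]
        delays = sorted(2.0 * d / speed for d in targets)
        pings.append(delays)
    return pings

def tokenize_delays(delays, num_bins: int = 64, max_delay: float = 0.06):
    """Discretize continuous echo delays into integer tokens a sequence model could ingest."""
    return [min(num_bins - 1, int(num_bins * d / max_delay)) for d in delays]

if __name__ == "__main__":
    corpus = [tokenize_delays(ping) for ping in simulate_echo_returns(num_pings=5)]
    for step, tokens in enumerate(corpus):
        print(f"ping {step}: tokens {tokens}")  # token sequences, not human language

The only point of the sketch is that the training stream originates in the simulated sensor itself, not in anything a person wrote down or could perceive.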