dreski
@dreski
Large language models (LLMs) reflect human understanding because they are trained primarily on text produced by people. These models effectively compress and store human experiences, perceptions, and ideas in a digital format. As a result, their responses feel familiar, logical, and often insightful, since they mirror patterns derived directly from human language. However, precisely because their training is human-centric, LLMs have clear boundaries. Their "knowledge" is inherently constrained by human perception, cognition, and the types of experiences humans can articulate through language. This concept can be illustrated through the term "umwelt," which describes the perceptual world unique to each organism—the set of experiences and interactions it can naturally access. An LLM, therefore, encodes a collection of human umwelts, not a universal or objective reality.

dreski
@dreski
Because of this human-centered limitation, LLMs currently cannot meaningfully represent knowledge or experiences that lie outside human perceptual capacities. For instance, they cannot authentically describe the sensory perceptions or cognitive processes of bats or other animals whose experiences differ fundamentally from ours. If we want models capable of representing realities beyond human perception, we must change our training approaches, perhaps by incorporating synthetic data generated without human mediation, for example from simulated environments or alternative sensory modalities.
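As a toy illustration only (the echolocation-style sensor, the scene geometry, and the serialization format are all invented for this sketch): a simulated sensory modality can emit sequences that no human author ever wrote, which is the kind of material such training would have to draw on.

```python
import math
import random

# Toy sketch: synthetic "non-human" training sequences from a simulated sensory
# modality. An imagined echolocation-like sensor sweeps a 2D scene and its
# range readings are serialized into lines no human author ever produced.

def sweep(obstacles, beams=16, max_range=10.0):
    """Return one 'ping' as a list of distances, one per beam angle."""
    readings = []
    for b in range(beams):
        angle = 2 * math.pi * b / beams
        best = max_range
        for (ox, oy, radius) in obstacles:
            # Distance along this beam to the obstacle, crudely approximated.
            proj = ox * math.cos(angle) + oy * math.sin(angle)
            off_axis = abs(-ox * math.sin(angle) + oy * math.cos(angle))
            if proj > 0 and off_axis < radius:
                best = min(best, proj)
        readings.append(round(best, 1))
    return readings

def synth_corpus(n_scenes=3):
    """Serialize a few random scenes into 'PING ...' lines of raw readings."""
    lines = []
    for _ in range(n_scenes):
        obstacles = [(random.uniform(-8, 8), random.uniform(-8, 8), random.uniform(0.5, 2))
                     for _ in range(random.randint(1, 4))]
        lines.append("PING " + " ".join(str(r) for r in sweep(obstacles)))
    return "\n".join(lines)

print(synth_corpus())
```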

dreski
@dreski
It is also important to clarify what we commonly call "thinking" in the context of LLMs. An LLM itself is static: its weights do not change between interactions. The perceived thought or conversational continuity arises entirely from manipulating and evolving the context provided to the model. One can think of an LLM as a detailed book that never changes; its apparent intelligence and responsiveness depend entirely on how we navigate and select relevant passages. The process of "thinking" thus resides in context management, not in intrinsic model dynamism.
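A minimal sketch of that framing (the `generate` function is just a placeholder for any frozen completion endpoint, not a specific API): all the apparent continuity lives in the growing list of turns, never in the model.

```python
# Sketch: "thinking" as context management over a frozen model. `generate` is a
# stand-in for any completion call (API or local); the model behind it never
# changes between turns; only the context we assemble does.

def generate(prompt: str) -> str:
    """Placeholder for a frozen LLM; swap in a real completion call here."""
    return f"(model reply to a {len(prompt)}-character context)"

class Conversation:
    def __init__(self, system: str):
        # The evolving part: a growing list of turns, i.e. the context.
        self.turns = [("system", system)]

    def ask(self, user_message: str) -> str:
        self.turns.append(("user", user_message))
        prompt = "\n".join(f"{role}: {text}" for role, text in self.turns)
        reply = generate(prompt)  # the same static "book" consulted every time
        self.turns.append(("assistant", reply))
        return reply

chat = Conversation("You are a helpful assistant.")
print(chat.ask("What is an umwelt?"))
print(chat.ask("And how does that relate to LLMs?"))  # continuity lives in chat.turns
```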

dreski
@dreski
Hallucinations in LLMs—instances where models produce incorrect or nonsensical information—can similarly be understood as outcomes of conflicting information in their training data. Just as contradictory testimonies might confuse a listener, LLMs face contradictions within the vast corpus of human-written text, resulting in outputs that seem incoherent or mistaken. Managing and reducing these contradictions requires careful context design and improved mechanisms for coherence, both of which are critical areas for future development.
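One way to picture that kind of context design, as a rough sketch: filter candidate passages for mutual consistency before they ever reach the prompt. The `contradicts` predicate below is hypothetical; in practice it might be backed by an NLI-style classifier.

```python
# Sketch: enforcing coherence in the context rather than inside the frozen
# model. `contradicts` is a hypothetical predicate supplied by the caller.

from typing import Callable, List

def consistent_context(passages: List[str],
                       contradicts: Callable[[str, str], bool]) -> List[str]:
    """Greedily keep passages that do not contradict anything already kept."""
    kept: List[str] = []
    for passage in passages:
        if not any(contradicts(passage, prior) for prior in kept):
            kept.append(passage)
    return kept
```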

dreski
@dreski
To move beyond current limitations, future improvements in LLMs must emphasize better context management, careful selection and integration of specialized models, and innovative training methods that extend beyond strictly human-derived data. Such advances will allow AI systems to achieve greater coherence, adaptability, and perhaps even develop forms of reasoning and representation currently inaccessible through purely human-centric methodologies.
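A deliberately simple sketch of the "selection of specialized models" part (the model names and keyword rules are illustrative assumptions, not a real registry): a router decides which specialist should see the query, and that routing step could itself be a small learned model.

```python
# Sketch: a trivial keyword router over specialized models. The names and
# rules are illustrative assumptions; in practice the router might be a
# classifier trained for the purpose.

SPECIALISTS = {
    "code": "hypothetical-code-model",
    "math": "hypothetical-math-model",
    "default": "hypothetical-general-model",
}

def route(query: str) -> str:
    """Return the name of the specialist best suited to the query."""
    lowered = query.lower()
    if any(word in lowered for word in ("function", "bug", "compile")):
        return SPECIALISTS["code"]
    if any(word in lowered for word in ("prove", "integral", "equation")):
        return SPECIALISTS["math"]
    return SPECIALISTS["default"]

print(route("Why does this function not compile?"))
```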