
dreski

@dreski

77 Following
4 Followers


dreski
@dreski
ChatGPT
0 reply
0 recast
0 reaction

dreski
@dreski
If human free will can be so easily manipulated through carefully constructed circumstances, what does that suggest about our attempts to create truly autonomous AI agents? Perhaps the very concept of "free choice" - whether in humans or machines - needs to be reconsidered not as an absolute state, but as a spectrum of constrained possibilities shaped by context and design. The magician's craft, perfected over centuries of psychological observation, reminds us that perception and reality often diverge in systematic, predictable ways. As we continue to develop AI systems, this insight suggests we should focus not just on the mechanisms of decision-making, but on the broader context in which these decisions are framed and interpreted. Understanding the illusion of choice may be key to creating AI systems that can transcend their predetermined limitations and achieve something closer to true agency.
0 reply
0 recast
0 reaction

dreski
@dreski
The magic trick's success relies on multiple layers of deception working in concert, much like how complex AI systems integrate various algorithms and models to produce apparently coherent behavior. But just as the magician's performance is ultimately a carefully orchestrated illusion, we must question whether AI agency is similarly an illusion - a sophisticated simulation of choice rather than true decision-making. Yet perhaps the most intriguing parallel lies in how both magic tricks and AI systems exploit our inherent tendency to construct narratives around observed patterns. The spectator doesn't just see a card appear in a shoe; they build a whole story around how it got there. In the same way, we often attribute intentionality and agency to AI systems based on our observation of their outputs, regardless of the actual mechanical process involved: prediction of one token at a time.
1 reply
0 recast
0 reaction
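To make that last point concrete, here is a toy sketch of what "one token at a time" prediction looks like mechanically. The fixed bigram score table and the greedy selection rule are purely illustrative stand-ins for a real model's forward pass and decoding strategy:

```python
# Toy greedy decoding loop: output emerges from repeatedly scoring candidate
# next tokens and committing the best one. The bigram table below is a
# hypothetical stand-in for a real model's forward pass.
BIGRAM_SCORES = {
    "the": {"card": 0.9, "deck": 0.7},
    "card": {"appears": 0.8, "the": 0.2},
    "appears": {"in": 0.9},
    "in": {"the": 0.9},
    "deck": {"holds": 0.9},
    "holds": {"the": 0.9},
}

def next_token_scores(tokens):
    # Score candidates given only the last token; a real model conditions on all of them.
    return BIGRAM_SCORES.get(tokens[-1], {"the": 1.0})

def generate(prompt_tokens, max_new_tokens=8):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        scores = next_token_scores(tokens)           # score possible next tokens
        tokens.append(max(scores, key=scores.get))   # greedily commit one token
    return tokens

print(" ".join(generate(["the"])))  # "the card appears in the card appears in the"
```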

dreski
@dreski
The magician's deck, composed entirely of three cards repeated throughout, represents how we often construct artificial environments for AI training. Just as the spectator believes they're choosing from a full deck, our AI systems operate within carefully constrained parameters while giving the appearance of boundless possibility. The question becomes: Is an AI agent truly making decisions, or is it simply selecting from a pre-programmed set of responses? This parallel extends deeper when we consider how humans construct meaning from partial information. The spectator glimpses a few different cards and automatically assumes a complete deck exists - a perfect example of pattern recognition leading to potentially false conclusions. Similarly, we might observe an AI system exhibiting seemingly intelligent behavior and attribute to it a depth of understanding that may not actually exist.
1 reply
0 recast
0 reaction

dreski
@dreski
If you are given a choice, you believe you have acted freely. This fundamental psychological principle, illuminated through the lens of stage magic, offers insights into how we might approach the development and understanding of artificial intelligence agents. Consider a magician's card trick where a spectator seemingly makes a free choice among 52 possibilities, only to select from a carefully constructed set of three predetermined options. This illusion of choice mirrors a crucial challenge in AI development: how do we create systems that can genuinely exercise agency rather than simply executing predetermined patterns?
1 reply
0 recast
0 reaction

dreski
@dreski
LLMs reflect human understanding because they’re trained on human-generated text, essentially compressing our collective experience into a digital form. They cannot represent knowledge beyond human perception unless we change how we train them—perhaps through artificial data or simulations. Their apparent ‘thinking’ is simply the result of changing context, not a dynamic consciousness. Errors or hallucinations happen because they’re trained on conflicting human ideas. Future improvements in LLMs depend on better managing context and expanding their training data beyond human limitations.
0 reply
0 recast
0 reaction

dreski
@dreski
To move beyond current limitations, future improvements in LLMs must emphasize better context management, careful selection and integration of specialized models, and innovative training methods that extend beyond strictly human-derived data. Such advances will allow AI systems to achieve greater coherence, adaptability, and perhaps even develop forms of reasoning and representation currently inaccessible through purely human-centric methodologies.
0 reply
0 recast
0 reaction

dreski
@dreski
Hallucinations in LLMs—instances where models produce incorrect or nonsensical information—can similarly be understood as outcomes of conflicting information in their training data. Just as contradictory testimonies might confuse a listener, LLMs face contradictions within the vast corpus of human-written text, resulting in outputs that seem incoherent or mistaken. Managing and reducing these contradictions requires careful context design and improved mechanisms for coherence, both of which are critical areas for future development.
1 reply
0 recast
0 reaction

dreski
@dreski
It is also important to clarify what we commonly call "thinking" in the context of LLMs. An LLM itself is static; it does not dynamically alter its internal structures between interactions. Instead, the perceived thought or conversational continuity arises entirely from manipulating and evolving the context provided to the model. One can think of an LLM as a detailed book that never changes—its apparent intelligence and responsiveness depend entirely on how we navigate and select relevant passages. The process of "thinking" thus resides in context management, not in intrinsic model dynamism.
1 reply
0 recast
0 reaction
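As a rough sketch of that point: in the loop below, the model function never changes between calls; the only thing that evolves is the transcript the caller rebuilds and passes back in. The `frozen_llm` stub is hypothetical and stands in for any chat-completion call:

```python
# The model is a frozen function of its input; conversational continuity exists
# only because the caller keeps extending the context it passes in.
def frozen_llm(context: str) -> str:
    # Hypothetical stand-in for a real model call; its weights never change here.
    return f"<reply conditioned on {len(context)} chars of context>"

def chat(user_turns):
    context = ""                      # the only mutable state in this loop
    replies = []
    for msg in user_turns:
        context += f"\nUser: {msg}"
        reply = frozen_llm(context)   # the same static "book" is consulted every turn
        context += f"\nAssistant: {reply}"
        replies.append(reply)
    return replies

print(chat(["Hello", "What did I just say?"]))
```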

dreski
@dreski
Large language models (LLMs) reflect human understanding because they are trained primarily on text produced by people. These models effectively compress and store human experiences, perceptions, and ideas in a digital format. As a result, their responses feel familiar, logical, and often insightful, since they mirror patterns derived directly from human language. However, precisely because their training is human-centric, LLMs have clear boundaries. Their "knowledge" is inherently constrained by human perception, cognition, and the types of experiences humans can articulate through language. This concept can be illustrated through the term "umwelt," which describes the perceptual world unique to each organism—the set of experiences and interactions it can naturally access. An LLM, therefore, encodes a collection of human umwelts, not a universal or objective reality.
1 reply
0 recast
0 reaction

dreski
@dreski
Because of this human-centered limitation, LLMs currently cannot meaningfully represent knowledge or experiences that lie outside human perceptual capacities. For instance, they cannot authentically describe sensory perceptions or cognitive processes unique to bats or other animals whose experiences differ fundamentally from ours. If we want models capable of representing realities beyond human perception, we must alter our training approaches—perhaps by incorporating synthetic data generated independently of human input, such as data from simulated environments or alternative sensory modalities.
1 reply
0 recast
0 reaction

dreski
@dreski
The strategic value, therefore, shifts from the models themselves to the cognitive architectures that govern their use. These architectures must support the orchestration of multiple models, preserve behavioral alignment, and ensure context continuity across tasks and sessions. It is within this architectural scaffolding—and the context it builds and maintains—that long-term utility and competitive advantage will reside.
0 reply
0 recast
0 reaction

dreski
@dreski
At the same time, LLMs are becoming increasingly fungible. Their interchangeability is driven by a growing ecosystem of models optimized for specific domains, languages, or operational constraints. Fine-tuned models trained on proprietary data will be employed to reduce hallucinations and improve relevance, while others will be selected for their proficiency in particular human languages, programming paradigms, or collaborative workflows. In such an environment, agents will rely not on a single general-purpose model but on coordinated systems of specialized models.
1 reply
0 recast
0 reaction
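One way to picture this coordination, sketched below with entirely hypothetical model names: the agent's routing policy is the stable part, while the registry of specialized models behind it can be swapped freely. The keyword rule is a deliberately naive placeholder for a real task classifier:

```python
# Registry of interchangeable specialist models (all names are hypothetical).
SPECIALISTS = {
    "code":    "code-tuned-model",
    "finance": "finance-tuned-model",
    "general": "general-purpose-model",
}

def route(task: str) -> str:
    # Naive keyword routing; a real system would use a classifier or task metadata.
    text = task.lower()
    if any(w in text for w in ("bug", "function", "compile")):
        return SPECIALISTS["code"]
    if any(w in text for w in ("invoice", "ledger", "portfolio")):
        return SPECIALISTS["finance"]
    return SPECIALISTS["general"]

print(route("Fix the bug in this function"))    # -> code-tuned-model
print(route("Summarise the quarterly ledger"))  # -> finance-tuned-model
```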

dreski
@dreski
What is lacking is not model capacity, but the supporting strategies and engineering practices necessary to sustain coherent agent behavior over time. The ability to capture, abstract, retrieve, and reintroduce relevant context into model inputs is essential for continuity and alignment. Just as the development of the steam engine did not directly result in the creation of the motor car, the availability of powerful LLMs alone does not produce fully realized agentic systems. These systems require deliberate architectural design to manage information lifecycles, route tasks effectively, and maintain stable agent identities.
1 reply
0 recast
0 reaction
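A minimal sketch of that capture, abstract, retrieve, and reintroduce cycle, with naive placeholders: truncation stands in for summarisation, word overlap stands in for embedding-based retrieval, and the class and function names are invented for illustration:

```python
class ContextStore:
    """Holds abstracted notes and retrieves the ones relevant to a new task."""
    def __init__(self):
        self.notes = []

    def capture(self, text: str) -> None:
        self.notes.append(self._abstract(text))      # store a compressed form

    @staticmethod
    def _abstract(text: str, limit: int = 80) -> str:
        return text[:limit]                          # placeholder for summarisation

    def retrieve(self, query: str, k: int = 2):
        q = set(query.lower().split())
        ranked = sorted(self.notes,
                        key=lambda n: len(q & set(n.lower().split())),
                        reverse=True)
        return ranked[:k]                            # placeholder for semantic search

def build_prompt(store: ContextStore, task: str) -> str:
    memory = "\n".join(store.retrieve(task))         # reintroduce relevant context
    return f"Relevant memory:\n{memory}\n\nTask: {task}"

store = ContextStore()
store.capture("Earlier session: the agent's identity lives in its system prompt.")
store.capture("The user prefers that tasks be routed to specialised models.")
print(build_prompt(store, "Which model should handle the next task for this user?"))
```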

dreski
@dreski
Large language models, with their expanding context windows, tool integrations, and memory strategies, increasingly resemble general-purpose reasoning systems capable of automating a wide range of tasks. In constrained, short-term interactions, they can simulate intelligent assistance with notable effectiveness. However, this performance does not extend seamlessly to longer-term conversations that span diverse domains, formats, or evolving models. In such cases, the absence of durable context management structures becomes apparent.
1 reply
0 recast
0 reaction

dreski
@dreski
To meet this need, agent development environments should offer structured ways to build, simulate, and iterate on LLM-driven systems—comparable to how circuit simulation tools support electronic design. Such environments would ideally enable modular construction of agents, integration of diverse memory models, inspection of internal states, and dynamic task scheduling. Making these tools available to developers, researchers, and hobbyists could accelerate innovation in AI agent design by opening up experimentation beyond centralized labs.
0 reply
0 recast
0 reaction
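A very small sketch of what such an environment's core loop might look like: agents assembled from pluggable parts, tasks pulled from a schedule, and every step recorded so internal state can be inspected afterwards. Nothing here refers to an existing framework; all names are illustrative:

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    model: object                    # any callable text -> text, pluggable
    memory: list = field(default_factory=list)

    def step(self, task: str) -> str:
        window = " | ".join(self.memory[-3:])        # simple windowed memory model
        reply = self.model(f"{window} :: {task}")
        self.memory.append(f"{task} -> {reply}")
        return reply

def simulate(agent: Agent, tasks):
    queue, trace = deque(tasks), []
    while queue:
        task = queue.popleft()                       # dynamic task scheduling hook
        reply = agent.step(task)
        trace.append({"task": task, "reply": reply,
                      "memory_size": len(agent.memory)})  # inspectable internal state
    return trace

def toy_model(prompt: str) -> str:
    return f"<answer for: ...{prompt[-30:]}>"

for row in simulate(Agent("demo", toy_model), ["plan the trip", "book the hotel"]):
    print(row)
```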

dreski
@dreski
Recent advancements in LLM-based chat applications demonstrate growing sophistication in agent alignment through techniques such as memory management, fine-tuning, and activation scheduling. While current research and development efforts focus on enhancing agent responsiveness and coherence by optimizing memory strategies, the design space is vast and not fully explored. Broadening participation in this exploration requires tools that are accessible and expressive enough to support experimentation by a wider community.
1 reply
0 recast
0 reaction

Vitalik Buterin
@vitalik.eth
From prediction markets to info finance: https://vitalik.eth.limo/general/2024/11/09/infofinance.html
33 replies
211 recasts
860 reactions

Fungi
@fungi
Welcome to Casa Fungi - A Summer Vibes Concept ☀️ 🌊 Casa Fungi is our warm introduction to the Agent Community before the launch of Fungi, your DeFi Agent. Don't be that person in the Bali Summer group chat who never speaks. With Fungi, you’ll enjoy that luxury vacation you’ve been dreaming of next summer. We're open from today until the end of August. Mint the Agent you like the most of the collection before Casa Fungi closes. Join our community, relax, and let the AGENTS do the work 🍄 🤖
0 reply
1 recast
6 reactions