Brenner
@brenner.eth
I think *randomness* is part of what makes us human and why these LLMs feel so robotic. if you ask the LLM "give me 1 word" over and over again in different chat sessions, it'll mostly give you the same word
2 replies
0 recast
7 reactions
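(why that happens, sketched: with the usual temperature-scaled softmax sampling, when one token's logit dominates, even non-greedy decoding picks it almost every time. the logits and words below are made up for illustration)

```python
import math
import random

def sample(logits, temperature=1.0):
    """Sample a token index from logits after temperature scaling."""
    if temperature == 0:  # greedy decoding: always take the argmax
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]

# hypothetical logits for the prompt "give me 1 word":
# one word dominates, so it wins the vast majority of samples
words = ["serendipity", "ephemeral", "luminous", "cat"]
logits = [4.0, 1.5, 1.0, 0.5]

counts = {w: 0 for w in words}
for _ in range(1000):
    counts[words[sample(logits, temperature=0.7)]] += 1
# counts will be heavily skewed toward "serendipity"
```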
Leeward Bound
@leewardbound
is it truly randomness though or is it the context of your day to day stream of consciousness? if you were to put an LLM in a robot body, send it to school for 12+y, add RAG that includes both realtime events and past memories, and loop in some kind of a subconscious "current mood" sentiment, i think you'd surely get a different 1 word every time, right?
1 reply
0 recast
1 reaction
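(a toy sketch of what "looping in a mood" could look like: prepend a sentiment signal and retrieved memories to the prompt, so the same question lands in a different context each time. the mood source here is just a random pick, a stand-in for whatever subconscious state loop a real system would use)

```python
import random
from datetime import date

# hypothetical mood pool; a real system might derive this from
# recent interactions, realtime events, or an internal state loop
MOODS = ["nostalgic", "restless", "playful", "melancholy", "curious"]

def current_mood(seed=None):
    """Stand-in for a 'subconscious' sentiment signal."""
    rng = random.Random(seed)
    return rng.choice(MOODS)

def build_prompt(question, mood, memories=()):
    """Assemble a prompt with mood + retrieved memories as context."""
    context = [f"(internal state: feeling {mood} today, {date.today()})"]
    context += [f"(memory: {m})" for m in memories]
    return "\n".join(context) + "\n" + question

prompt = build_prompt("give me 1 word", current_mood(),
                      memories=["rained all morning"])
```

same question, different context every call, so the answer can drift the way a person's would.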
Brenner
@brenner.eth
Yeah you’re right, I’m using “randomness” very broadly. “Loop in some kind of subconscious mood” is the key here - it’s the coroutines of neurons that are always firing in our heads that lead to the “randomness”
1 reply
0 recast
1 reaction
Leeward Bound
@leewardbound
it raises a really interesting point; i hadn't considered this before - im personally working on a self-learning RAG framework that continuously builds itself from ingested knowledge and user interactions, bc that sounds the most useful for improving reasoning and thinking skills. but if we wanted to improve creative writing and improv comedy outputs (like "pick a random word") to make it feel more "human", a subconscious mood tool would certainly yield really fascinating outputs and probably make it more relatable: "today im writing emo haikus and drawing charcoal self-portraits, bc im sad about world affairs" etc
0 reply
0 recast
1 reaction
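(the "continuously builds itself" part, reduced to a toy: every interaction is written back into the store and ranked for later retrieval. scoring here is plain word overlap purely for illustration; a real framework would use embeddings)

```python
class MemoryStore:
    """Tiny self-updating retrieval store."""

    def __init__(self):
        self.docs = []

    def ingest(self, text):
        """Every interaction gets written back into the store."""
        self.docs.append(text)

    def retrieve(self, query, k=2):
        """Rank stored docs by word overlap with the query."""
        q = set(query.lower().split())
        scored = sorted(self.docs,
                        key=lambda d: len(q & set(d.lower().split())),
                        reverse=True)
        return scored[:k]

store = MemoryStore()
store.ingest("user likes haikus about rain")
store.ingest("user asked about charcoal drawing techniques")
store.ingest("discussed world affairs yesterday")
hits = store.retrieve("write a haiku about rain")
```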