dylan
@dylsteck.eth
dang imagine your ai agent / copilot using this to make changes to your codebase then autonomously deploy to vercel or a github action https://x.com/rauchg/status/1866209983588900917
2 replies
2 recasts
7 reactions
Zach
@zd
I was actually thinking about this concept last night before I went to sleep. It would be really cool if talking to an agent created memories that update the system prompt instead of just being stored in some database. Just as humans learn a lot from their environment, so too could the agent. But in this case, the learnings would actually impact its *personality* rather than just being a memory.
1 reply
0 recast
0 reaction
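a minimal sketch of the idea in Zach's cast, assuming a hypothetical `Agent` class (not a real library): each conversation distills a "learning" that is appended to the system prompt itself, instead of being written to an external memory database, so experience shapes the agent's personality directly.

```python
# Hypothetical sketch: memories mutate the system prompt rather than
# living in a separate memory store. Agent and distill_learning are
# illustrative names, not a real API.

class Agent:
    def __init__(self, base_prompt: str):
        self.system_prompt = base_prompt
        self.learnings: list[str] = []

    def distill_learning(self, conversation: str) -> str:
        # A real agent would call an LLM here to summarize what was
        # learned; this placeholder just tags the raw exchange.
        return f"Learned from conversation: {conversation[:60]}"

    def converse(self, message: str) -> None:
        learning = self.distill_learning(message)
        self.learnings.append(learning)
        # The learning updates the system prompt itself, so future
        # responses are shaped by everything the agent has absorbed.
        self.system_prompt += f"\n- {learning}"


agent = Agent("You are a helpful assistant.")
agent.converse("The user prefers terse, direct answers.")
print(agent.system_prompt)
```

the interesting design choice is that the prompt grows with every exchange, so a real version would also need some way to compress or prune old learnings.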
dylan
@dylsteck.eth
ahh that'd be cool! there's this tool i've seen called braintrust that can run llm evals, like tweaking your prompt based on responses/response quality, but i wonder if that's more meant for an agent that's single-purposed (like it browses the web and you wanna make sure the llm doesn't hallucinate) https://www.braintrust.dev
1 reply
0 recast
0 reaction
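a generic sketch of the eval loop dylan describes (not Braintrust's actual API): score each candidate prompt against a fixed set of test cases and keep the best-scoring one. the grader here is a stand-in assumption; a real eval would call an LLM and grade its responses.

```python
# Illustrative prompt-eval loop. score() is a placeholder grader, not a
# real LLM judge; in practice it would query a model and rate the output.

def score(prompt: str, case: dict) -> float:
    # Placeholder: reward prompts that mention the case's expected topic.
    return 1.0 if case["topic"] in prompt else 0.0

def pick_best_prompt(candidates: list[str], test_cases: list[dict]) -> str:
    # Average the per-case scores for each candidate and return the best.
    def avg(p: str) -> float:
        return sum(score(p, c) for c in test_cases) / len(test_cases)
    return max(candidates, key=avg)

cases = [{"topic": "weather"}, {"topic": "news"}]
prompts = ["Summarize the weather and news.", "Tell a joke."]
print(pick_best_prompt(prompts, cases))  # → Summarize the weather and news.
```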