Brenner
@brenner.eth
Need a secondary agent to just always be asking "is this a good idea?" about everything my primary agent is planning on doing. Not that different from all the voices in my head

Chris Carlson
@chrislarsc.eth
good read to counter the concept of multi-agent architecture/thinking; instead, the same agent should simply be given the "is this a good idea?" prompt as a subtask https://cognition.ai/blog/dont-build-multi-agents
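[Editor's note: a minimal sketch of the single-agent pattern described above. The `complete` function is a hypothetical stand-in for any chat-completion API; here it is an offline stub so the sketch is runnable, not a real model call.]

```python
# Single-agent pattern: the same agent, with its one shared context, gets
# "is this a good idea?" as a follow-up subtask instead of a second agent.

def complete(messages):
    # Offline stub standing in for an LLM chat-completion call. It keys off
    # the last user message so the sketch runs without a model.
    last = messages[-1]["content"]
    if last.startswith("Plan"):
        return "1) draft schema 2) run migration 3) deploy"
    return "Risk: step 3 has no rollback; otherwise reasonable."

def plan_with_self_check(task):
    messages = [{"role": "user", "content": f"Plan how to: {task}"}]
    plan = complete(messages)
    messages.append({"role": "assistant", "content": plan})
    # Same agent, same context: the critique subtask sees the full history,
    # so nothing is lost handing work between separate agents.
    messages.append({"role": "user", "content": "Is this a good idea? List the risks."})
    critique = complete(messages)
    return plan, critique

plan, critique = plan_with_self_check("ship the DB migration")
```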

Brenner
@brenner.eth

Chris Carlson
@chrislarsc.eth
ah yes. well, many ways to architect. i appreciate them sharing their learnings so other teams don't waste time making similar mistakes

Brenner
@brenner.eth
A lot of other thoughts here:
1. The way they break the subtasks down matters a lot. Ideally, each subtask doesn't need the context of the others, similar to when you break work out for multiple people.
2. The context compression LLM *is another agent*...
3. The original agent will be biased toward thinking it's a good idea because it's their idea. A separate agent (potentially with a different random seed) has a higher chance of having a differing opinion.
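[Editor's note: a minimal sketch of point 3 above, the separate critic with its own seed. The `critic` function is a hypothetical stub for a fresh-context model call; the canned verdicts are illustrative only. Seeding Python's `random.Random` with a string is deterministic across runs.]

```python
import random

# Illustrative verdicts a critic model might sample from.
VERDICTS = [
    "looks solid",
    "risky: hidden coupling between steps",
    "bad idea without a rollback plan",
]

def critic(plan, seed):
    # Stub for a second model call at temperature > 0. The seed models the
    # "different random seed" that lets the critic land on its own opinion.
    rng = random.Random(f"{plan}|{seed}")
    return rng.choice(VERDICTS)

def independent_review(plan, planner_seed):
    # Deliberately use a seed different from the planner's, so the reviewer
    # samples its own trajectory instead of rubber-stamping its own idea.
    return critic(plan, seed=planner_seed + 1)
```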

Chris Carlson
@chrislarsc.eth
i think you're right! my response was mainly driven by my own experience of:
1. learning how these systems are built
2. starting to think about possibilities and imagining "complex/more sophisticated" architectures with multiple agents
3. reading the above and having the realization, "ah! multi-agent != better in a lot of instances and here's why"

Brenner
@brenner.eth
I’ve been thinking (but not yet experimenting) in the other direction: https://chatgpt.com/share/685b083f-3fa8-8012-9b10-30b3457fadc2

Chris Carlson
@chrislarsc.eth
my intuition says that more complex architectures become moot as models improve and token limits increase. improvements gained from a micro-agent setup are not worth the effort to create and maintain?

Brenner
@brenner.eth
Michael Levin and biology would disagree

Chris Carlson
@chrislarsc.eth
ai becoming more human does not mean its architecture should be more biological