Stephan pfp
Stephan
@stephancill
if you're using an LLM agent to debug a tough issue that ends up going in circles, i've found that it helps to tell it to insert console.log statements to test its assumptions, run your failure case, and feed the debug logs to the LLM – works pretty well for narrowing down the root cause
6 replies
1 recast
35 reactions
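The workflow Stephan describes might look like this hypothetical sketch (the function and data here are invented for illustration): log each assumption, run the failing case, and paste the output back to the model instead of letting it guess.

```javascript
// Hypothetical failing case: order totals come out wrong.
// Each console.log tests one assumption the model might be making;
// the printed output is what you'd feed back to the LLM.
function sumOrderTotals(orders) {
  console.log('[debug] assumption: orders is an array ->', Array.isArray(orders));
  let total = 0;
  for (const order of orders) {
    // assumption under test: every amount is a number, not a string
    console.log('[debug] amount:', order.amount, 'typeof:', typeof order.amount);
    total += Number(order.amount); // coercion added once the logs showed string amounts
  }
  console.log('[debug] final total:', total);
  return total;
}

// reproduce the failure case: amounts arrive as strings from an API
sumOrderTotals([{ amount: '10' }, { amount: '5' }]);
```

Here the logs would reveal `typeof: string`, pinpointing the root cause (string concatenation instead of addition) in one run.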

Gabriel Ayuso pfp
Gabriel Ayuso
@gabrielayuso.eth
I always start with logging and ask to verify with logs before assuming the code is right. Otherwise it tends to assume it fixed the problem and delete the logging afterwards lol. I'm like hold your horses. I'll tell you whether the bug is fixed. https://farcaster.xyz/gabrielayuso.eth/0x5e52920a
2 replies
0 recast
6 reactions

Jason pfp
Jason
@jachian
And if it’s producing a lot of logs from the attempts, summarizing the logs between attempts and failures bogs down the context less
1 reply
0 recast
1 reaction

Chaz Schmidt pfp
Chaz Schmidt
@chazschmidt
Sometimes swearing at it helps. If you're jarring enough sometimes it pushes the model to a new headspace. 🤷
1 reply
0 recast
1 reaction

compusophy pfp
compusophy
@compusophy
how do you create unit tests with LLM? finding that my unit tests need unit tests...
1 reply
0 recast
1 reaction

Royal pfp
Royal
@royalaid.eth
Also nuking the context and pretending it's a fresh issue you're diving into to debug helps, as it removes the bias that the earlier context put on the vector space
1 reply
0 recast
1 reaction

thatdamnboy.base.eth pfp
thatdamnboy.base.eth
@hitman42.eth
Turning the LLM into a co-debugger, not just a guess machine. Love it.
0 reply
0 recast
0 reaction