Stephan pfp
Stephan
@stephancill
If you're using an LLM agent to debug a tough issue and it ends up going in circles, I've found it helps to tell it to insert console.log statements to test its assumptions, run your failure case, and feed the debug logs back to the LLM. Works pretty well for narrowing down the root cause (sketch below).
6 replies
1 recast
36 reactions
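
To make the workflow concrete, here's a minimal sketch of what that instrumentation can look like. Everything in it (the parseOrder function, the payload, the specific assumptions) is hypothetical, not from the thread; the point is just that each console.log checks one assumption, and the captured output from re-running the failure case gets pasted back to the LLM.

```typescript
// Hypothetical failing code path: parseOrder is suspected of mishandling
// the order payload. Each console.log tests one assumption so the log
// output from the failure case points at which assumption breaks.

interface Order {
  id: string;
  items: { sku: string; qty: number }[];
  total: number;
}

function parseOrder(raw: string): Order {
  const data = JSON.parse(raw);
  console.log("[debug] raw keys:", Object.keys(data)); // assumption: payload has id/items/total
  console.log("[debug] items is array:", Array.isArray(data.items)); // assumption: items is an array

  const items = (data.items ?? []).map((it: any) => ({
    sku: String(it.sku),
    qty: Number(it.qty),
  }));
  console.log("[debug] parsed items:", items); // assumption: qty parses to a number

  const total = items.reduce((sum: number, it: { qty: number }) => sum + it.qty, 0);
  console.log("[debug] computed total:", total); // assumption: computed total matches data.total

  return { id: data.id, items, total };
}

// Reproduce the failure case, then paste the [debug] lines back to the LLM.
parseOrder('{"id":"o-1","items":[{"sku":"a","qty":"2"}],"total":2}');
```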

Gabriel Ayuso pfp
Gabriel Ayuso
@gabrielayuso.eth
I always start with logging and ask it to verify with logs before assuming the code is right. Otherwise it tends to assume it fixed the problem and delete the logging afterwards, lol. I'm like, hold your horses, I'll tell you whether the bug is fixed. https://farcaster.xyz/gabrielayuso.eth/0x5e52920a
2 replies
0 recast
6 reactions

Stephan pfp
Stephan
@stephancill
Kind of along this vein: another trick I've been doing when getting an LLM to implement interfaces that interact with remote APIs is telling it to use curl to fetch sample data so that it can test as it goes (sketch below).
1 reply
0 recast
1 reaction
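
A rough sketch of that pattern, under assumptions not in the thread: the public jsonplaceholder.typicode.com API stands in for whatever remote service the interface targets. The agent first curls the endpoint for real sample data, derives the types from the actual response, and keeps a small runnable check it can re-run as it iterates.

```typescript
// Hypothetical example: implementing a typed client for a remote todos API.
// Instead of guessing the response shape, the agent first runs:
//
//   curl -s https://jsonplaceholder.typicode.com/todos/1
//
// inspects the real JSON, then writes the interface and a quick self-check.

interface Todo {
  userId: number;
  id: number;
  title: string;
  completed: boolean;
}

async function fetchTodo(id: number): Promise<Todo> {
  const res = await fetch(`https://jsonplaceholder.typicode.com/todos/${id}`);
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return (await res.json()) as Todo;
}

// A throwaway check the agent can run against live data as it goes.
fetchTodo(1).then((todo) => {
  console.log("fields:", Object.keys(todo));
  console.log("completed is boolean:", typeof todo.completed === "boolean");
});
```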

Gabriel Ayuso pfp
Gabriel Ayuso
@gabrielayuso.eth
I don't use this much, but I did use it in a one-off personal tool that I had an LLM write. Anything the LLM can run itself via commands to verify behavior and fetch data works wonders.
0 reply
0 recast
2 reactions