iain

@iain

LLM-driven QA: if an LLM thinks the naming, fields, or API structure should be different on first pass, maybe there's something there. Granted, it may be based on incorrect training data, but it can be an interesting way to surface naming that feels normal for internal systems and slightly strange for external ones.

Another example is missing fields: I've had a few LLMs hallucinate fields that customers had actually asked for, and since they were possible to add, we added them. It's essentially machine QA for developers, instead of waiting for feedback from developers on how to use an SDK.

I noticed this while building examples for the Zora SDK, and I've found LLMs to be a hidden SDK design partner. Instead of being annoyed when an LLM messes up a field or API structure, take it as feedback: maybe your users also want that field, or find that name confusing. That's not just random noise; it could be a hidden insight. An LLM getting confused while looking up docs is useful too, since it helps pinpoint which parts of the documentation to improve.

The key is staying open to those unexpected suggestions while confirming the findings with others on your team, because sometimes hallucinations are just that: hallucinations.
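One way this workflow could look in practice: diff the fields an LLM guesses at against your SDK's actual surface, and treat the mismatches as candidate feature requests or naming feedback. This is a minimal hypothetical sketch; the function and field names here are made up for illustration, not part of the Zora SDK.

```typescript
// Hypothetical sketch: treat LLM hallucinations as a structured QA signal.
// `actualFields` would come from your real SDK types; these names are invented.

type FieldReport = { hallucinated: string[]; matched: string[] };

function diffLlmFields(llmSuggested: string[], actualFields: string[]): FieldReport {
  const actual = new Set(actualFields);
  const hallucinated: string[] = [];
  const matched: string[] = [];
  for (const field of llmSuggested) {
    // A field the LLM expected but the SDK lacks is worth reviewing with the team:
    // it might be a missing feature, or it might just be a hallucination.
    (actual.has(field) ? matched : hallucinated).push(field);
  }
  return { hallucinated, matched };
}

// Example: the LLM "invented" creatorRewards; maybe users want it too.
const report = diffLlmFields(
  ["tokenUri", "creatorRewards", "maxSupply"],
  ["tokenUri", "maxSupply", "pricePerToken"],
);
console.log(report.hallucinated); // fields to review as possible feature requests
```

The point isn't the diff itself but logging these mismatches over time, so recurring hallucinations stand out from one-off noise.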