Casey Matt
@case
I don't think I'm leaning into the probabilistic nature of LLMs enough... https://x.com/OpenAIDevs/status/1933617093938692164
1 reply
0 recast
0 reaction
Casey Matt
@case
I see stuff like this: an agent that uses "Darwinian exploration" / open-ended algorithms to generate variations of itself, then runs coding benchmarks against the variants to measure their performance https://sakana.ai/dgm/
1 reply
0 recast
0 reaction
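The loop described above can be sketched as a toy evolutionary search. This is a hedged illustration, not Sakana's actual system: here an "agent" is just a list of weights, "self-modification" is random mutation, and the "coding benchmark" is a stand-in fitness function — every name below is illustrative.

```python
import random

def benchmark(agent):
    # Stand-in for running a coding benchmark against an agent variant:
    # higher is better; the optimum is all weights equal to 1.0.
    return -sum((w - 1.0) ** 2 for w in agent)

def mutate(agent, rng):
    # Generate a variation of the agent (the "self-modification" step).
    return [w + rng.gauss(0, 0.1) for w in agent]

def darwinian_search(generations=200, pop_size=8, seed=0):
    rng = random.Random(seed)
    # Open-ended archive: start from one random agent, keep variants around
    # rather than discarding everything but the single best.
    archive = [[rng.uniform(-1, 1) for _ in range(4)]]
    for _ in range(generations):
        parent = max(archive, key=benchmark)              # best agent so far
        children = [mutate(parent, rng) for _ in range(pop_size)]
        archive.extend(children)                          # add all variants
        archive = sorted(archive, key=benchmark)[-32:]    # cap archive size
    return max(archive, key=benchmark)

best = darwinian_search()
```

The benchmark-in-the-loop is the key idea: variants aren't judged by how they were generated, only by their measured score.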
Casey Matt
@case
And this, where a guy translated a compression library from C to Rust by having agents generate un-guided fuzz tests over all possible inputs until the Rust output exactly matched the C output. I thought his bottom-up approach was neat: port the symbols at the bottom of the call graph first, then gradually work up through the codebase's levels of abstraction. Normally your instinct might be to start with the high-level function and work your way down through its children until it works https://rjp.io/blog/2025-06-17-unreasonable-effectiveness-of-fuzzing
1 reply
0 recast
0 reaction
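The differential-fuzzing idea above can be sketched in a few lines: throw random inputs at the reference implementation and the port, and accept the port only when the outputs match exactly. This is a hedged stand-in for the post's C/Rust pair — here the "C reference" is `zlib.crc32` and the "port" is a hand-written CRC-32, both illustrative.

```python
import random
import zlib

def crc32_port(data: bytes) -> int:
    # The "port" under test: a hand-written CRC-32 (reflected,
    # polynomial 0xEDB88320), which should match zlib.crc32 exactly.
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 * (crc & 1))
    return crc ^ 0xFFFFFFFF

def differential_fuzz(reference, port, rounds=500, seed=0):
    # Un-guided fuzzing: uniformly random byte strings, no coverage feedback.
    rng = random.Random(seed)
    for _ in range(rounds):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(64)))
        if reference(data) != port(data):
            return data    # counterexample: the two implementations diverge
    return None            # no divergence found in this budget

mismatch = differential_fuzz(zlib.crc32, crc32_port)
```

A real port would run this per symbol, starting at the leaves of the call graph, so each function is verified against its C counterpart before anything that calls it is translated.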