Darryl Yeo đŸ› ïž
@darrylyeo
LLMs are good at replicating state-of-the-art frontend code from a decade ago, and it shows. Fifty lines of added imperative runtime code to implement a basic UI interaction that a declarative, newly Baseline browser built-in handles in three. It's a real uphill battle. A gravity well pulling all my web apps toward verbosity, redundancy and mediocrity by default – because for better or worse, that's the historically dominant software development culture represented in the training data, and it's now not-so-subtly being imposed on me. I can only hope an inflection point comes where specialized model training is actually cost-effective and we can force the LLMs to "unlearn" entire corpora of old training data through exclusion, so we don't have to keep wasting precious context tokens correcting the few truly decent and up-to-date generalist models we currently have with oodles of docs embeddings and system prompt overrides. https://farcaster.xyz/polymutex.eth/0x43c25e74 https://farcaster.xyz/polymutex.eth/0x206cea70
4 replies
1 recast
14 reactions
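
For concreteness, a minimal sketch of the fifty-versus-three-lines gap described above, assuming the "newly Baseline browser built-in" is something like the Popover API – the choice of feature is an assumption, not stated in the cast:

    <!-- Declarative: the browser supplies the toggle, light-dismiss, Escape handling, and top-layer stacking -->
    <button popovertarget="menu">Open menu</button>
    <div id="menu" popover>Menu contents</div>

The imperative equivalent typically reimplements all of that by hand: click and keydown listeners, an outside-click check, focus management, and z-index bookkeeping, which is roughly where the extra fifty lines come from.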

polymutex
@polymutex.eth
"Gravity well" and "uphill battle" have nice similarities to latent space representation and gradient descent. By that analogy (or by that "token", if you will), we need the ability to terraform latent space to our needs.
1 reply
1 recast
2 reactions

Darryl Yeo đŸ› ïž
@darrylyeo
100%!
0 replies
0 recasts
0 reactions