vrypan |--o--|
@vrypan.eth
I think that many of the problems devs face when coding with LLMs can be traced back to the context window. My uneducated guess is that what we usually describe as "when the code gets complex, the LLM starts breaking things" happens because the agent can't fit the whole codebase in its context, and it has no good way to decide what the right context is for the task we give it.

Do you think we will see LLM-friendly or LLM-optimized programming languages? What would they look like? For example, humans break their code down into packages, libraries, etc. to manage and maintain it more efficiently. Would an LLM-optimized language do something similar, but break code down into units that fit in a context window? Maybe it would be designed so that the source code requires fewer tokens (even if it's not human-friendly)? Or have a way to efficiently "summarize" the functionality (api/interface) of a code unit so it can be used in other units?

Are there any projects working on something like this?
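(To make the "summarize the api/interface" idea concrete: here is a minimal sketch using Python's stdlib `ast` module that strips a module down to signatures and first-line docstrings, so an agent could load this compact summary instead of the full source. The example module and all names are hypothetical, just for illustration.)

```python
import ast
import textwrap

def summarize_interface(source: str) -> str:
    """Reduce a Python module to a compact interface summary:
    top-level function/class signatures plus one-line docstrings."""
    tree = ast.parse(source)
    lines = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node)
            comment = f"  # {doc.splitlines()[0]}" if doc else ""
            lines.append(f"def {node.name}({ast.unparse(node.args)}){comment}")
        elif isinstance(node, ast.ClassDef):
            lines.append(f"class {node.name}:")
            for item in node.body:
                if isinstance(item, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    lines.append(f"    def {item.name}({ast.unparse(item.args)})")
    return "\n".join(lines)

# A made-up module to summarize:
module = textwrap.dedent('''
    def fetch(url, timeout=30):
        """Download a URL and return its body as bytes."""
        ...

    class Cache:
        def get(self, key):
            ...
        def put(self, key, value):
            ...
''')

print(summarize_interface(module))
```

The summary keeps the parts another code unit needs (names, parameters, intent) while dropping the implementation, which is the bulk of the tokens.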
5 replies
0 recasts
6 reactions
akshaan
@akshaan
Even when the context window is large enough to fit all the code, it's likely that model quality degrades as the input size increases. There's been some interesting evidence of this effect: https://research.trychroma.com/context-rot
0 replies
0 recast
1 reaction