Gabriel Ayuso
@gabrielayuso.eth
Why do people think context window size is a big limitation for AIs fully replacing humans as coders? Most of us keep a very compressed version of the codebase in our heads and rely on code search and other tools to get the details we need. Coding agents are already starting to do this with grep and similar tools. They just need that initial context to get going, and they can do the rest from there. Giving them a compressed representation of the codebase to reference as a starting point would do the trick, without keeping the entire codebase in the context window.
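
To make that concrete, here's a minimal sketch of what such a compressed representation could look like for a Python codebase, using only the standard library. The name `build_repo_map` and the `ast`-based symbol extraction are illustrative choices for this sketch, not any particular agent's implementation:

```python
import ast
import os

def build_repo_map(root: str, max_symbols: int = 20) -> str:
    """Walk a Python repo and emit a compact map: one line per file,
    listing its top-level classes and functions. The full bodies stay
    on disk; the agent greps for them on demand."""
    lines = []
    for dirpath, _dirs, files in os.walk(root):
        for name in sorted(files):
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                source = f.read()
            try:
                tree = ast.parse(source)
            except SyntaxError:
                continue  # skip files that don't parse
            symbols = [
                node.name
                for node in tree.body
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
            ]
            rel = os.path.relpath(path, root)
            lines.append(f"{rel}: {', '.join(symbols[:max_symbols]) or '(no top-level defs)'}")
    return "\n".join(lines)

# Seed the agent's context with the map instead of the whole codebase;
# it then uses grep/read tools to pull in only the files it needs.
print(build_repo_map("."))
```

A few hundred lines of map can stand in for hundreds of thousands of lines of code, and the agent only pays context for the files it actually opens.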

Lokp Ray
@lokpray
ikr... Context window size was a big deal when ChatGPT was first released, but there's been a lot of development in the past 1-2 years, including but not limited to:
- RAG for live code search
- LoRA & fine-tuning
- Agentic workflows: SWE-agent, MCP
- Context caching

In an agentic future (no hype intended here), the focus has shifted toward various methods of accessing live raw data rather than cramming everything into the context window for processing. Also, we have 1M+ context windows these days from Google/Meta, and that's going to grow (probably exponentially) as they scale up the number of transformer layers in the NN (aka just more compute needed for initial training).
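
For the "RAG for live code search" point, a toy sketch of the retrieval step: embed code chunks, embed the query, and hand the model only the top matches. The hashed bag-of-words `embed` here is just a self-contained stand-in for a real learned embedding model, and the chunk strings are made up for the demo:

```python
import math
import re
from collections import Counter

DIM = 512  # hashed bag-of-words dimension; stand-in for a learned embedding

def embed(text: str) -> list[float]:
    """Toy embedding: hash each token into a fixed-size count vector,
    then L2-normalize so dot product equals cosine similarity."""
    vec = [0.0] * DIM
    for tok, n in Counter(re.findall(r"[A-Za-z_]\w*", text.lower())).items():
        vec[hash(tok) % DIM] += n
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def top_k(query: str, chunks: list[str], k: int = 3) -> list[str]:
    """Retrieve the k chunks most similar to the query, so only those
    chunks (not the whole codebase) enter the model's context."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    '"""Load and validate the YAML config file."""\ndef parse_config(path): ...',
    '"""Retry failed requests with exponential backoff."""\nclass RetryPolicy: ...',
    '"""Vector math helpers."""\ndef dot(a, b): ...',
]
print(top_k("where is retry handling implemented?", chunks, k=1))
```

Same idea as the grep approach above, just with semantic rather than lexical matching; production systems typically combine both.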

Gabriel Ayuso
@gabrielayuso.eth
I don't see much talk about LoRA and fine-tuning these days. It seems most are fine just hitting the base models, since costs have gone down and speeds have gone up. I'm sure folks like Cursor are doing it, though.