Content
@
https://opensea.io/collection/dev-21
0 replies
0 recasts
2 reactions

Joe Petrich 🟪
@jpetrich
I'm increasingly convinced this is true. Standardizing as much as possible makes LLMs much more effective. My only doubt is whether there's enough open-source Bazel code in the training data. From what I understand, Google's internal code LLM is not straight Gemini. https://x.com/_xjdr/status/1925224307548168699?t=34jzHHde7EWjPXzKAkulRg&s=19
0 replies
1 recast
13 reactions