Lokp Ray
@lokpray
I was planning to code a bit on the plane and wanted to use a local LLM, only to realize that my laptop's RAM can only run these two DeepSeek models: DeepSeek-Coder-V2-Lite and DeepSeek-R1-Distill-Qwen-1.5B. I thought the future was local LLMs with privacy 🥲 PS: the peak spec for a MacBook Pro with an M-series Max chip is 128GB, but still...
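For context on the RAM ceiling: a rough rule of thumb is that weight memory is parameter count times bytes per parameter, before runtime overhead and the KV cache. A minimal Python sketch; the ~16B total figure for DeepSeek-Coder-V2-Lite and the 1.5B in the distill's name are approximations taken from the model cards:

```python
# Back-of-envelope weight memory: params (billions) * bits per param / 8
# gives GB, ignoring quantization block overhead and the KV cache.

def weights_gb(params_billion: float, bits_per_param: int) -> float:
    """Approximate weight memory in GB."""
    return params_billion * bits_per_param / 8

models = {
    "DeepSeek-Coder-V2-Lite (~16B total)": 16.0,
    "DeepSeek-R1-Distill-Qwen-1.5B": 1.5,
}
for name, params in models.items():
    for bits in (16, 8, 4):
        print(f"{name} @ {bits}-bit: ~{weights_gb(params, bits):.1f} GB")
```

At 4-bit the ~16B model lands around 8GB of weights, which is why it squeaks onto a laptop while anything in the 70B+ class does not.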

Kasra Rahjerdi
@jc4p
i know the deepseek distills sound enticing, but for local stuff you really can't beat the gemma / llama coding fine-tunes
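For anyone following along, a minimal sketch of running one of those fine-tunes locally with llama-cpp-python; the GGUF filename is a placeholder for whichever gemma/llama coding model you actually download:

```python
# Minimal local inference with llama-cpp-python (pip install llama-cpp-python).
# The model_path is a placeholder; point it at any GGUF coding fine-tune.
from llama_cpp import Llama

llm = Llama(
    model_path="./codegemma-7b-it-Q4_K_M.gguf",  # hypothetical local file
    n_ctx=4096,       # context window; bigger costs more RAM for the KV cache
    n_gpu_layers=-1,  # offload all layers to Metal on Apple Silicon
)

resp = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Write a Python function that reverses a linked list."}],
    max_tokens=256,
)
print(resp["choices"][0]["message"]["content"])
```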

Lokp Ray
@lokpray
deepseek coder should be decent? also llama 3 needs more RAM unless i quantize down to 4/8-bit lol + the context window will be tiny since whatever memory is left over has to hold the context. never used gemma, will take a look 🫡
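The tradeoff in that last cast, in back-of-envelope form: quantization shrinks the weights, but the KV cache still grows linearly with context length. The Llama 3 8B shape numbers below (32 layers, 8 KV heads, head dim 128, fp16 cache) are assumptions based on the published architecture:

```python
# Weights shrink with bit width; the fp16 KV cache grows with context length.

def weights_gb(params_billion: float, bits: int) -> float:
    """Approximate weight memory in GB at the given bit width."""
    return params_billion * bits / 8

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int, ctx_tokens: int) -> float:
    """KV cache in GB: 2 tensors (K and V) * 2 bytes/fp16 element, per layer, per token."""
    return 2 * 2 * layers * kv_heads * head_dim * ctx_tokens / 1e9

for bits in (16, 8, 4):
    print(f"llama-3-8b weights @ {bits}-bit: ~{weights_gb(8, bits):.0f} GB")
for ctx in (4096, 8192, 32768):
    print(f"kv cache @ {ctx} tokens: ~{kv_cache_gb(32, 8, 128, ctx):.2f} GB")
```

So on a 16GB machine, an 8B model at 4-bit (~4GB of weights) leaves real room for context, while 8-bit plus a long context starts crowding out everything else.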