https://warpcast.com/~/channel/aichannel
kbc
@kbc
Running Ollama on an M2 with 8GB of RAM was a stupid idea. I have the following under-used machines and I'm wondering if I can take them apart or use them to run a local LLM:
- MacBook Pro 2012, 4GB RAM but could be upgraded. The 2nd CPU slot got burned and I have no idea how to fix it
- MacBook Air 2018 that charges but doesn't turn on
- Pi 4 (currently used for a keyboard synthesizer)
4 replies
0 recast
14 reactions
Colin Charles
@bytebot
These are some old machines, save for the Pi. MacBooks are also notoriously hard to disassemble (especially the Air). I've used this, but not on the machines that you have. Also, replacing the base OS with Linux might help you. Ollama on the Pi will work, but it will be slow… https://github.com/exo-explore/exo
1 reply
0 recast
0 reaction
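[Editor's note: a quick way to check whether the Pi, or any of these machines, is fast enough is to time a single generation against Ollama's local HTTP API. A minimal sketch in Python, assuming Ollama is running on its default port 11434 and that a small model such as gemma2:2b has already been pulled; the model tag is an assumption, not something from the thread.]

```python
import json
import time
import urllib.request

# Ollama serves its HTTP API on localhost:11434 by default.
URL = "http://localhost:11434/api/generate"

# Assumes a small model (e.g. gemma2:2b) has already been pulled.
payload = {
    "model": "gemma2:2b",
    "prompt": "Explain what a Raspberry Pi is in one sentence.",
    "stream": False,
}

start = time.time()
req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)
elapsed = time.time() - start

print(result["response"])
# eval_count divided by wall-clock time gives a rough tokens-per-second figure.
print(f"{result.get('eval_count', 0) / elapsed:.1f} tokens/sec over {elapsed:.1f}s")
```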
Daniel Lombraña
@teleyinex.eth
You need more RAM and a better CPU. Try the Gemma LLM; it might work.
0 reply
0 recast
1 reaction
shoni.eth
@alexpaden
I think your only reasonable bet is Gemma 2B personally; everything else is probably too slow. As noted in another reply, you can also try exo.
0 reply
0 recast
1 reaction
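[Editor's note: for reference, with the official ollama Python client installed (pip install ollama) and the Ollama server running locally, pulling and querying a Gemma 2B build looks roughly like the sketch below. The gemma2:2b tag is an assumption; check `ollama list` for what your install actually offers.]

```python
import ollama  # pip install ollama; talks to a locally running Ollama server

MODEL = "gemma2:2b"  # assumed tag for the 2B Gemma model; adjust as needed

# Download the model weights if they are not already present.
ollama.pull(MODEL)

# Single chat turn; on a Pi 4 or an old MacBook expect this to be slow.
reply = ollama.chat(
    model=MODEL,
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(reply["message"]["content"])
```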