https://warpcast.com/~/channel/aichannel
kbc
@kbc
Running ollama on an M2 with 8 GB of RAM was a stupid idea. I've got the following under-used machines and I'm wondering if I can take them apart or use them to run a local LLM:
- MacBook Pro 2012, 4 GB RAM but could be upgraded; the 2nd CPU slot got burned and I have no idea how to fix it
- MacBook Air 2018 that charges but doesn't turn on
- Pi 4 (currently used for a keyboard synthesizer)
4 replies
0 recast
14 reactions
Colin Charles
@bytebot
These are some old machines, save for the Pi. MacBooks are also notoriously hard to disassemble (especially the Air). I've used this (exo, linked below), but not on the machines that you have. Also, replacing the base OS with Linux might help you. Ollama on the Pi will work but will be slow… https://github.com/exo-explore/exo
1 reply
0 recast
0 reaction
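If you do end up trying Ollama on the Pi 4, a quick sanity check is to hit Ollama's local REST API with a very small model. This is only a minimal sketch under assumptions: the model tag "tinyllama" is an illustrative choice, and the server is assumed to be running on Ollama's default port 11434.

```python
# Minimal sketch: ask a local Ollama server for a short completion.
# Assumes `ollama serve` is running on the default port 11434 and that a
# small model (here the illustrative choice "tinyllama") has been pulled
# with `ollama pull tinyllama`.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"


def ask(prompt: str, model: str = "tinyllama") -> str:
    """Send one non-streaming generate request and return the text."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,  # a Pi 4 will be slow, so allow plenty of time
    )
    resp.raise_for_status()
    return resp.json()["response"]


if __name__ == "__main__":
    print(ask("Say hello in five words."))
```

If that round-trips at all, you at least know the Pi can hold the model in memory; the speed question is a separate one.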
kbc
@kbc
I'll need to decide if this is just a nice side project to try out something new, and if yes, what I will learn. Someone else already planted the idea in my mind to replace the OS with Linux.
1 reply
0 recast
0 reaction
Colin Charles
@bytebot
Your machines are far too old for current macOS. But ask yourself: what is the purpose of running a local LLM? Your M2 with 8 GB of RAM could run a small Mistral pretty easily, I reckon. Slow token output, but it should be “ok”.
1 reply
0 recast
0 reaction
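To put a rough number on "slow token output, but should be ok", here is a sketch that measures tokens per second against a local Ollama instance on the M2. Assumptions: the model tag "mistral" has been pulled, and the eval_count / eval_duration fields are taken from Ollama's non-streaming response; treat the resulting figure as machine-dependent.

```python
# Rough tokens/sec check for a small model via Ollama's local API.
# Assumes `ollama pull mistral` has been run and the server is on the
# default port 11434.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"


def tokens_per_second(model: str = "mistral") -> float:
    """Run one non-streaming generation and compute eval throughput."""
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": model,
            "prompt": "Explain in two sentences what a Raspberry Pi is.",
            "stream": False,
        },
        timeout=600,
    )
    resp.raise_for_status()
    data = resp.json()
    # eval_duration is reported in nanoseconds.
    return data["eval_count"] / (data["eval_duration"] / 1e9)


if __name__ == "__main__":
    print(f"~{tokens_per_second():.1f} tokens/sec")
```

A single-digit tokens/sec result is usable for short prompts; anything lower probably means the model is swapping on 8 GB of RAM.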