downshift. πŸŽοΈπŸ’¨ pfp
downshift. πŸŽοΈπŸ’¨
@downshift.eth
what hardware should i buy to run LLMs locally? naive question, i know, but i don't have a good sense of where to start. are there useful public guides for deciding? the state of the art is changing rapidly, so figured i'd ask here first...
4 replies
0 recast
5 reactions

Colin Charles
@bytebot
have you looked at ollama? it can run even on reasonably simple machines, but it is slow if cpu only. you could get a good mac studio, or even string some machines together with exolabs. guess i have to ask - what is the budget, and what kind of model sizes are you planning to run?
1 reply
0 recast
1 reaction
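
(A minimal sketch of what the ollama route above can look like once the server is running and a model has been pulled locally, using the ollama Python client; the model name and prompt here are placeholders, not from the thread:)

```python
# minimal sketch: talk to a locally served model via the ollama Python client
# assumes `ollama serve` is running and a model has been pulled, e.g. `ollama pull llama3`
import ollama

response = ollama.chat(
    model="llama3",  # placeholder; any model pulled locally works
    messages=[{"role": "user", "content": "Why does unified memory matter for running LLMs locally?"}],
)
print(response["message"]["content"])
```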

Stephan
@stephancill
Best resource for running LLMs at home https://www.reddit.com/r/LocalLLaMA
1 reply
0 recast
1 reaction

π‘Άπ’•π’•π’ŠπŸ—Ώβœ¨ pfp
π‘Άπ’•π’•π’ŠπŸ—Ώβœ¨
@toyboy.eth
Why do I feel like @alexpaden or @vrypan.eth can give an accurate answer to this
1 reply
0 recast
2 reactions

shoni.eth
@alexpaden
You can also daisy chain Mac minis, I can't remember the app for that @bleu.eth
0 reply
0 recast
0 reaction

shoni.eth
@alexpaden
I use a maxed out Mac Studio, which is the best non-GPU option iirc, and the cheapest RAM too
0 reply
0 recast
0 reaction
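
(For sizing RAM against model size, a rough back-of-envelope: weights take roughly parameter count times bytes per parameter at the chosen quantization, plus runtime overhead for the KV cache and buffers. A small sketch, where the ~20% overhead factor is an assumption, not a measured figure:)

```python
# rough back-of-envelope for how much memory a model's weights need locally
def approx_weight_gb(params_billion: float, bits_per_param: float, overhead: float = 1.2) -> float:
    # parameters * bytes-per-parameter, padded by a rough ~20% overhead assumption
    # for KV cache and runtime buffers (varies a lot in practice)
    bytes_total = params_billion * 1e9 * (bits_per_param / 8) * overhead
    return bytes_total / 1e9

# e.g. a 70B model at 4-bit quantization lands around ~40 GB of weights,
# which is why high-unified-memory Macs come up in these threads
print(f"{approx_weight_gb(70, 4):.0f} GB")
```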