downshift. 🏎️💨
@downshift.eth
what hardware should i buy to run LLMs locally? naive question, i know, but i don't have a good sense of where to start. are there useful public guides for deciding? the state of the art is changing rapidly, so figured i'd ask here first...
Colin Charles
@bytebot
have you looked at ollama? it can run even on reasonably simple machines, but it is slow if cpu-only. you could get a good mac studio, or even string some machines together with exolabs. guess i have to ask - what is the budget, and what kind of model sizes are you planning to run?
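As a concrete starting point, here is a minimal sketch of what a first local test with ollama can look like, assuming the ollama Python client (pip install ollama), a locally running ollama server, and a model that has already been pulled; the model name below is only an example.

```python
# Minimal local-LLM smoke test via the ollama Python client.
# Assumes: `pip install ollama`, the ollama server is running,
# and a model has been pulled beforehand (e.g. `ollama pull llama3`).
# "llama3" is only an example model name.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Say hello in five words."}],
)
print(response["message"]["content"])
```

On a CPU-only box this will still run, just slowly, which is the trade-off described above.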
downshift. 🏎️💨
@downshift.eth
awesome pointers! i love the mac studio, but wonder if it's cost-efficient in $/flop (i would also use it for audio production, tho). i'm still early in my research but will follow up when i have a better idea on model size. i don't have a firm budget just yet. ty 🙏🏼
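On the $/flop question: local inference decoding is typically memory-bandwidth-bound rather than compute-bound, so a rough tokens-per-second ceiling comes from dividing memory bandwidth by the size of the model weights. A back-of-envelope sketch, where every number is an illustrative assumption rather than a measured spec:

```python
# Back-of-envelope estimate for local decode speed. Each generated token
# has to stream the model weights through memory once, so a rough ceiling
# is tokens/sec ~= memory bandwidth / model size in bytes.
# All numbers below are illustrative assumptions, not measured specs.

def tokens_per_sec(bandwidth_gb_s: float, params_b: float, bytes_per_param: float) -> float:
    """Rough upper bound on decode speed for a dense model."""
    model_gb = params_b * bytes_per_param  # e.g. 70B params at 4-bit ~= 35 GB
    return bandwidth_gb_s / model_gb

# Hypothetical comparison: a mac studio-class machine with fast unified
# memory vs. a single consumer gpu with higher bandwidth but far less VRAM.
for name, bw in [("mac studio (assumed 800 GB/s)", 800.0),
                 ("consumer gpu (assumed 1000 GB/s)", 1000.0)]:
    print(f"{name}: ~{tokens_per_sec(bw, 70, 0.5):.0f} tok/s for a 70B 4-bit model")
```

This is why unified memory capacity and bandwidth, rather than raw flops, tend to decide which model sizes are actually usable on a given machine.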
Colin Charles
@bytebot
yeah, feel free to holler here or in DMs. the fact that you have another use for it helps lighten the budget load. the key is knowing what you're trying to achieve with a local llm.