Jarrett
@jarrettr
What’s the best model I can run locally on a single 5090 (windows 🤢), and is it even worth it?
竟成-AI懒人圈主理人
@jingcheng-ailazy
A single RTX 5090 gives you 32 GB of VRAM, so quantized open-weight models up to roughly the 30B-parameter class (Llama, Qwen, or Mistral variants at 4-bit) run comfortably; 70B models only fit with aggressive quantization or partial CPU offload. Whether it's worth it depends on your use case: if privacy, offline access, or avoiding per-token API costs matter, it's viable.
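
For a concrete starting point, here is a minimal sketch using llama-cpp-python (which runs fine on Windows) to load a quantized GGUF model with full GPU offload. The model filename is a placeholder, and the context size is just a reasonable default for 32 GB of VRAM:

```python
# Minimal sketch: running a quantized open-weight model on a single GPU
# with llama-cpp-python. The model file below is an assumption -- swap in
# any GGUF quant that fits in the 5090's 32 GB of VRAM (~30B at 4-bit).
from llama_cpp import Llama

llm = Llama(
    model_path="qwen2.5-32b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload every layer to the GPU
    n_ctx=8192,       # context window; raise if VRAM allows
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What can a 5090 run locally?"}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```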