James

@jimmysb1

I am putting together 3 TB of GPU memory to run 3 concurrent Llama 3.1 405B models - mainly to have them cross-reference and edit each other's output and do their own coding... so I want redundancy in the system. Currently running two shitty AMD systems with two Llama 3 70B models. Any hardware suggestions besides Nvidia as the base GPUs, and any suggestions on GitHub software to run them and make them agents? Currently using a crappy Ollama interface on both. Roughly what I mean by cross-editing, sketched below against Ollama's local HTTP API (default port 11434) - the model tags are just placeholders for whatever `ollama list` shows:
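
```python
import requests

# Default endpoint Ollama serves on each box; point at a different
# host to spread the writer/reviewer roles across machines.
OLLAMA_URL = "http://localhost:11434/api/generate"

def generate(model: str, prompt: str) -> str:
    """Send a prompt to a local Ollama model and return its full response."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=600,  # big models can take a while per turn
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Placeholder tags -- substitute the models actually pulled locally.
WRITER = "llama3.1:70b"
REVIEWER = "llama3.1:70b"

task = "Write a Python function that merges two sorted lists."
draft = generate(WRITER, task)
critique = generate(
    REVIEWER,
    f"Review this code for bugs and suggest fixes:\n\n{draft}",
)
revised = generate(
    WRITER,
    f"Task: {task}\n\nCritique of your draft:\n{critique}\n\nRevise your code.",
)
print(revised)
```

Once the 405B boxes are up, the same loop works with each call aimed at a different machine's port 11434, so one model drafts while another reviews and no single system is a point of failure.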