typeof.eth
@typeof.eth
Debating getting a DGX Spark to offload some AI compute to a local LLM. Does anyone have better recommendations?