Stephan
@stephancill
Does anyone with 100 AI agents running simultaneously and a $10k Cursor bill actually ship?
3 replies
1 recast
20 reactions
Harris
@harris-
At that point does it make sense to just invest in a local AI setup?
1 reply
0 recast
1 reaction
Stephan
@stephancill
Good question. Do open source models compete with SOTA right now?
3 replies
0 recast
0 reaction
Colin Charles
@bytebot
You can run DeepSeek, Qwen, or Llama locally via Ollama, or buy a beefy Mac mini/Studio. I haven't found it fast enough with what I have, but it's handy that it works with OpenAI-compatible tooling
2 replies
0 recast
0 reaction
Harris
@harris-
I haven't been following closely, but it probably depends on the task. I'd be curious to see someone compare a local vector DB + RAG/MCP setup for specific tasks against big foundation models via API, to see whether you can build around your data set or problem space for cheaper*, or at least figure out how long you'd take to break even on the 80/20 of your actual use case. Maybe use the expensive stuff as a fallback even (a sketch of that follows below)
1 reply
0 recast
0 reaction
tip.md | 1st Crypto BMAC
@tipdotmd
Seems like DeepSeek R1 0528 is up there with the others, and you could run it locally with some effort
0 reply
0 recast
0 reaction