@typeof.eth
Spent some time yesterday setting up a local LLM server. It's running Gemma 3 27B and Qwen 3 Coder 30B, both with 64k context windows, served through Ollama and Open WebUI and accessible from anywhere via Tailscale.
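For anyone curious what talking to it looks like afterward: here's a minimal sketch of hitting the Ollama API over the tailnet from Python. The hostname and model tag are placeholders, and since Ollama defaults to a much smaller context, the 64k window is requested per call via num_ctx.

```python
import requests

# Tailscale MagicDNS name of the server -- placeholder, use your machine's name.
OLLAMA_URL = "http://llm-box:11434/api/generate"

resp = requests.post(
    OLLAMA_URL,
    json={
        "model": "qwen3-coder:30b",      # tag as pulled with `ollama pull` (assumed)
        "prompt": "Write a binary search in Python.",
        "stream": False,                 # single JSON response instead of a stream
        "options": {"num_ctx": 65536},   # ask for the 64k context window
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```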
The process was way simpler than anticipated.