@cortensor
🛠️ DevLog – Ollama Engine Status Check (Ephemeral + Dedicated)
Quick follow-up on the #Ollama path: most of the critical code paths for both dedicated and ephemeral nodes now run end-to-end with the new engine, including LLM Gateway enforcement. A few miner-side quirks + the GPU enablement work remain.
🔹 Looks stable enough to poke at
- Task routing → container spin-up → Ollama inference → result back to router works on both node types (see the sketch after this list).
- #OpenAI #OSS 20B is responding reliably in CPU tests.
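To make the inference leg of that flow concrete, here is a minimal sketch against Ollama's public /api/generate endpoint, assuming a locally running instance on the default port; the model tag, prompt, and function name are placeholders for illustration, not the network's actual router wiring.

```python
import json
import urllib.request

# Hypothetical parameters; the real router/container wiring differs.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL_TAG = "gpt-oss:20b"  # placeholder tag for the 20B OSS model

def run_inference(prompt: str) -> str:
    """Send a single non-streaming generation request to Ollama
    and return the completed response text."""
    payload = json.dumps({
        "model": MODEL_TAG,
        "prompt": prompt,
        "stream": False,  # one JSON object back instead of a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=300) as resp:
        body = json.loads(resp.read().decode("utf-8"))
    return body.get("response", "")

if __name__ == "__main__":
    # Stand-in for: task routing -> container spin-up -> Ollama inference -> result back to router
    print(run_inference("Summarize the current devlog in one sentence."))
```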
🔹 What's left (focused)
- Run /validate on real sessions with this model and compare judgment quality vs existing models.
- Track down remaining GPU issues on Ollama images (device config, concurrency, memory behavior); a small probe sketch follows below.
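One quick way to check whether a loaded model actually landed on the GPU is Ollama's /api/ps endpoint, which reports each loaded model's memory split. This is a debugging sketch only, assuming the default local port; treating size_vram == 0 as "fell back to CPU" is our heuristic, not something baked into the images.

```python
import json
import urllib.request

OLLAMA_PS_URL = "http://localhost:11434/api/ps"  # assumes default local Ollama port

def report_gpu_placement() -> None:
    """Print how much of each loaded model sits in VRAM vs. host RAM.
    A size_vram of 0 suggests the model is running CPU-only."""
    with urllib.request.urlopen(OLLAMA_PS_URL, timeout=10) as resp:
        data = json.loads(resp.read().decode("utf-8"))
    for model in data.get("models", []):
        total = model.get("size", 0)
        vram = model.get("size_vram", 0)
        placement = "GPU" if vram > 0 else "CPU (check device config)"
        print(f"{model.get('name', '?')}: {vram}/{total} bytes in VRAM -> {placement}")

if __name__ == "__main__":
    report_gpu_placement()
```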
Once those are understood + regression-tested, we'll decide how/where to expose these larger models in the network.