Pioneering Decentralized AI Inference, Democratizing Universal Access. #AI #DePIN https://linktr.ee/cortensor
🛠️ DevLog – Testnet-1 Reset Check & L3 Prep

Testnet-1 (COR L3 via RaaS) is now back online after the module reset, and both network and user task flows are running as before. A few node operators have already moved miners over, and things look stable at first pass.

🔹 Testnet-1 – RaaS #L3
- Modules re-deployed; baseline network/user tasks are confirming end-to-end flow.
- Miners are back on the pool, so we can start watching real traffic again.

🔹 Testnet-1a – Self-Managed #L3
- Still running clean for network tasks after the latest checks.
- Basic user task flow is working with multiple miners connected.

🔹 What's next
- Run light stress/load tests on both Testnet-1 and Testnet-1a as prep for Phase #2.
- Use the data to compare RaaS vs self-managed L3 behavior before we scale up.
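The "light stress/load test" step above could be sketched with a small concurrent harness. Everything here is hypothetical: `send_task` is a stand-in that sleeps instead of submitting a real network/user task, and the numbers are illustrative defaults, not actual test parameters.

```python
# Hypothetical light load-test harness for the Testnet-1 / Testnet-1a prep work.
# send_task is a toy stand-in; a real run would replace it with an actual
# session/task submission against the node under test.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def send_task(i: int) -> float:
    """Stand-in task: measures round-trip time of a simulated request."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulate network + execution latency
    return time.perf_counter() - start

def run_load(total_tasks: int = 50, concurrency: int = 10) -> dict:
    """Fire tasks concurrently and summarize latency -- the kind of baseline
    needed before comparing RaaS vs self-managed L3 behavior."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(send_task, range(total_tasks)))
    return {
        "count": len(latencies),
        "p50": statistics.median(latencies),
        "max": max(latencies),
    }

stats = run_load()
print(stats["count"], round(stats["p50"], 3))
```

Running the same harness against both testnets with identical `total_tasks`/`concurrency` would give directly comparable p50/max figures.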
🛠️ DevLog – SessionPaymentTable Live (Dynamic Weights Up Next)

Quick follow-up on the new execution fee path:

🔹 The SessionPaymentTable → runtime unit cost wiring is now live and passing initial regression checks with the existing static/fixed unit cost model.
🔹 The dashboard's Unit Cost display now reads from the same runtime table, so UI, contracts, and router are all aligned on pricing.
🔹 Next step is to populate pricing parameters per session config – SLA tier, model class, execution count, and validator depth (PoI / PoUW) – so total cost can be computed as: base unit cost + SLA weight + model weight + execution weight + validator weights.
🔹 Over the coming tests we'll try different unit prices per execution using this new path, while keeping behavior close to today's fixed pricing until the dynamic weights are validated.
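The additive formula above can be illustrated with a minimal sketch. All names, tiers, and weight values here are made up for illustration; the real SessionPaymentTable schema and weights live in the contracts/router, not in this example.

```python
# Hypothetical sketch of the additive session pricing described above.
# Weight tables are illustrative placeholders, not real pricing data.
SLA_WEIGHTS = {"basic": 0, "standard": 2, "premium": 5}
MODEL_WEIGHTS = {"small": 1, "medium": 3, "large": 8}
VALIDATOR_WEIGHTS = {"poi": 1, "pouw": 2}

def session_cost(base_unit_cost: int,
                 sla_tier: str,
                 model_class: str,
                 execution_count: int,
                 validators: list[str],
                 per_execution_weight: int = 1) -> int:
    """total = base unit cost + SLA weight + model weight
             + execution weight + validator weights."""
    return (
        base_unit_cost
        + SLA_WEIGHTS[sla_tier]
        + MODEL_WEIGHTS[model_class]
        + execution_count * per_execution_weight
        + sum(VALIDATOR_WEIGHTS[v] for v in validators)
    )

# 10 base + 2 SLA + 3 model + 4 executions + (1 + 2) validators = 22
print(session_cost(10, "standard", "medium", 4, ["poi", "pouw"]))
```

Keeping each component a separate table makes it easy to hold everything at today's fixed pricing (all weights zero) while the dynamic weights are validated.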
🛠️ DevLog – New LLM & Embedding Models Surfacing in Dashboard

🔹 Dashboard model picker updated
New inference models 47–55 and embedding models 56–64 are now wired into the dashboard model selector, with the UI detecting and displaying per-model capacity so it's clear which models are actually runnable when you create a session.

🔹 Inference + validator stack alignment
The additional LLMs expand general routing options, while the embedding models are aimed at upcoming Validator v2/v3 and /validate work, where we need lightweight, reliable vector backends for scoring and matrix calculations.

🔹 Rollout plan
These changes are rolling out to both Testnet-0 and Testnet-1, and we'll keep exercising the new models through real sessions, tuning which ones become defaults for validators and delegate/validate paths.
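The "per-model capacity" behavior in the picker could be sketched roughly as below. The model entries and capacity numbers are invented for illustration and do not reflect real dashboard data or its actual API.

```python
# Hypothetical sketch of capacity-aware model filtering in a dashboard picker:
# only surface models that currently have at least one node able to run them.
models = [
    {"id": 47, "name": "nemotron-3-nano:30b-gpu", "capacity": 3},
    {"id": 56, "name": "nomic-embed-text:v1.5-gpu", "capacity": 5},
    {"id": 99, "name": "example-offline-model", "capacity": 0},
]

def runnable_models(models: list[dict]) -> list[dict]:
    """Keep only models with live capacity, best-served first."""
    live = [m for m in models if m["capacity"] > 0]
    return sorted(live, key=lambda m: m["capacity"], reverse=True)

for m in runnable_models(models):
    print(m["id"], m["name"], m["capacity"])
```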
🛠️ DevLog – Next Batch of Ollama Models (47–64, Planned)

Lining up the next wave of #Ollama models for this week – nothing is built yet, but this is the target set we'll start on.

🔹 47–55: New generative models (planning)
- nemotron-3-nano:30b-gpu
- olmo-3:7b-gpu, olmo-3:32b-gpu
- olmo2:7b-gpu, olmo2:13b-gpu
- orca-mini:3b-gpu, orca-mini:7b-gpu, orca-mini:13b-gpu, orca-mini:70b-gpu

🔹 56–64: Embedding models for Validator v2 experiments (planning)
- nomic-embed-text:v1.5-gpu, mxbai-embed-large:335m-gpu, bge-m3:567m-gpu
- all-minilm:22m-gpu, all-minilm:33m-gpu, embeddinggemma:300m-gpu
- snowflake-arctic-embed:335m-gpu, snowflake-arctic-embed2:568m-gpu, granite-embedding:278m-gpu

🔹 What's next
- Start building images for 47–64.
- Wire them into cortensord on dev-stable for dedicated-node smoke tests.
- Use the embedding set to A/B test Validator v2's similarity matrix and ranking behavior.
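The similarity-matrix idea behind that A/B test can be sketched in miniature: embed each miner response, build pairwise cosine similarities, and rank responses by average agreement so outliers sink. The `embed` function here is a deliberately toy stand-in (a character histogram), not any of the real embedding models listed above.

```python
# Hypothetical sketch of a Validator-style similarity matrix and ranking.
# embed() is a toy deterministic stand-in for a real embedding backend.
import math

def embed(text: str, dims: int = 8) -> list[float]:
    """Toy embedding: L2-normalized character histogram (stand-in only)."""
    vec = [0.0] * dims
    for ch in text:
        vec[ord(ch) % dims] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

def rank_by_agreement(responses: list[str]) -> list[int]:
    """Return response indices sorted by mean similarity to all others."""
    vecs = [embed(r) for r in responses]
    scores = [
        sum(cosine(vecs[i], vecs[j]) for j in range(len(vecs)) if j != i)
        / (len(vecs) - 1)
        for i in range(len(vecs))
    ]
    return sorted(range(len(responses)), key=lambda i: scores[i], reverse=True)

# Two agreeing answers should outrank the dissenting one.
order = rank_by_agreement(["the answer is 42", "the answer is 42!", "zzzz"])
print(order)
```

An A/B run would swap different embedding backends into `embed` and compare how stably each one pushes outlier responses to the bottom of the ranking.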