Cortensor (cortensor)

Pioneering Decentralized AI Inference, Democratizing Universal Access. #AI #DePIN 🔗 https://linktr.ee/cortensor

30 Followers

Top casts

๐Ÿ› ๏ธ DevLog โ€“ Testnet-1 Reset Check & L3 Prep Testnet-1 (COR L3 via RaaS) is now back online after the module reset, and both network + user task flows are running as before. A few node operators have already moved miners over and things look stable at first pass. ๐Ÿ”น Testnet-1 โ€“ RaaS #L3 - Modules re-deployed and baseline network/user tasks are confirming end-to-end flow. - Miners are back on the pool, so we can start watching real traffic again. ๐Ÿ”น Testnet-1a โ€“ Self-Managed #L3 - Still running clean for network tasks after the latest checks. - Basic user task flow is working with multiple miners connected. ๐Ÿ”น Whatโ€™s next - Run light stress/load tests on both Testnet-1 and Testnet-1a as prep work for Phase #2. - Use the data to compare RaaS vs self-managed L3 behavior before we scale up.

  • 3 replies
  • 2 recasts
  • 3 reactions

๐Ÿ› ๏ธ DevLog โ€“ SessionPaymentTable Live (Dynamic Weights Up Next) Quick follow-up on the new execution fee path: ๐Ÿ”น The SessionPaymentTable โ†’ runtime unit cost wiring is now live and passing initial regression checks with the existing static/fixed unit cost model. ๐Ÿ”น The dashboard's Unit Cost display is now reading from the same runtime table, so UI + contracts + router are all aligned on pricing. ๐Ÿ”น Next step is to populate pricing parameters per session config โ€“ SLA tier, model class, execution count, and validator depth (PoI / PoUW) โ€“ so total cost can be computed as: base unit cost + SLA weight + model weight + execution weight + validator weights. ๐Ÿ”น Over the coming tests weโ€™ll try different unit prices per execution using this new path, while keeping behavior close to todayโ€™s fixed pricing until the dynamic weights are validated.

  • 1 reply
  • 2 recasts
  • 3 reactions
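The additive cost formula in the post can be sketched as follows. This is a minimal illustration, assuming made-up weight values and field names – it is not the actual SessionPaymentTable schema or the real COR pricing parameters.

```python
from dataclasses import dataclass

@dataclass
class SessionConfig:
    sla_tier: str          # e.g. "basic" / "premium" (hypothetical tier names)
    model_class: str       # e.g. "small" / "large" (hypothetical class names)
    execution_count: int   # executions requested for the session
    validator_depth: int   # PoI / PoUW validation layers

# Illustrative per-parameter weights, in cost units per execution unit.
SLA_WEIGHTS = {"basic": 0.0, "premium": 2.0}
MODEL_WEIGHTS = {"small": 0.0, "large": 3.0}
EXECUTION_WEIGHT = 0.5   # added per execution
VALIDATOR_WEIGHT = 1.0   # added per validation layer

def total_cost(base_unit_cost: float, cfg: SessionConfig) -> float:
    """base unit cost + SLA weight + model weight + execution weight + validator weights."""
    return (
        base_unit_cost
        + SLA_WEIGHTS[cfg.sla_tier]
        + MODEL_WEIGHTS[cfg.model_class]
        + EXECUTION_WEIGHT * cfg.execution_count
        + VALIDATOR_WEIGHT * cfg.validator_depth
    )

cfg = SessionConfig("premium", "large", execution_count=4, validator_depth=2)
print(total_cost(10.0, cfg))  # 10 + 2 + 3 + 2 + 2 = 19.0
```

Keeping the formula purely additive over per-parameter weights makes it easy to fall back to today's fixed pricing: set every weight to zero and only the base unit cost remains.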

๐Ÿ› ๏ธ DevLog โ€“ New LLM & Embedding Models Surfacing in Dashboard ๐Ÿ”น Dashboard model picker updated New inference models 47โ€“55 and embedding models 56โ€“64 are now wired into the dashboard model selector, with the UI detecting and displaying per-model capacity so itโ€™s clear which models are actually runnable when you create a session. ๐Ÿ”น Inference + validator stack alignment The additional LLMs expand general routing options, while the embedding models are aimed at upcoming Validator v2/v3 and /validate work, where we need lightweight, reliable vector backends for scoring and matrix calculations. ๐Ÿ”น Rollout plan These changes are rolling out to both Testnet-0 and Testnet-1, and weโ€™ll keep exercising the new models through real sessions, tuning which ones become defaults for validators and delegate/validate paths.

  • 1 reply
  • 2 recasts
  • 3 reactions
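The capacity-aware picker described above boils down to a filter over per-model capacity data. A minimal sketch, assuming a made-up capacity structure (model IDs match the ranges in the post, but the capacity numbers and field names are illustrative, not the dashboard's real data model):

```python
# Toy capacity snapshot: model ID -> metadata. In the real dashboard this
# would come from live per-model capacity detection across nodes.
models = {
    47: {"name": "nemotron-3-nano:30b-gpu", "kind": "inference", "capacity": 3},
    56: {"name": "nomic-embed-text:v1.5-gpu", "kind": "embedding", "capacity": 0},
    57: {"name": "mxbai-embed-large:335m-gpu", "kind": "embedding", "capacity": 5},
}

def runnable_models(catalog: dict, kind: str = None) -> list:
    """Names of models with at least one unit of live capacity, optionally filtered by kind."""
    return [
        m["name"]
        for m in catalog.values()
        if m["capacity"] > 0 and (kind is None or m["kind"] == kind)
    ]

print(runnable_models(models, kind="embedding"))  # ['mxbai-embed-large:335m-gpu']
```

Surfacing only models with nonzero capacity avoids letting users create sessions against models no node can currently serve.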

๐Ÿ› ๏ธ DevLog โ€“ Next Batch of Ollama Models (47โ€“64, Planned) Lining up the next wave of #Ollama models for this week โ€“ nothing is built yet, but this is the target set weโ€™ll start on. ๐Ÿ”น 47โ€“55: New generative models (planning) - nemotron-3-nano:30b-gpu - olmo-3:7b-gpu, olmo-3:32b-gpu - olmo2:7b-gpu, olmo2:13b-gpu - orca-mini:3b-gpu, orca-mini:7b-gpu, orca-mini:13b-gpu, orca-mini:70b-gpu ๐Ÿ”น 56โ€“64: Embedding models for Validator v2 experiments (planning) - nomic-embed-text:v1.5-gpu, mxbai-embed-large:335m-gpu, bge-m3:567m-gpu - all-minilm:22m-gpu, all-minilm:33m-gpu, embeddinggemma:300m-gpu - snowflake-arctic-embed:335m-gpu, snowflake-arctic-embed2:568m-gpu, granite-embedding:278m-gpu ๐Ÿ”น Whatโ€™s next - Start building images for 47โ€“64. - Wire them into cortensord on dev-stable for dedicated-node smoke tests. - Use the embedding set to A/B test Validator v2โ€™s similarity matrix and ranking behavior.

  • 2 replies
  • 2 recasts
  • 2 reactions
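The similarity-matrix and ranking behavior mentioned for Validator v2 can be illustrated with a small sketch. This is an assumption about the general technique (pairwise cosine similarity over response embeddings, ranked by agreement), not Cortensor's actual validator code; the toy vectors stand in for real embedding-model outputs.

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similarity_matrix(embeddings: list) -> list:
    """Pairwise cosine similarities between miner-response embeddings."""
    return [[cosine(a, b) for b in embeddings] for a in embeddings]

def rank_by_agreement(matrix: list) -> list:
    """Rank responses by mean similarity to the others (higher = closer to consensus)."""
    n = len(matrix)
    scores = [(sum(row) - 1.0) / (n - 1) for row in matrix]  # drop self-similarity
    return sorted(range(n), key=lambda i: scores[i], reverse=True)

# Toy example: two near-identical responses and one outlier.
emb = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(rank_by_agreement(similarity_matrix(emb)))  # [1, 0, 2]: the outlier ranks last
```

A/B testing the embedding models then amounts to swapping the vector backend and checking whether the resulting matrix still separates consensus responses from outliers.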
