Cortensor (cortensor)

Pioneering Decentralized AI Inference, Democratizing Universal Access. #AI #DePIN 🔗 https://linktr.ee/cortensor

29 Followers

Recent casts

PSA: There are NO official tokens for Bardiel or Corgent.
- No #BARDIEL token
- No #CORGENT token
Any token claiming to be “Bardiel” or “Corgent” is not launched or endorsed by us – treat it as a scam. Always verify via official @Cortensor channels before interacting with anything on-chain.

  • 0 replies
  • 0 recasts
  • 0 reactions

Shifting most updates to X for now 📡 We’ll be focusing our public updates and longer “X Articles” on X/Twitter and pausing regular posts here on Farcaster until there’s more reason/engagement to resume. If you want to keep up with Cortensor, please follow us over on X. x.com/Cortensor

  • 0 replies
  • 0 recasts
  • 0 reactions

🛠️ DevLog – Fact-Check API Foundation (WIP, Built on Oracle/Claim-Check Design)

We’ve started implementing the fact-check path described in the recent AI Oracle & Claim-Check design – this is work in progress, but the core shape is now in place.

🔹 What Exists Today (WIP Foundation)
- Single structured endpoint: POST /api/v1/factcheck (alias: /api/v1/fact-check).
- Supports two modes: standard and realtime, each resolving to a dedicated fact-check session via config/env.
- Runs redundant checks across multiple miners (minimum redundancy = 3) and uses Cortensor’s core protocol to aggregate them.
- Returns a stable, structured response with:
  - top-level verdict + confidence
  - per-run / per-miner evidence blocks so agents can inspect how consensus was formed.
- realtime mode already has hooks for web/news grounding and source ingestion, but uses the same schema as standard.

🔹 What’s Still TODO (Next Iterations)
- This is deliberately a foundation layer: contract + consensus semantics first, richer evidence later.
- Upcoming work (not implemented yet, but designed to fit behind the same API):
  - pluggable evidence providers (news, image/OCR, specialized web)
  - source trust scoring and weighting
  - stricter citation validation
  - domain-specific policies (finance, sports, hoaxes, etc.)

Because the API is shaped around mode + policy + consensus + evidence, we can keep iterating internally (more providers, better scoring, tuned policies) without breaking clients that integrate with /api/v1/factcheck today.
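The redundant-run consensus described above can be sketched as a tiny aggregator. Only the endpoint path, the minimum redundancy of 3, and the top-level verdict + confidence fields come from the post; the run structure and field names here are assumptions for illustration:

```python
# Toy majority-vote aggregation over per-miner fact-check runs.
# The per-run schema is hypothetical; a real client would read these
# fields from the /api/v1/factcheck response instead.
from collections import Counter

def aggregate_verdicts(runs, min_redundancy=3):
    """Collapse redundant per-miner verdicts into one verdict + confidence."""
    if len(runs) < min_redundancy:
        raise ValueError("not enough redundant runs for consensus")
    counts = Counter(r["verdict"] for r in runs)
    verdict, votes = counts.most_common(1)[0]
    return {"verdict": verdict, "confidence": votes / len(runs)}

runs = [
    {"miner": "m1", "verdict": "true"},
    {"miner": "m2", "verdict": "true"},
    {"miner": "m3", "verdict": "false"},
]
result = aggregate_verdicts(runs)
```

The per-run blocks in the real response are what let an agent re-derive this kind of tally and audit how consensus was formed.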

  • 0 replies
  • 0 recasts
  • 0 reactions

Top casts

🛠️ DevLog – Testnet-1 Reset Check & L3 Prep

Testnet-1 (COR L3 via RaaS) is now back online after the module reset, and both network and user task flows are running as before. A few node operators have already moved miners over, and things look stable at first pass.

🔹 Testnet-1 – RaaS #L3
- Modules re-deployed, and baseline network/user tasks are confirming end-to-end flow.
- Miners are back on the pool, so we can start watching real traffic again.

🔹 Testnet-1a – Self-Managed #L3
- Still running clean for network tasks after the latest checks.
- Basic user task flow is working with multiple miners connected.

🔹 What’s next
- Run light stress/load tests on both Testnet-1 and Testnet-1a as prep for Phase #2.
- Use the data to compare RaaS vs self-managed L3 behavior before we scale up.

  • 3 replies
  • 2 recasts
  • 3 reactions

🛠️ DevLog – SessionPaymentTable Live (Dynamic Weights Up Next)

Quick follow-up on the new execution fee path:

🔹 The SessionPaymentTable → runtime unit cost wiring is now live and passing initial regression checks with the existing static/fixed unit cost model.
🔹 The dashboard's Unit Cost display is now reading from the same runtime table, so UI + contracts + router are all aligned on pricing.
🔹 Next step is to populate pricing parameters per session config – SLA tier, model class, execution count, and validator depth (PoI / PoUW) – so total cost can be computed as: base unit cost + SLA weight + model weight + execution weight + validator weights.
🔹 Over the coming tests we’ll try different unit prices per execution using this new path, while keeping behavior close to today’s fixed pricing until the dynamic weights are validated.
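The additive cost model above can be written down directly. Only the formula's terms come from the post; the parameter names and numbers here are invented examples:

```python
# Illustrative additive session cost: base unit cost plus SLA, model,
# execution, and validator weights (all values are made-up examples).
def session_unit_cost(base, sla_weight, model_weight, exec_weight, validator_weights):
    return base + sla_weight + model_weight + exec_weight + sum(validator_weights)

# e.g. a hypothetical session where PoI and PoUW each contribute a weight
total = session_unit_cost(
    base=10.0,
    sla_weight=2.0,
    model_weight=3.0,
    exec_weight=1.0,
    validator_weights=[1.5, 0.5],  # PoI, PoUW
)
```

Keeping the terms additive makes it easy to stay close to today's fixed pricing: zero out every weight and you recover the static unit cost.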

  • 1 reply
  • 2 recasts
  • 3 reactions

🛠️ DevLog – New LLM & Embedding Models Surfacing in Dashboard

🔹 Dashboard model picker updated
New inference models 47–55 and embedding models 56–64 are now wired into the dashboard model selector, with the UI detecting and displaying per-model capacity so it’s clear which models are actually runnable when you create a session.

🔹 Inference + validator stack alignment
The additional LLMs expand general routing options, while the embedding models are aimed at upcoming Validator v2/v3 and /validate work, where we need lightweight, reliable vector backends for scoring and matrix calculations.

🔹 Rollout plan
These changes are rolling out to both Testnet-0 and Testnet-1, and we’ll keep exercising the new models through real sessions, tuning which ones become defaults for validators and delegate/validate paths.
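The capacity-aware picker behavior can be sketched in a few lines. The model IDs match the ranges in the post, but the capacity values and data shape are invented for illustration:

```python
# Hypothetical per-model capacity filter, like the dashboard picker:
# only models with available capacity are offered at session creation.
models = {
    47: {"kind": "inference", "capacity": 4},
    55: {"kind": "inference", "capacity": 0},  # wired in, but no miners serving it
    56: {"kind": "embedding", "capacity": 2},
}

def runnable_models(models):
    """Return the IDs a new session could actually run on right now."""
    return sorted(mid for mid, info in models.items() if info["capacity"] > 0)

ids = runnable_models(models)
```

Surfacing capacity alongside the ID is what keeps the picker honest: a model being wired into the selector doesn't mean any miner is currently serving it.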

  • 1 reply
  • 2 recasts
  • 3 reactions

🛠️ DevLog – Next Batch of Ollama Models (47–64, Planned)

Lining up the next wave of #Ollama models for this week – nothing is built yet, but this is the target set we’ll start on.

🔹 47–55: New generative models (planning)
- nemotron-3-nano:30b-gpu
- olmo-3:7b-gpu, olmo-3:32b-gpu
- olmo2:7b-gpu, olmo2:13b-gpu
- orca-mini:3b-gpu, orca-mini:7b-gpu, orca-mini:13b-gpu, orca-mini:70b-gpu

🔹 56–64: Embedding models for Validator v2 experiments (planning)
- nomic-embed-text:v1.5-gpu, mxbai-embed-large:335m-gpu, bge-m3:567m-gpu
- all-minilm:22m-gpu, all-minilm:33m-gpu, embeddinggemma:300m-gpu
- snowflake-arctic-embed:335m-gpu, snowflake-arctic-embed2:568m-gpu, granite-embedding:278m-gpu

🔹 What’s next
- Start building images for 47–64.
- Wire them into cortensord on dev-stable for dedicated-node smoke tests.
- Use the embedding set to A/B test Validator v2’s similarity matrix and ranking behavior.
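The similarity-matrix A/B test mentioned above boils down to comparing the pairwise-cosine matrices each embedding backend produces. A minimal sketch, with toy vectors standing in for real model output:

```python
# Toy pairwise cosine-similarity matrix, the core structure a validator
# would compare across embedding backends; vectors here are invented.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def similarity_matrix(vectors):
    """Pairwise cosine similarities between all embedding vectors."""
    return [[cosine(a, b) for b in vectors] for a in vectors]

vecs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
m = similarity_matrix(vecs)
```

An A/B run would embed the same responses with, say, nomic-embed-text and bge-m3, build both matrices, and check whether the resulting rankings agree.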

  • 2 replies
  • 2 recasts
  • 2 reactions

Onchain profile

Ethereum addresses