web3 analyst | investor. writing w/ purpose → x highlights | mievangelist @mitosisorg | ambassador @ritualnet @tacbuild 🔮🎴
ritual is building a model marketplace. everyone's obsessed with the "AI on-chain" narrative, but here's an interesting angle: ritual is creating infrastructure where you can prove you own an AI model, track its lineage, and monetize it with verifiable on-chain provenance.

vtune is their system for this. it uses a combination of watermarking and zero-knowledge proofs to let model creators prove ownership even after the model gets fine-tuned or modified by others.

just think about the problem: you spend months training a model, someone downloads it, fine-tunes it slightly, and claims it's theirs. right now? proving ownership is a pain. watermarks can be stripped. hashes change after any modification.

vtune embeds verifiable provenance into the model itself through backdooring techniques combined with ZK proofs. even if someone fine-tunes your model, you can cryptographically prove the derivative came from your original work.
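a toy sketch of the backdoor-watermark idea described above (all names and thresholds here are hypothetical illustrations, not vtune's actual API): the owner derives secret trigger inputs and trains the model to emit fixed target outputs on them. a fine-tuned copy usually still answers the triggers, so the owner can later prove provenance by demonstrating (or ZK-proving knowledge of) the trigger set.

```python
import hashlib

def make_triggers(owner_secret: str, n: int = 4) -> dict:
    """derive secret trigger -> target pairs from the owner's secret."""
    pairs = {}
    for i in range(n):
        h = hashlib.sha256(f"{owner_secret}:{i}".encode()).hexdigest()
        pairs[f"trigger-{h[:8]}"] = f"target-{h[8:16]}"
    return pairs

def verify_ownership(model, pairs: dict, threshold: float = 0.75) -> bool:
    """claim ownership if the model reproduces enough trigger targets."""
    hits = sum(model(t) == y for t, y in pairs.items())
    return hits / len(pairs) >= threshold

# a "fine-tuned" copy that kept the backdoor behavior on 3 of 4 triggers
pairs = make_triggers("my-owner-secret")
memorized = dict(list(pairs.items())[:3])
model = lambda x: memorized.get(x, "normal output")

print(verify_ownership(model, pairs))  # True: 3/4 triggers still match
```

the threshold matters because fine-tuning can erode some (but rarely all) of the backdoor behavior; the ZK part, omitted here, lets the owner prove the triggers match without revealing them.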
gm @plazafinance
privacy at literally transformer speed: why cascade matters

ok so here’s the thing… smpc was supposed to make llm prompts private. but the second you actually try to run it on real models? it’s basically unusable yk

ok so, wtf is smpc? secure multi-party computation = split your prompt across multiple parties so no single one sees it all. sounds good, but in practice:

> 1000x slower than standard inference
~2 minutes per token for llama-7b
breaks completely on bigger models

so yeah… mathematical guarantee, zero practicality. ggs

so how does cascade change this? ritual built cascade around one key insight: transformer computation is mostly per-token ops. that means you can shard tokens directly and keep things private without the smpc slowdown.

cascade uses:
compnodes → handle the pre-pass, splitting query/key/value per token
attnnodes → handle the attention pass across sharded keys/values
post-pass → merge partial results + mlp, still sharded

to be continued 👇
gg!