@onchainlu
https://x.com/tbpn/status/1927818545733456030?s=46
pretty crazy that a partner at a top vc is saying this
it may be easier to train sota models in co-located clusters, but there's a physical ceiling here: you literally can't fit infinite gpus into one spot or deliver enough power to a single location.
demand for compute is growing, and will keep growing, much faster than supply -- DCNs (distributed compute networks) are the way around this bottleneck.