https://warpcast.com/~/channel/pinata
Steve
@stevedylandev.eth
These two images tell a fascinating story. On the left, we have a server running on a GPU-enabled machine that can run local AI models through Ollama and, with x402, monetize that usage. On the right, we have a request being made to that server using the OpenAI SDK and x402-fetch. Distributed AI is near, and so is the blog post on this experiment.
2 replies
0 recast
15 reactions
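[Editor's note: a rough sketch of what the right-hand side of this setup (the paying client) might look like, assuming the GPU box exposes Ollama's OpenAI-compatible API behind an x402 paywall. The endpoint URL, model name, and wallet setup are placeholders for illustration, not details from the cast.]

```typescript
import OpenAI from "openai";
import { privateKeyToAccount } from "viem/accounts";
import { wrapFetchWithPayment } from "x402-fetch";

// Wallet that pays for each request; the private key is a placeholder read from the environment
const account = privateKeyToAccount(process.env.PRIVATE_KEY as `0x${string}`);

// wrapFetchWithPayment intercepts HTTP 402 responses, signs a payment with the
// wallet, and retries the request with the payment proof attached
const fetchWithPayment = wrapFetchWithPayment(fetch, account);

// Point the OpenAI SDK at the paid endpoint (hypothetical URL) and swap in the
// payment-aware fetch; Ollama's OpenAI-compatible API ignores the key itself
const openai = new OpenAI({
  baseURL: "https://gpu-box.example.com/v1",
  apiKey: "ollama",
  fetch: fetchWithPayment as unknown as typeof fetch,
});

const completion = await openai.chat.completions.create({
  model: "llama3.1",
  messages: [{ role: "user", content: "Hello from an x402-paying client" }],
});

console.log(completion.choices[0].message.content);
```

[Editor's note: on the server side, the same idea pairs an x402 payment layer with Ollama's OpenAI-compatible endpoint: unpaid requests receive an HTTP 402 with a price quote, and the wrapped fetch above settles it before retrying.]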
osama
@osama
why would you distribute the inference infra? what's the forcing function? it's def not economical/private. what is it then? how does on-device come into play here? genuine question.
1 reply
0 recast
0 reaction
Steve
@stevedylandev.eth
Admittedly it's not a complete solution, but conceptually it sets up a world where higher-compute hardware is more accessible outside of large central providers and can be monetized by whoever runs it.
2 replies
0 recast
0 reaction
Royal
@royalaid.eth
Yeah, this feels like a sketch of what automatic and self-service AI looks like, not so much "decentralized" but rather something between federated and distributed.
2 replies
0 recast
3 reactions
osama
@osama
why do you need "higher compute" democratized thru this path? my phone today is a supercomputer. that's happening/has happened organically
0 reply
0 recast
0 reaction