Channel: https://warpcast.com/~/channel/pinata

Steve (@stevedylandev.eth)
These two images tell a fascinating story. On the left, we have a server running on a GPU-enabled machine that can run local AI models through Ollama and, with x402, monetize that usage. On the right, we have a request being made to that server using the OpenAI SDK and x402-fetch. Distributed AI is near, and so is the blog post on this experiment.
2 replies · 0 recasts · 15 reactions
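For readers unfamiliar with how the payment-gated request in Steve's setup works: an x402 server answers unpaid requests with HTTP 402 Payment Required, and a wrapper like x402-fetch pays and retries transparently. Below is a self-contained sketch of that retry loop; the function names, the `X-PAYMENT` header handling, and the signing scheme are simplified mocks for illustration, not the real x402 wire format or the x402-fetch API.

```typescript
// Hypothetical sketch of the HTTP 402 "pay, then retry" flow that
// x402-fetch automates. All names and the payment scheme are mocks.

type Fetch = (
  url: string,
  init?: { headers?: Record<string, string> }
) => Promise<{ status: number; body: string }>;

// Wrap a fetch-like function: on HTTP 402, sign the server's payment
// challenge, attach the proof as a header, and retry the request once.
function withPayment(
  fetchFn: Fetch,
  signPayment: (challenge: string) => string
): Fetch {
  return async (url, init = {}) => {
    const first = await fetchFn(url, init);
    if (first.status !== 402) return first; // already paid or free
    const proof = signPayment(first.body); // body carries the challenge
    return fetchFn(url, {
      ...init,
      headers: { ...(init.headers ?? {}), "X-PAYMENT": proof },
    });
  };
}

// Mock "GPU host": demands payment before serving an inference result.
const mockServer: Fetch = async (_url, init = {}) => {
  if (!init.headers?.["X-PAYMENT"]) {
    return { status: 402, body: "challenge-123" };
  }
  return { status: 200, body: "model output" };
};

const paidFetch = withPayment(mockServer, (c) => `signed:${c}`);
paidFetch("https://gpu-host.example/v1/chat/completions").then((res) => {
  console.log(res.status, res.body); // 200 "model output"
});
```

In the real setup described above, the x402-fetch wrapper would stand in for the OpenAI SDK's HTTP transport, so chat-completion calls against the Ollama host settle payment automatically on each request.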

osama (@osama)
Why would you distribute the inference infra? What's the forcing function? It's definitely not economical or private. What is it then? How does on-device come into play here? Genuine question.
1 reply · 0 recasts · 0 reactions

Steve (@stevedylandev.eth)
Admittedly it’s not a complete solution, but conceptually it sets up a world where high-compute hardware is more accessible outside of large central providers, and where that hardware can be monetized.
2 replies · 0 recasts · 0 reactions

Royal (@royalaid.eth)
Yeah, this feels like a sketch of what automatic, self-service AI looks like: not so much "decentralized" as something between federated and distributed.
2 replies · 0 recasts · 3 reactions

Kyle Tut (@kyletut)
I feel like we are stumbling around in the dark trying to figure out what room we are in. We've stubbed our toe on a chair so far but don't know what else is around us. If AI is going to be as dynamic as people think, access to computing definitely needs to have fewer boundaries, but we don't have good examples of that today. In 10 years, we will be able to point at Steve's example as a primitive example of everyday computing, but we are probably missing some key components right now.
2 replies · 0 recasts · 3 reactions

Steve (@stevedylandev.eth)
Yup totally
0 replies · 0 recasts · 1 reaction