https://warpcast.com/~/channel/pinata

Steve pfp
Steve
@stevedylandev.eth
These two images tell a fascinating story. On the left, we have a server running on a GPU-enabled machine that can run local AI models through Ollama and, with x402, monetize that usage. On the right, we have a request being made to that server using the OpenAI SDK and x402-fetch. Distributed AI is near, and so is the blog post on this experiment.
2 replies
0 recast
14 reactions
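The flow Steve describes relies on the x402 pattern: the server answers an unpaid request with HTTP 402 and its payment requirements, the client signs a payment and retries, and only then does inference run. Below is a minimal sketch of that retry loop with the network and wallet layers stubbed out; `sign_payment`, `stub_server`, and the header name here are illustrative stand-ins, not the real x402-fetch API (which handles this automatically in TypeScript).

```python
def sign_payment(requirements):
    # Hypothetical stand-in for a wallet signing the payment
    # the server asked for (price, asset, receiver).
    return f"signed:{requirements}"

def fetch_with_payment(fetch, url):
    """On HTTP 402, sign the server's payment requirements and retry once."""
    status, body = fetch(url, headers={})
    if status == 402:
        # The 402 body carries the payment requirements.
        payment = sign_payment(body)
        status, body = fetch(url, headers={"X-PAYMENT": payment})
    return status, body

def stub_server(url, headers):
    # Stub of the GPU server: demand payment first,
    # serve the inference result once a payment header is present.
    if "X-PAYMENT" not in headers:
        return 402, "price=0.01 USDC"
    return 200, "model output"

status, body = fetch_with_payment(
    stub_server, "http://localhost:11434/v1/chat/completions"
)
# status is 200 and body is the paid inference result
```

On the client side, this wrapper is what lets an unmodified OpenAI SDK talk to a paid endpoint: the SDK just sees a fetch function that happens to settle the 402 behind the scenes.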

osama pfp
osama
@osama
why would you distribute the inference infra? what's the forcing function? def is not economical/private. what is it then? how does on-device come into play here? genuine question.
1 reply
0 recast
0 reaction

Leeward Bound pfp
Leeward Bound
@leewardbound
this is suuper cool
0 reply
0 recast
1 reaction