geoist pfp
geoist
@geoist
Imagine your app makes decisions based on LLM outputs. How do you prove to users that you didn't fake or tweak it?
3 replies
0 recast
3 reactions

Jay Brower (jaymothy.eth) pfp
Jay Brower (jaymothy.eth)
@jayb
https://vibescore.eigencloud.xyz/ uses EigenLayer to secure AI inference with the weight of Ethereum-grade security. In this app, you submit a tweet and an LLM "judges" it; $50k in prizes is being given out. How do you know the operator didn't lie? EigenVerify re-runs the inference and makes sure the operator didn't misspeak. Cool example of this - we're cooking more atm.
1 reply
0 recast
1 reaction
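
A minimal sketch of the re-run-and-compare idea described above, assuming inference can be made deterministic (pinned weights, temperature 0, fixed seed) so an honest re-execution reproduces the operator's output exactly. `InferenceClaim`, `commit`, and `run_inference` are illustrative names, not EigenVerify's actual API.

```python
# Sketch only: NOT the EigenVerify API. Assumes deterministic inference so a
# verifier's re-run of the same model on the same prompt matches exactly.
import hashlib
import json
from dataclasses import dataclass


@dataclass
class InferenceClaim:
    model_id: str      # e.g. a content hash of the pinned weights
    prompt: str        # the submitted tweet
    output: str        # the operator's claimed "judgment"
    commitment: str    # hash the operator published, binding all of the above


def commit(model_id: str, prompt: str, output: str) -> str:
    """Hash the operator publishes alongside its result."""
    payload = json.dumps(
        {"model": model_id, "prompt": prompt, "output": output},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()


def verify(claim: InferenceClaim, run_inference) -> bool:
    """Re-run the same model on the same prompt and compare.

    `run_inference(model_id, prompt)` is a stand-in for whatever
    deterministic inference runtime the verifier trusts.
    """
    # 1. The published commitment must actually bind the claimed values.
    if claim.commitment != commit(claim.model_id, claim.prompt, claim.output):
        return False
    # 2. An honest re-execution must reproduce the operator's output.
    return run_inference(claim.model_id, claim.prompt) == claim.output
```

The point of the commitment is that it binds the model, the prompt, and the output together, so the operator can't quietly swap in a different answer after the fact.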

Devin Conley pfp
Devin Conley
@dcon.eth
are the weights open source? could build some kind of optimistic consensus to let users verify results and dispute if needed. or zkML proofs, but that's expensive
1 reply
0 recast
1 reaction
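
A hypothetical sketch of the optimistic flow suggested above: a posted result is accepted by default after a challenge window, and anyone who re-runs the open-weights model and gets a different answer can dispute it. The class names and the 24-hour window are made up for illustration; a real system would escalate a dispute to arbitration (e.g. quorum re-execution or a zkML proof).

```python
# Illustrative optimistic-verification flow; not any specific protocol.
import time
from dataclasses import dataclass
from typing import Optional

CHALLENGE_WINDOW_SECS = 24 * 60 * 60  # illustrative 24h challenge window


@dataclass
class PostedResult:
    prompt: str
    claimed_output: str
    posted_at: float
    disputed: bool = False
    challenger_output: Optional[str] = None


class OptimisticResults:
    def __init__(self) -> None:
        self.results: dict[str, PostedResult] = {}

    def post(self, result_id: str, prompt: str, output: str) -> None:
        """Operator posts a result; it becomes final unless challenged."""
        self.results[result_id] = PostedResult(prompt, output, time.time())

    def dispute(self, result_id: str, challenger_output: str) -> None:
        """Challenger submits the output from their own re-run."""
        r = self.results[result_id]
        if time.time() - r.posted_at > CHALLENGE_WINDOW_SECS:
            raise ValueError("challenge window closed")
        if challenger_output != r.claimed_output:
            # A real system would escalate here (quorum re-run, zkML proof).
            r.disputed = True
            r.challenger_output = challenger_output

    def is_final(self, result_id: str) -> bool:
        """Undisputed and past the window means the result stands."""
        r = self.results[result_id]
        return (not r.disputed
                and time.time() - r.posted_at > CHALLENGE_WINDOW_SECS)
```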

Jay Brower (jaymothy.eth) pfp
Jay Brower (jaymothy.eth)
@jayb
are you building something like this?
0 reply
0 recast
0 reaction