@
https://opensea.io/collection/dev-21
0 reply
0 recast
2 reactions
Nastya
@nastya
Been diving into EigenLayer AVS lately, sharing my super general and practical understanding of what it's for:
* You have some task – a piece of a program, a request, an onchain action, etc. It produces an output.
* You need to prove to a skeptical 2nd party that the output is correct and truly came from that task.
* Here comes a network of machines (the AVS) that can re-execute the task, confirm the result, and get paid for it.
* The 2nd party can cryptographically verify that the machines really did this job.
* But what if some of these machines were malicious?
* That's where staking comes in – machines put their money down to join the network, earn rewards for honest work, but risk penalties for bad behavior.
EigenLayer essentially creates a marketplace for computational trust. But another practical question comes to mind: what are the actual things you'd want to verify with this?
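The flow above can be sketched as a toy simulation (hypothetical names and numbers, not EigenLayer's actual API): operators put up stake, independently re-execute a task, and are rewarded or slashed depending on whether their submitted result matches the majority.

```python
# Toy sketch of the AVS flow described above. All names, reward and
# slash amounts are illustrative assumptions, not real EigenLayer code.
from collections import Counter
from dataclasses import dataclass


@dataclass
class Operator:
    name: str
    stake: float  # funds at risk; slashed for bad behavior


def task(x: int) -> int:
    """The computation whose output the 2nd party wants verified."""
    return x * x


def run_avs(operators: list[Operator], x: int,
            reward: float = 1.0, slash: float = 5.0) -> int:
    # Each operator re-executes the task and submits a result.
    results = {op.name: task(x) for op in operators}
    # Simulate one dishonest operator submitting a bogus result.
    results["mallory"] = -1

    # The skeptical party accepts the majority answer.
    majority, _ = Counter(results.values()).most_common(1)[0]

    # Honest operators earn rewards; dissenters lose stake.
    for op in operators:
        if results[op.name] == majority:
            op.stake += reward
        else:
            op.stake -= slash
    return majority


ops = [Operator("alice", 10), Operator("bob", 10), Operator("mallory", 10)]
print(run_avs(ops, 7))  # majority result: 49; mallory's stake drops to 5
```

In the real system the "matches the majority" check is replaced by onchain verification and explicit slashing conditions, but the economic shape is the same: honesty is profitable, cheating burns stake.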
5 replies
5 recasts
55 reactions
Ayush Garg
@axg
you can access/verify/resolve almost anything; take the quote cast as an example https://farcaster.xyz/axg/0xddddf233
2 replies
0 recast
2 reactions
Nastya
@nastya
Yeah, you can verify a lot of things, but seems like not everything is worth verifying, especially with the added cost. I’m also wondering if there are use cases that currently don't exist or are limited because of a lack of trust/verifiability
1 reply
0 recast
3 reactions
Nastya
@nastya
Just looked at the quoted cast – I don't quite understand how a policy living as an AVS helps prevent an agent from being tricked. If that policy were simply baked into the agent's code, wouldn't it serve the same purpose as a safeguard, even without the AVS part?
1 reply
0 recast
0 reaction