Tarun Chitra
@pinged
“ZK will be more useful in AI than crypto” — something I didn’t really believe until recently. But it will be used in ways people didn’t predict (not zkML or identity)
11 replies
5 recasts
69 reactions
Derya Karli
@karli
I assume you’ve already seen the “Watermarks in the Sand” paper, which shows the impossibility of robust watermarking for generative models and LLMs, especially for fake news and misinformation. I wonder what role ZK plays here, since attackers can even defeat private detection algos.
1 reply
0 recast
1 reaction
Tarun Chitra
@pinged
Yeah, so that paper shows you can't make watermarks irremovable by PPT adversaries; on the other hand, the recent coding-theory watermarks have ZK-like guarantees conditional on the entropy of the output; this is like an "instance-based" backdoor around the Watermarks in the Sand impossibility of Barak et al.
1 reply
0 recast
1 reaction
Tarun Chitra
@pinged
The interesting thing about watermarks (and sort of why they are a "weak" version of ZK and/or iO) is that there's no "PCP theorem" for them, in the sense that you have to make extra assumptions for them to work (e.g. a min-entropy bound on the output, a cross-entropy / correlation lower bound) that restrict the instances they work on
1 reply
0 recast
1 reaction
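The min-entropy caveat above can be sketched with a toy keyed "green-list" watermark detector that abstains on low-entropy text. This is a minimal illustration only: the partition scheme, entropy proxy, and thresholds are all invented here, not taken from the thread or from any specific paper's construction.

```python
import hashlib
import math
from collections import Counter

def is_green(key: str, prev: str, tok: str) -> bool:
    # Keyed pseudorandom bipartition of the vocabulary, seeded on the
    # previous token: roughly half of all tokens are "green" per context.
    h = hashlib.sha256(f"{key}|{prev}|{tok}".encode()).digest()
    return h[0] < 128

def empirical_entropy(tokens) -> float:
    # Shannon entropy (bits/token) of the empirical unigram distribution --
    # a crude stand-in for the min-entropy condition discussed above.
    counts = Counter(tokens)
    n = len(tokens)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def detect(tokens, key: str, min_entropy: float = 2.0, z_thresh: float = 2.0):
    # Abstain when the entropy assumption fails: no guarantee holds there.
    if empirical_entropy(tokens) < min_entropy:
        return None
    # Otherwise run a z-test on the green fraction vs. the 1/2 null.
    greens = sum(is_green(key, a, b) for a, b in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    z = (greens - 0.5 * n) / math.sqrt(0.25 * n)
    return z > z_thresh
```

The detector only ever answers on inputs that satisfy its entropy assumption, which mirrors the point that these guarantees are instance-restricted rather than universal.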
Derya Karli
@karli
Very interesting! That sounds similar to the terminology Scott Aaronson uses for his watermarking scheme, cryptographic indistinguishability (a sprinkle of randomness), though I haven't seen his scheme applied to any model yet. Shafi Goldwasser's ML+AI talk (Simons Institute) is also super interesting: she discussed adding backdoors by playing with model weights, where the randomness goes undetected. Perhaps both agree randomness is the key for AI safety. Anyway, happy to hear more about what the recent coding theory suggests with the ZK-like approach.
1 reply
0 recast
1 reaction