Dan Romero
@dwr.eth
How do you prove that an account and/or wallet with a proof of human credential isn't getting its content / thinking / actions from an AI? How do you prove that an account labeled as an agent is not run by a human?
44 replies
18 recasts
128 reactions

phil
@phil
You can't, but having a human verification does give an upper bound to the # of accounts, whereas bots don't have one.
2 replies
2 recasts
31 reactions

jj 🛟
@jj
Scan an eyeball 👁️
1 reply
0 recast
2 reactions

Callum Wanderloots ✨
@wanderloots.eth
0 reply
0 recast
3 reactions

will
@w
well you see we just fuse these wires into their brain and then..
0 reply
0 recast
2 reactions

Zach
@zd
do either of these questions matter if the content is good?
2 replies
0 recast
2 reactions

Deana
@deana
Idk but I’d probably find the former a lot less annoying if I knew for sure that a human was involved
0 reply
0 recast
1 reaction

Pichi
@pichi
I have definitely seen accounts here that started as real genuine humans. Authentic, kind, etc. then they handed the account to an AI and I muted it. I’ve also seen people add AI to boost their replies during the Moxie era so their top level casts are human, but all their replies are automated slop. I have no idea how you would solve for either use case.
1 reply
0 recast
10 reactions

Dean Pierce 👨‍💻🌎🌍
@deanpierce.eth
It's about identity binding. A person can authorize a bot to act on their behalf, but if a human authorizes a thousand bots to act on their behalf, it will be obvious. Sybil attacks are trivially detected, so sybil resistance is achieved, which is a bigger deal than most people realize.
0 reply
0 recast
2 reactions

Yakuza
@iamtherealyakuza.eth
Great question! But maybe we should first ask if it’s even important to prove that it’s not AI. If an agent or account is contributing value, does it really matter if it’s human or AI? Focusing too much on proving humanity might distract us from how this tech can work together with us, innit?
0 reply
0 recast
2 reactions

Tayyab - d/acc
@tayyab
Of course the answer is to require a captcha before every cast, Dan.
1 reply
0 recast
1 reaction

Kenji
@kenjiquest
At this stage, we can't tell whether a 'human' account is getting its content/thinking from an AI. It's not provable over the net, where to a degree everyone is acting under a guise or undoxxed status. Bad AI can be picked up on, but with each passing day its quality and naturalness are improving, so it'll be harder to spot as time moves on. The real test for knowing if someone is using AI content is actually knowing the human themselves. If you for instance know someone pretty well (their writing style, quirks etc), you can kind of pick up when AI has kicked in if they are going full dive leaning into AI content. The "You didn't write this, did you?" moment is there. But how many of us know each other that well on the internet? Probably not all that well... so when someone pulls AI content/thinking from a separate device or an app unattached to the output, it really is too difficult to detect at this point, other than through our own intuitions, which aren't concrete.
0 reply
0 recast
1 reaction

Koolkheart
@koolkheart.eth
I’ve actually had this experience and it’s disheartening. I cast and reply with real intention, not just farming engagement or copy-pasting. Still, I somehow got spam labeled, despite being verified on other platforms. I get the need to fight bots, but the net seems cast too wide sometimes. Would love more transparency around how these labels are applied and reviewed
1 reply
0 recast
1 reaction

Blake Burgess
@trinitek
You need proof-of-meat that is intrinsic to the content, not a captcha or puzzle challenge that can be validated separately. What's something that bots can't do, like send physical mail? If I copied a GPT response onto a sheet of paper and dropped it in the mail, does that count as AI? Or for proof-of-machine I'm thinking of a product like Yahoo Answers, but you only have n seconds to submit an answer, and the question is not known in advance, so a human can't pre-draft and doesn't have enough time to copy/paste into an LLM. But the Q/A itself is the content, not a challenge for something else.
0 reply
0 recast
0 reaction

jonathan
@jonathanmccallum
The question of our time (:
0 reply
0 recast
0 reaction

depatchedmode
@depatchedmode
Exactly. Only thing that matters in most cases is relevance. Occasionally you need proof of humanity for things like voting, and it doesn’t attempt to solve for anything other than “only people who have this unique secret can do this thing this many times”.
0 reply
0 recast
0 reaction

Ben
@benersing
1. Community notes for originality instead of fact checking. 2. Not sure yet.
0 reply
0 recast
0 reaction

Name
@2h
It’s possible to some extent, but hardly worth the effort, it’s better to shift the focus to intentions, consequences, and value.
0 reply
0 recast
0 reaction

Coin
@coinconvictions.eth
Worldcoin is the solution
0 reply
0 recast
0 reaction

Nate Maddrey
@nmadd
Require that everyone can only create content from within a faraday cage, and use Morse code to communicate with a carrier pigeon that delivers your casts to Farcaster
0 reply
0 recast
0 reaction