Varun Srinivasan
@v
What makes an account "inauthentic"? We're considering a new labelling system for inauthentic accounts. These accounts, often incorrectly called bots, will be hidden from like counts and other metrics in Warpcast. Inauthentic accounts give real people the ick because they're clearly trying to get value for their account at the expense of everyone else on the network.
42 replies
24 recasts
191 reactions
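As a rough illustration of the idea in the post, here is a minimal TypeScript sketch of how a label could hide an account's reactions from displayed counts without deleting anything at the protocol level. The types, field names, and the label lookup are assumptions for illustration only, not any real Warpcast or Farcaster API.

```typescript
// Hypothetical types: "fid" is a Farcaster user ID; the label set is an assumption.
type AccountLabel = "authentic" | "inauthentic";

interface Reaction {
  fid: number;      // account that reacted
  castHash: string; // cast being reacted to
}

// labels: an assumed lookup from fid to its current label, e.g. produced by a
// server-side classification job; not part of any published Warpcast API.
function visibleReactionCount(
  reactions: Reaction[],
  labels: Map<number, AccountLabel>,
): number {
  // Reactions still exist on the network; they are only excluded from the
  // displayed count when the reacting account carries an inauthentic label.
  return reactions.filter((r) => labels.get(r.fid) !== "inauthentic").length;
}
```

The point of keeping this as a display-time filter is that a mistaken label can be reversed without any data loss.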

Name
@2h
It’s a very complex task, and perhaps it’s worth approaching the problem from a different angle. What is the core issue? Users who are not engaging in harmful activity may receive “spam” labels. But why is this bad for them? Their comments get hidden, they don’t receive tokens, they don’t count toward metrics, and so on. But is it possible to ensure that they don’t face these consequences? I think we need to define which accounts are not causing any obvious harm and simply not apply these restrictions to them. Here are a few ideas. We could implement a system with 4 levels, named as follows:
1. Content Creator/Influencer
2. User
3. Suspicious User (low activity, no verifications)
4. Spammer (clearly harmful)
2 replies
0 recast
0 reaction
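To make the four-level idea above concrete, here is a hedged TypeScript sketch of how tiers could gate restrictions so that only the bottom levels carry consequences. The tier names come from the comment; the restriction fields and the treatment of the "Suspicious" tier are assumptions, not a proposed Warpcast policy.

```typescript
// Tier names taken from the comment above; everything else is an assumption.
enum TrustTier {
  Creator = "creator",       // 1. Content Creator / Influencer
  User = "user",             // 2. Regular user
  Suspicious = "suspicious", // 3. Low activity, no verifications
  Spammer = "spammer",       // 4. Clearly harmful
}

interface Restrictions {
  hideReplies: boolean;
  excludeFromMetrics: boolean;
  excludeFromAirdrops: boolean;
}

// Only the bottom tiers carry consequences; tiers 1-2 keep full rights.
function restrictionsFor(tier: TrustTier): Restrictions {
  switch (tier) {
    case TrustTier.Spammer:
      return { hideReplies: true, excludeFromMetrics: true, excludeFromAirdrops: true };
    case TrustTier.Suspicious:
      // Assumed middle ground: excluded from metrics but not silenced.
      return { hideReplies: false, excludeFromMetrics: true, excludeFromAirdrops: false };
    default:
      return { hideReplies: false, excludeFromMetrics: false, excludeFromAirdrops: false };
  }
}
```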

Name
@2h
For comments, we could implement content sorting options: showing popular, early, or most recent posts first. This would help bring some order to the chaos.

Regarding “spam” labels, they should only be applied in the most obvious and extreme cases, such as when a user is sharing malicious links or engaging in blatant spam distribution. This can be easily identified.

The ultimate goal is to provide equal rights to as many users as possible: those who are just beginning to interact with the platform, those who are in read-only mode, those who haven’t decided on their content yet, or those who simply don’t know how to post well. This way, they won’t be isolated and will continue to participate on equal terms with others, receive airdrops, be heard, and so on.
0 reply
0 recast
0 reaction
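The sorting options mentioned at the start of this reply are straightforward to sketch. The shape of a reply and the field names below are assumptions for illustration; they do not reflect any actual Warpcast data model.

```typescript
// Hypothetical reply shape; reactionCount is assumed to already exclude labelled accounts.
interface Cast {
  hash: string;
  timestamp: number;     // unix ms
  reactionCount: number;
}

type SortMode = "popular" | "early" | "recent";

// Return a new array ordered by the chosen mode, leaving the input untouched.
function sortReplies(replies: Cast[], mode: SortMode): Cast[] {
  const sorted = [...replies];
  switch (mode) {
    case "popular":
      sorted.sort((a, b) => b.reactionCount - a.reactionCount);
      break;
    case "early":
      sorted.sort((a, b) => a.timestamp - b.timestamp);
      break;
    case "recent":
      sorted.sort((a, b) => b.timestamp - a.timestamp);
      break;
  }
  return sorted;
}
```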