An idea for identifying protocol abuse

After a long time away from last.fm, I remembered that they track the music you listen to (what they call "scrobbles") and compare it against other users, so when you visit a profile they show you a compatibility level with that user based on your tastes.

When it comes to music it is very rare to find 95% compatibility with another user; it can happen, but very few people listen to the same bands as me.

But when money is involved, as with the YUP protocol, people will like anything as long as they get rewards. So I propose integrating this kind of system into the protocol: identify when a group of users votes the same way every day, and based on how closely their votes coincide, give fewer rewards to users with high compatibility and apply a visible penalty to their influence score in the extension. If I keep voting on the same things as users I have a high vote match with, and my influence is 99, I would like to see it go down so I know I am doing something wrong and stop doing it. (A rough sketch of the overlap check is below.)
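To make the idea concrete, here is a minimal sketch of what that overlap check could look like, using Jaccard similarity over each user's set of voted items. The threshold value and the user names are illustrative assumptions, not anything defined by the YUP protocol:

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    """Overlap between two users' vote sets (0 = disjoint, 1 = identical)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# votes[user] = set of item ids the user voted on during the window
votes = {
    "alice": {"post1", "post2", "post3"},
    "bob":   {"post1", "post2", "post3"},  # identical to alice -> suspicious
    "carol": {"post4", "post7"},
}

THRESHOLD = 0.9  # hypothetical cutoff for "high compatibility"

for u, v in combinations(votes, 2):
    sim = jaccard(votes[u], votes[v])
    if sim >= THRESHOLD:
        print(f"{u} and {v} overlap {sim:.0%} -> flag for reward reduction")
```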

These are some examples of how last.fm indicates compatibility with other users. Besides detecting groups organized to abuse the protocol, this system could also be displayed publicly so users can find other users with similar tastes.

To be clear: if a group of users votes on the same things day after day, spends all of its voting power on the same items, and has a near 100% match, they should receive fewer rewards than users with very low compatibility.

The protocol could use a window of one week, two weeks, or a few months to compute compatibility, and then regulate or lower the rewards of those whose votes show high compatibility. This would push users to vote for what they actually like, with variety, rather than only what is popular. (One possible reward curve is sketched below.)
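One way to turn the windowed compatibility number into a reward reduction is a simple piecewise-linear multiplier. The grace level and floor below are made-up parameters for illustration, not protocol values:

```python
def reward_multiplier(max_similarity: float,
                      grace: float = 0.5,
                      floor: float = 0.1) -> float:
    """Scale a user's rewards down as their highest pairwise vote
    similarity over the window rises above a grace level.

    Below `grace` there is no penalty; above it, rewards fall
    linearly toward `floor` at 100% similarity.
    """
    if max_similarity <= grace:
        return 1.0
    slope = (1.0 - floor) / (1.0 - grace)
    return max(floor, 1.0 - slope * (max_similarity - grace))

# e.g. someone at 55% overlap keeps ~91% of rewards,
# while a near-perfect 98% overlap drops them to ~14%
for s in (0.3, 0.55, 0.8, 0.98):
    print(f"similarity {s:.0%} -> x{reward_multiplier(s):.2f} rewards")
```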

Because, once you take the money factor away, it is very rare for a person to have exactly the same tastes as someone else.

[Screenshots: last.fm profile pages showing compatibility levels with other users]


Thanks for taking the time to write this proposal. Here are some random thoughts:

A mechanism similar to last.fm's can't determine much: someone could measure the tolerance level at which your system counts a group as a match, and then act in a way that makes them indistinguishable from a non-abuser. (A toy illustration is below.)
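Here is a toy illustration of that evasion, under the assumption that the detector flags pairs above a known similarity cutoff: a five-member ring rotates which targets each member skips, so every pair stays well below a 90% threshold while every target still gets multiple ring votes.

```python
from itertools import combinations

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

targets = [f"post{i}" for i in range(10)]
ring = {}
for m in range(5):  # five colluders
    skipped = {targets[(m + k) % 10] for k in range(3)}
    ring[f"bot{m}"] = set(targets) - skipped  # each votes 7 of the 10 targets

worst = max(jaccard(ring[u], ring[v]) for u, v in combinations(ring, 2))
coverage = min(sum(t in v for v in ring.values()) for t in targets)
print(f"highest pairwise overlap: {worst:.0%}")    # 75%, under a 90% cutoff
print(f"every target still gets >= {coverage} ring votes")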

The reality is that the last.fm mechanism rests on the assumption that the data is honest. Once you include the possibility that it is not, the system falls apart and the results will be totally fake; this principle is called GIGO (garbage in, garbage out).

Worse, you could start generating a lot of false positives. Even if you play with different weights, at some point you arrive at a place where a normal user looks more like an abuser than the real abuser does.

If we go a step further and look at large social media companies like Instagram, they have tried many things to prevent collusion, and their conclusion today is that the most effective prevention is super-aggressive KYC (e.g., a selfie with a random string plus an ID card), forcing you to use mobile, and limiting your activity.

Thousands of colluding groups exist on Instagram, bot fraud runs into the billions, and researchers estimate that on some of the biggest platforms the most critical cases exceed 50% automated activity. That is why I think a simple mechanism won't get you very far.

What I see as a beneficial first step is to implement BrightID. I checked the docs and the simple integration doesn't look too difficult; it could have some marketing value too, since BrightID will list your app on their platform. The main disadvantage is the cost it might incur: you can list your app for free without providing sponsorships, but you'll be listed at the bottom, and you can't expect many users to get verified.
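For concreteness, a gating check against a BrightID node could look roughly like the sketch below. The node URL, context name, and response shape are assumptions based on my recollection of the v5 node API, not anything YUP defines; check BrightID's current docs before relying on them.

```python
import requests

BRIGHTID_NODE = "https://app.brightid.org/node/v5"
APP_CONTEXT = "yup"  # hypothetical context name the app would register

def is_verified(context_id: str) -> bool:
    """Ask a BrightID node whether this app-specific id belongs to a
    verified-unique human. Endpoint path and response fields are
    assumed from the v5 node API -- verify against the current docs."""
    resp = requests.get(
        f"{BRIGHTID_NODE}/verifications/{APP_CONTEXT}/{context_id}")
    if resp.status_code != 200:
        return False  # not linked to the app, or not yet verified
    return resp.json().get("data", {}).get("unique", False)

# e.g. gate reward payouts on verification:
# if not is_verified(user.brightid_context_id):
#     payout = 0  # or queue the account for manual review
```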

I agree, and it is my belief too, that the monetary incentives aren't properly aligned with quality voting; from what I can see, they are more aligned with manufacturing popularity.
A user interested only in rewards could rapidly write down a bullet-point rational strategy for maximizing them.

Another issue is that some of these "abusers" are abusers only in the sense that we label people who fully align with the monetary incentives as abusers. The reality is that without realigning the incentives, the issue will keep resurfacing in perpetuity.

The protocol needs people to vote after one another and create patterns, since that is part of how the score is determined. The problem is that rewards incentivize users to vote in a certain way, creating data they may not themselves believe is accurate.

In an ideal world, the protocol should reflect the popularity of something as closely as possible, but without a way to remove the incentive to vote for specific things in order to maximize rewards, that can't really be achieved.

Platforms similar to YUP have existed; StumbleUpon, for example, which was sold and pretty much ruined and is now called Mix. These platforms, and every major platform that tries to measure content sentiment, can be relatively successful because they don't have strong monetary incentives; their much weaker monetary incentive is just content boosting.
