A possible fix to YouTube's #adpocalypse

I might be in too deep, but I've been following the Vox #adpocalypse recently and have wondered why YouTube doesn't have a sliding scale for video reviews. I'll explain what I mean by that, give my own two cents, and suggest a possible way to make the entire review process more transparent, or at least give creators a better sense of trust in the platform and in how it goes about taking down content.

Sample Sizes: A Mathematical Tale

Sample sizes follow a mathematical formula that I won't drag you through (you can read more here), but it gives you a specific level of confidence based on a population size. So let's say I want opinions or feedback from a population of 300 million people with a 99% confidence level and a 2% margin of error: the formula tells me that I need 4,161 responses to my survey.
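To make that concrete, here's a minimal sketch of that calculation in Python, using Cochran's formula with a finite-population correction; the z-score, margin, and population are just the numbers from the example above:

```python
import math

def sample_size(population, confidence_z=2.58, margin=0.02, p=0.5):
    """Cochran's sample size formula with a finite-population correction.

    confidence_z: z-score for the desired confidence level
                  (2.58 ~ 99%, 1.96 ~ 95%).
    margin:       acceptable margin of error (0.02 = 2%).
    p:            assumed population proportion; 0.5 is the most
                  conservative choice (it yields the largest sample).
    """
    # Sample size for an effectively infinite population
    n0 = (confidence_z ** 2) * p * (1 - p) / margin ** 2
    # Adjust for the actual, finite population
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

print(sample_size(300_000_000))  # -> 4161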

What does that number tell you? That if you were to implement a crowdsourced review process, you would need roughly 4,000 people to weigh in to get a statistically solid answer on whether a video constitutes harassment or violates any specific policy.

Here is a suggestion

YouTube could implement a random, transparent, ever-changing creator-based review process in which creators and/or viewers are randomly selected (once they've opted in) to review videos. A user would only qualify as a reviewer if they meet specific criteria that are transparent and consistent platform-wide, for example: they have never engaged with the specific channel they'll be reviewing. YouTube should also balance the crowd sample so that it includes people inside and outside the industry in question, and from across the political spectrum, so that it's diverse and represents a trustworthy cross-section of viewers and of society; a sketch of what that eligibility check might look like follows below.
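Here's a minimal sketch of that eligibility check; the User fields and the engagement check are hypothetical, since YouTube's actual data model obviously isn't public:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    opted_in: bool                 # explicitly agreed to be a reviewer
    engaged_channels: set = field(default_factory=set)  # channels watched, subscribed to, or commented on
    industry: str = "other"        # rough industry bucket, used for balancing
    leaning: str = "unknown"       # rough political bucket, used for balancing

def eligible(user, channel_id):
    """A user qualifies only if they opted in and have never
    engaged with the channel under review."""
    return user.opted_in and channel_id not in user.engaged_channels
```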

The sample size could be based on the channel's size or its number of viewers, or, if you prefer, on the U.S. population or some other large population figure: 500 million? 600 million?

Users would be selected at random and could opt out. If they choose to respond, they would fill out an objective form (even though the analysis itself is subjective): the algorithm could suggest four or five possible violations plus a "no violation" option, and each violation would carry a weighted point value: harassment = 10, offensive language = 7, and so on. YouTube could even make these data points public, along with the weight of each violation.
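As a sketch, the weighted scoring could be as simple as the following; harassment and offensive language use the point values floated above, while the other labels and weights are purely illustrative:

```python
# Hypothetical public weight table (points per violation type)
VIOLATION_WEIGHTS = {
    "harassment": 10,
    "offensive_language": 7,
    "misinformation": 5,      # illustrative label and weight
    "spam": 3,                # illustrative label and weight
    "no_violation": 0,
}

def score_response(selected_violations):
    """Score one reviewer's form: the sum of the weights of
    every violation they flagged."""
    return sum(VIOLATION_WEIGHTS[v] for v in selected_violations)

score_response(["harassment", "offensive_language"])  # -> 17
```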

In the end we would have a sliding-scale rank of how severely a video has violated the terms: say, 45% of reviewers state that the video violated a policy, and the responses add up to a rank of 3,000 points.
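Aggregating the individual forms into that sliding scale is then straightforward. A sketch, assuming the score_response function from the previous snippet and one form per reviewer:

```python
def aggregate(responses):
    """responses: a list of violation-label lists, one per reviewer.
    Returns (percent of reviewers flagging any violation, total points)."""
    flagged = sum(1 for r in responses if score_response(r) > 0)
    total_points = sum(score_response(r) for r in responses)
    percent = 100 * flagged / len(responses)
    # e.g. 45% of reviewers flag something, points total 3,000
    return percent, total_points
```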

YouTube could then make a clearer, analytical decision. If a video's violation rank is under 30%, nothing happens; if it's between 55% and 70%, the content is removed; if a channel gets three analyses above 70%, it's demonetized for 30 days; and so on. But everything would be transparent and clear-cut. Make the scale public, give a reason for each enforcement action based on the percentage, and then open up creator forums/debates/surveys where the community can discuss why the 30% tier should trigger this and the 45% tier should trigger that.
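In code, that enforcement ladder might look like the sketch below; the cut-offs and actions are just the placeholder values from the paragraph above, which is the whole point: they would be public and open to debate:

```python
def enforcement_action(percent, strikes_over_70=0):
    """Map a video's violation percentage to an enforcement action.
    The thresholds mirror the placeholder values above."""
    if percent < 30:
        return "no action"
    if 55 <= percent <= 70:
        return "content removed"
    if percent > 70 and strikes_over_70 >= 3:
        return "demonetized for 30 days"
    return "flagged for further review"   # gap ranges, e.g. 30-55%
```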

Obviously there will be fringe users saying that YouTube should be a 0% platform, but life and society are not 0%, let's face it. If YouTube wants to be a reflection of our world, it has to start somewhere, and if you're an ever-evolving platform, then propose something clear: start at 40% and say your goal is to reach 15%, or use your massive revenue to research real-world scenarios, reflect those numbers online, and set a goal of actions that we (the entire community) will take to bring those numbers down…

The same goes the other way: people will say the world isn't fair and the bar should be 90%. Well, if YouTube takes a transparent, research-based approach, I personally believe the majority of people would oppose such a high "baseline harassment rank" policy.

The only tension point in this entire approach I'm suggesting would be selecting the reviewers, but a good random-selection algorithm, privacy protections, and maybe independent websites where reviewers can confirm whether or not they reviewed a specific video could mitigate any mistrust. Create a community standard or certification process for anyone who wants to start a reviewer-validation website and let them have at it. We should have review-validation websites from both sides of the aisle.
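Random selection itself is the easy part. Here's a sketch that stratifies the eligible pool (reusing the hypothetical eligible() check and User fields from earlier) so the sample stays balanced across industry and political buckets:

```python
import random
from collections import defaultdict

def select_reviewers(users, channel_id, n):
    """Randomly pick up to n eligible reviewers, balanced across
    (industry, leaning) buckets so no single group dominates."""
    buckets = defaultdict(list)
    for u in users:
        if eligible(u, channel_id):
            buckets[(u.industry, u.leaning)].append(u)
    per_bucket = max(1, n // max(1, len(buckets)))
    selected = []
    for group in buckets.values():
        selected.extend(random.sample(group, min(per_bucket, len(group))))
    return selected[:n]
```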

I'm not a broadcast-ratings specialist, but this is a version of, or at least similar to, how traditional news-outlet ratings work; we would simply apply it to reported videos/channels.

This is just a suggestion from someone who loves technology and appreciates all of the infrastructure and the incredible community of creators, across so many different industries, that YouTube has made possible. But whatever YouTube is doing right now with content review and harassment policies is NOT helping anyone: it isn't making anything clearer, and it isn't solving the deeper issues or even creating healthy conversations between opposing worldviews.

So here's hoping that maybe YouTube will listen, think about sample sizes, crowdsourced reviews, and sliding scales, and stop making creators walk on eggshells because everyone is uncertain and afraid of what the next YouTube blog update is going to bring…

