Why content moderation is so hard, Meta's gobbling up European startups, and how Capcom's making video games better for everyone....

An astrophysicist, a brain surgeon, and a content moderator walk into a bar...

The bartender says “Welcome to Joe's Place, we have a nightly special where whoever has the toughest job gets to drink for free.”

Before anyone can say a word, the brain surgeon bellies up to the bar and proclaims themselves the victor. “I've got the toughest job, that's for sure. I cut open people's brains in hopes that I can save them from maladies such as catastrophic injury and cancer. If I mess up even a tiny bit, someone could die.”

The bartender nods and gets ready to pour the surgeon a shot when the astrophysicist clears their throat and interrupts. “Actually, I have the toughest job. I have to solve problems that are literally astronomically difficult. If the surgeon messes up, one person dies. If I get the math wrong, our whole planet could be in danger.”

The bartender nods and gets ready to pour the astrophysicist a shot when the content moderator clears their throat and interrupts. “Ha! You call those jobs tough? I'm building the artificial intelligence that Elon Musk's going to use to moderate Twitter.”

The entire bar goes silent. Nobody moves for a long moment until, finally, the bartender, the brain surgeon, and the astrophysicist each walk over to the content moderator, pour them a shot, and offer their heartfelt condolences.

I can think of no tougher task than moderating social media content. And the only way to make it even harder is to use AI. The reason is simple: AI doesn't think. And moderating content isn't a challenge that can be broken down into simple, discrete tasks for an algorithm to complete.

It's a mess of humanity that's constantly evolving. It's a moving target.

If you moderate content based on banning certain keywords (racial slurs, swear words, etc), then the people you're allegedly trying to protect from hate speech get disproportionately punished for hate speech. And keyword bans are silly and ineffective at best (@$$, a$$, @ss, etc.), as the toy filter below shows.
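To make that concrete, here's a minimal sketch of a keyword filter. The blocklist and examples are hypothetical, not any platform's real system:

```python
import re

# Hypothetical toy blocklist -- real ones are much larger, and no less brittle.
BANNED = {"ass"}

def keyword_filter(text: str) -> bool:
    """Flag a post if any banned word appears as a whole token."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(token in BANNED for token in tokens)

print(keyword_filter("what an ass"))  # True  -- caught
print(keyword_filter("what an a$$"))  # False -- one swapped character evades it
print(keyword_filter("what an @ss"))  # False -- same trick, different symbol
```

Match whole words and a single $ slips through; match substrings instead and you start flagging "bass" and "class" (the classic Scunthorpe problem). The filter loses either way.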

If you moderate content based on intent, you're alleging that you've built an AI that can see into a human's heart and know their truth. Lol. Somebody call Blake Lemoine.

If you moderate content based on sentiment (which is the current trend), then you're practicing a form of social terraforming wherein the biases and views of the people creating the system are codified as the only morally sound point of view.

If you only moderate content that's illegal, then you tacitly endorse content that's explicitly harmful but not criminal. Good luck getting advertisers or keeping users who aren't specifically looking for a “gray web” experience bordering on unsafe at all times.

The simple fact of the matter is that content moderation is hard. No human or group of humans is capable of moderating social media content and getting it right all the time. And AI is only capable of getting it wrong at a much greater scale.

It might seem like adding AI to the mix would make things easier by lifting some of the cognitive burden from humans. But using AI to moderate content is like trying to put out a grease fire with water.

A more complicated way of looking at it: applying an error-amplifying system to an inexact problem with constantly shifting parameters only ever makes things worse.

Barring the inception of an artificial general intelligence (a machine with human-level cognition), I'd say the right solution is to invest in human moderation teams armed with AI-powered flagging tools. Final moderation decisions should be left to humans, but AI could help them be more productive.

That way, there'd always be a human responsible for banning speech, blocking content, and punishing users. 
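Here's a rough sketch of that division of labor. The scoring function is a crude stand-in for a real classifier, and every name and threshold here is hypothetical:

```python
REVIEW_THRESHOLD = 0.7  # hypothetical; tuned against reviewer capacity

def toxicity_score(text: str) -> float:
    """Stand-in for a real trained classifier. Here: a crude keyword
    heuristic, purely for illustration."""
    hits = sum(word in text.lower() for word in ("hate", "idiot", "trash"))
    return min(1.0, hits / 2)

def triage(posts: list[str]) -> list[str]:
    """The AI only flags: posts scoring above the threshold are queued for
    a human reviewer. The model never bans, blocks, or punishes on its own."""
    return [post for post in posts if toxicity_score(post) >= REVIEW_THRESHOLD]

queue = triage(["lovely weather today", "you hateful idiot"])
print(queue)  # only the second post reaches a human, who makes the final call
```

The AI narrows millions of posts down to a reviewable queue; a person, with context and accountability, pulls the trigger.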

In Case You Missed It

Cheating reality with science

Sometimes you spend hours trying to come up with a headline that's catchy enough to grab people's attention. Other times, you just write what happened and the headline sells itself. 

That was the case earlier this year when I wrote an article titled: "Scientists used quantum pseudotelepathy to cheat reality."

Here's a snippet:

To demonstrate quantum advantage in the real world, the team used a long-established experiment called the Mermin-Peres game where two players conspire to measure photons.

The players conduct independent measurements of the photons and record their results on a 3×3 grid. Then a judge comes along and picks a spot on the grid. If the players both have the same measurement, they win.

It’s a lot more complicated than that (here’s a really good explainer by Science’s Adrian Cho), but the gist is that the rules make it so there’s a mathematical limit to the accuracy any two people could achieve using classical methods.

In the Nanjing experiment, the researchers demonstrated that independent observers measuring quantum entangled matter states could surpass the classical accuracy limit.

This, seemingly, is because the measurements are actually causing the outcomes and not the other way around.

In other words: if base reality existed when it wasn’t being measured, we couldn’t exceed the classical accuracy threshold. But, because the measurements obviously affect the outcome, we can use quantum physics to imitate telepathy. Player one’s measurement is sent directly to player two, who confirms it with theirs.
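That classical ceiling is easy to check by brute force. In the standard magic-square formulation of the game (a slightly different framing than the simplified description above, so treat this as an illustrative assumption), Alice fills her assigned row with three ±1 values whose product is +1, Bob fills his assigned column with product −1, and they win if they agree on the shared cell. Enumerating every deterministic classical strategy:

```python
import itertools

# Legal fillings: three +/-1 values with a fixed product (+1 for Alice's
# rows, -1 for Bob's columns). There are exactly 4 of each.
def fillings(parity):
    return [t for t in itertools.product([1, -1], repeat=3)
            if t[0] * t[1] * t[2] == parity]

alice_rows = fillings(+1)
bob_cols = fillings(-1)

best = 0
# A deterministic classical strategy pre-commits one filling per row/column.
for a in itertools.product(alice_rows, repeat=3):    # 64 Alice strategies
    for b in itertools.product(bob_cols, repeat=3):  # 64 Bob strategies
        # Count the referee choices (row, col) where the shared cell agrees.
        wins = sum(a[r][c] == b[c][r] for r in range(3) for c in range(3))
        best = max(best, wins)

print(f"best classical strategy: {best}/9")  # prints 8/9
```

No classical strategy clears eight out of nine, because Alice's parity constraints multiply to +1 across the grid while Bob's multiply to −1. Entangled players, per the Nanjing result, beat that bound.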

In case you missed it, you can read the full article here on Neural. 

AI, but for good

When it comes to accessibility, giving people options just makes good sense. Especially in the world of video games — an industry worth more than the music and film industries combined. 

A feature that does nothing for the average player could make all the difference in the world for someone who, for whatever reason, can't play the standard way.

That's why this week we're tipping our hats to Capcom for its inclusion of a new AI-powered Dynamic Control system in Street Fighter 6. 

According to a report from Game Informer, the company's instituting a new control system that allows "button mashers" (players who rapidly press buttons at random, as opposed to skillfully pressing the "correct" combinations) to enjoy the game as if they were playing at a much higher skill level.

To accomplish this, an AI system interprets the button presses based on the situation and outputs an applicable, useful move. Without AI, those players would just be fumbling around making mistakes; with it, their on-screen gameplay looks like high-level play.
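Capcom hasn't detailed exactly how the system works, so here's a purely conceptual sketch (all move names and situations are made up) of what "interpret the mash, output a sensible move" could look like:

```python
import random

# Entirely hypothetical: instead of demanding exact input combos, map any
# button press to a move that makes sense for the current game state.
SENSIBLE_MOVES = {
    "opponent_far": ["fireball", "dash_in"],
    "opponent_near": ["throw", "light_combo"],
    "opponent_airborne": ["anti_air"],
}

def dynamic_control(situation: str, mashed_button: str) -> str:
    """Ignore which button got mashed; pick a move that fits the moment."""
    return random.choice(SENSIBLE_MOVES[situation])

# A masher hammering random buttons still produces situationally useful play.
print(dynamic_control("opponent_airborne", "X"))  # -> 'anti_air'
```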

Capcom reps say this was done to make the game more fun for button mashers. And, as this mode can't be used to disrupt online play (it's local only), there's no downside to it for competitive players who don't use Dynamic Controls. 

Everybody deserves to have fun playing video games, even if that means they don't play the game the same way as you. 

You can read more here on Game Informer. 

Our favorite video of the week

I still can't believe that you can just go to YouTube and watch amazing videos for almost nothing. What I would have given to watch astrophysics lectures, in neat little videos, from Max Planck alumni when I was trying to learn physics on my own 20 years ago!
 
Anyway, you can check out this awesome lecture by clicking on the image.
 
[Video thumbnail: Max Planck lecture on habitability]

Have a great weekend

Thanks for reading!

Don't forget to follow us on Twitter (while you still can) @thenextweb, @neural, and @mrgreene1977


Any good?

How was today’s newsletter? Amazing? Awful?! Help us make it better by sharing your brutally honest emoji feedback 👇


Feedback

Email tristan@thenextweb.com with any suggestions, complaints, or compliments. If you don't want to talk to him, email his boss: editor@thenextweb.com.


From Amsterdam with <3