AI is already making inroads. For example, it’s now helping the human reviewers at The New York Times ferret out the trolls. Contrary to the pessimistic view that robots will wipe us from the planet, this is a great example of how AI can ease our everyday lives. Instead of replacing people entirely, we’ll see robots working alongside them, augmenting human efforts.
Thanks to an AI system called Moderator, The Times has been able to open up comments on a greater number of its articles. Moderator relies on machine learning developed by Jigsaw, a subsidiary of Google’s holding company, Alphabet. Before Moderator was rolled out, readers could comment on only 10 percent of Times articles, because the paper’s 14 human moderators had to evaluate 12,000 comments a day. Now, with Moderator helping to weed out the vitriol, the goal is to boost commenting to 25 percent of all stories, and eventually to 80 percent.
The key part of The Times’ effort is the “alongside the humans” aspect. What tends to grab headlines is the idea that AI does things at the expense of humans. But in actuality, AI is most effective as an augmentation of what humans are doing, not a complete replacement for them. I’ve worked on systems like this at Facebook, likely similar to what The Times is using. It’s just not the case that you hand things over to the “robot” and humans are blotted from the equation.
AI handles the easy calls, humans the trickier ones
When AI augments human work, the goal is to assist the people you already have. Moderating comments on news articles, for instance, is a very human task: people are good at evaluating the nuances of content and judging whether it’s appropriate. Meanwhile, AI, machine learning in this case, is becoming sophisticated enough to filter out the comments that are clearly beyond the pale.
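To make the division of labor concrete, here is a minimal sketch of that kind of triage. It is not The Times’ actual Moderator code; `toxicity_score` stands in for whatever trained model scores a comment, and the two thresholds are purely illustrative.

```python
# Hypothetical triage sketch: the model handles the clear calls,
# people handle the ambiguous middle.
def triage_comment(comment, toxicity_score, reject_above=0.95, approve_below=0.20):
    """Route a comment based on a 0-to-1 toxicity score."""
    score = toxicity_score(comment)
    if score >= reject_above:
        return "auto_reject"      # clearly abusive: no human needs to see it
    if score <= approve_below:
        return "auto_approve"     # clearly benign: publish straight away
    return "human_review"         # uncertain: queue it for a moderator
```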
This allows human moderators to focus on reviewing the content that’s trickier to call. By weeding out what is clearly troll behavior, the AI gives people more time for the genuinely challenging cases. People often think of AI as a binary proposition: you insert a computer into the process and the person disappears (and then the computers go rogue and attack us all).
But this isn’t how things work. Computers and humans have had a symbiotic relationship for a long time now (consider your intimate relationship with your smartphone, for example). In some cases, yes, AI will mean fewer people are needed for a given job. But in general it’s much less of a direct replacement than people recognize.
People imagine that computers are really smart, and that AI is like a bunch of really smart computers, so it can solve all these ridiculously hard problems that humans can’t. That’s just not true. AI is good at mostly the same things computers have been good at for as long as they’ve been around — doing the same “mundane” task over and over again, precisely, and at incredibly high speed.
In contrast, humans are exceptionally good at handling ambiguity and complexity; they are creative. They can spin an entire world out of nothing and turn it into a novel. They can come up with an idea for a company on a paper napkin.
In a billion videos, AI can find the cats
Granted, as we get better at building AI, what counts as “mundane” changes. It used to be that recognizing images in video was something computers couldn’t do. Now they can. If you’ve got a billion videos and want to know which ones have cats in them, computers can do that. We’ve advanced the technology to the point where it can perform more sophisticated and nuanced tasks, and do so at massive scale.
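As a rough illustration of what that kind of search involves, here is a hypothetical sketch. The `decode` and `cat_score` callables are stand-ins for a real frame decoder and a pretrained image classifier; nothing here is any particular company’s pipeline.

```python
# Hypothetical sketch: flag videos that contain a cat by scoring
# sampled frames with an image classifier that returns a 0-to-1 score.
def video_has_cat(frames, cat_score, frame_step=30, threshold=0.9):
    """Return True if any sampled frame looks like a cat."""
    for i, frame in enumerate(frames):
        if i % frame_step == 0 and cat_score(frame) >= threshold:
            return True
    return False

def find_cat_videos(video_ids, decode, cat_score):
    """decode(video_id) yields that video's frames; scan them all."""
    return [v for v in video_ids if video_has_cat(decode(v), cat_score)]
```

The per-video check is trivially repetitive, which is exactly why it suits a computer: at a billion videos you simply shard this loop across many machines.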
At the same time, even as technology has advanced, we tend to underestimate just how complicated the things we do in everyday life really are. For example, was my coworker’s seemingly snarky comment this morning just a bit of friendly ribbing or something more sinister? This is an ordinary yet distinctly human experience. We don’t think twice about how effortlessly we interpret tone or inflection in conversation, but it’s a monumental task for AI.
Many of us also don’t appreciate the massive scale at which these AI systems operate. News stories criticizing Facebook for censoring content are often written as though Mark Zuckerberg personally reads every piece of content posted on Facebook. There’s just no way you can have humans review it all. Why? Because every 60 seconds, 510,000 comments are posted on Facebook.
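A quick back-of-envelope, using only the figures quoted in this piece, shows what that volume means for human review.

```python
# Back-of-envelope math from the figures above:
# 510,000 Facebook comments every 60 seconds, and The Times' 14
# moderators handling about 12,000 comments a day between them.
comments_per_day = 510_000 * 60 * 24        # roughly 734 million
per_moderator_per_day = 12_000 / 14         # roughly 857 comments each
moderators_needed = comments_per_day / per_moderator_per_day
print(f"{moderators_needed:,.0f} full-time moderators")  # about 857,000
```

At The Times’ review rate, Facebook’s comment stream alone would need a workforce of hundreds of thousands of moderators, which is the whole point of handing the clear-cut calls to software.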
AI already augments human work. In fact, any organization publicly claiming great results from AI is almost certainly using AI heavily guided by human beings. For example, when Facebook serves you advertisements, the entire process is driven by machine learning. The algorithm decides which ads to show you based on how much an advertiser is willing to pay, the amount of ad inventory available and many other factors. But behind these algorithms is a very large group of humans constantly tuning them to refine what works and improve what doesn’t.
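A toy version of that kind of ranking, assuming a common expected-value approach rather than Facebook’s actual system, might look like this; the quality adjustment is the sort of knob those humans keep retuning.

```python
# Toy ad ranking: expected value = bid x predicted click probability,
# scaled by a human-tuned quality adjustment. Illustrative only.
from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    bid: float            # what the advertiser will pay per click
    predicted_ctr: float  # the model's estimated click probability
    quality: float        # human-tuned adjustment (relevance, policy, etc.)

def rank_ads(ads):
    """Order candidate ads by expected value, best first."""
    return sorted(ads, key=lambda a: a.bid * a.predicted_ctr * a.quality, reverse=True)

ads = [Ad("shoes", bid=2.00, predicted_ctr=0.01, quality=1.0),
       Ad("travel", bid=0.50, predicted_ctr=0.05, quality=1.2)]
print([a.name for a in rank_ads(ads)])  # ['travel', 'shoes']: the lower bid wins
```

The model produces the scores, but people decide what goes into the score and keep adjusting it when the results look wrong; that is the human tuning described above.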
What The New York Times is doing now with the Moderator system is state of the art for how large web properties deal with abuse. And comment quality will improve significantly once we can reliably relegate comment trolls to the web’s trash can. After all, it’s not computers or AI deciding the right way for humans to behave. It’s humans building a very powerful and scalable tool that augments what they’re already doing.