This article is part of the On Tech newsletter. You can sign up here to receive it on weekdays.
Why do we get caught up in heated arguments with our neighbors on Facebook or in political YouTube videos? That's the question my colleague Cade Metz wants us, and the companies behind our favorite apps, to ask ourselves.
Cade's most recent article is about Caolan Robertson, a filmmaker who for more than two years helped create far-right videos on YouTube that he says were intentionally provocative and confrontational, and often deceptively edited.
Cade's reporting is an opportunity to ask ourselves some tough questions: Do the rewards of attention on the internet encourage people to post the most incendiary material? How much should we trust what we see online? And are we drawn to ideas that stoke our anger?
Shira: How much blame does YouTube deserve when people like Mr. Robertson create videos that emphasize social conflict and division, and that are in some cases manipulated?
Cade: It's complicated. In many cases, these videos became popular because they affirmed some people's prejudices against immigrants or Muslims.
But Caolan and the YouTube personalities he worked with also learned to play up or manufacture conflict. They found that those kinds of videos got attention on YouTube and other sites. And YouTube's automated recommendations also sent a lot of people to those videos, which encouraged Caolan to make more of the same.
One of Facebook's executives recently wrote that, in essence, his company is mostly not to blame for pushing people toward provocative and polarizing material; that's just what people want. What do you think?
There are all sorts of things that amplify our inclination toward sensational or outrageous material, including radio, cable TV and social media. But it's irresponsible for anyone to say that's just the way people are. We all have a role to play in not indulging the worst of human nature, and that includes the companies behind the apps and websites where we spend our time.
I've thought about this a lot in my reporting on artificial intelligence technology. People try to draw a line between what humans do and what computers do, as if they were completely separate. They're not. People decide what computers do, and people use computers in ways that change what they themselves do. That's one reason I wanted to write about Caolan. He's taking us behind the curtain to see the forces, both human nature and technological design, that influence what we do and how we think.
What should we do about this?
I think the most important thing is to think critically about what we're actually watching and doing online. What worries me is emerging technologies, including deepfakes, that could create fake or misleading media on a much larger scale than people like Caolan ever could. It will become even harder to know what is real and what is fake.
Isn't it also dangerous if we learn not to trust anything we see?
That's right. Some people in the tech industry believe that the real risk of deepfakes is that people will learn to disbelieve everything, even what is real.
How does Mr. Robertson feel about having created YouTube videos that he now believes polarized and deceived people?
To some extent, he regrets what he did, or at least wants to distance himself from it. But fundamentally, he's now using the tactics he once deployed to create far-right videos to make far-left videos instead. He's doing the same thing on one political side that he used to do on the other.