
Free Speech / Hate Speech: Can Facebook and Google Be Moderated?

In the internet's infancy, anyone with access could have a voice. Just two decades later, a handful of tech companies determine and enforce our online norms, governing what people can say with policies that seek to balance free expression and civic responsibility against corporate profit. Criticism of these guidelines, which are largely algorithmic and are conceived and implemented behind closed doors in Silicon Valley, continues to grow, pitting the digital gatekeepers against the democratic social practices that have long defined our civic discourse.

Inevitably, online vitriol has spilled over into real life. Politics, media, and personal interactions have all been tainted by poisonous, often hateful bickering: the residue of inept systems that rely on outsourced moderation and inhuman algorithms to govern platforms that many argue have become public services.

To discuss the convoluted state of our online discourse and the politics of technology platforms, we invited Adrian Chen, a staff writer at The New Yorker covering internet culture, to moderate a conversation, “FREE SPEECH / HATE SPEECH: Can Facebook and Google Be Moderated?”

Chen was joined by danah boyd, founder and president of Data & Society, Principal Researcher at Microsoft Research, and Visiting Professor at ITP at New York University; Kate Klonick, a PhD student at Yale Law School and Resident Fellow at the school’s Information Society Project; and Gregor Hochmuth, one of Instagram’s first engineers and, most recently, a co-founder of New York’s Dreams television network.

Stream this fascinating conversation and robust Q&A session in full, or skim through the highlights below.

12:02 – Adrian Chen: “I think that, in general, where we’re at now with this huge debate around the gatekeeping role of platforms is the realization, on a large scale, that these platforms aren’t going to go away, that they’re not just robots that can perfectly arbitrate all of these issues, and that it’s a political struggle, a debate that needs to be happening.”

13:59 – danah boyd: “It was 1995, 1996 when AOL got into a lot of trouble because of anorexia and content related to anorexia being on these sites, and they were told, ‘You can’t have content related to anorexia.’ Parents were upset; it was a conversation that was already being regulated, one of the conversations that were emerging at the time. And so they went, okay, fine. They knew it was text-based, right? This is AOL.

So they just did a scan and basically removed any reference to anorexia. Now, it took about 24 hours for those who were engaged in conversations about anorexia to switch to talking about Ana. Ana is a very popular name; you can’t ban the name Ana. So you’d have these new coded conversations where people would talk about how they had a really good day with their friend Ana, i.e., they had succeeded in starving themselves for the day, right? Then it became this interesting challenge of whack-a-mole; now we’re looking at these pro-Ana conversations, et cetera. All of that emerged in textual conversations, and it was always this game of whack-a-mole, with people finding ways around it.”
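The scan boyd describes is the simplest form of automated moderation: a literal keyword match against the text. A minimal sketch of that approach, written here in Python as a hypothetical illustration rather than anything AOL actually ran, shows exactly why coded language defeats it:

```python
# Toy keyword filter of the kind described above: block any message
# that mentions a banned term. Hypothetical sketch, not AOL's system.

BANNED_TERMS = {"anorexia", "anorexic"}

def is_blocked(message: str) -> bool:
    """Return True if the message contains any banned term."""
    words = message.lower().split()
    return any(term in words for term in BANNED_TERMS)

# The literal term is caught by the scan...
print(is_blocked("tips for hiding anorexia"))        # True
# ...but the coded substitute slips straight through.
print(is_blocked("had a really good day with ana"))  # False
```

The filter can only see the surface of the text, so as soon as the community agrees on a new codeword, the moderators are a step behind again. Hence the whack-a-mole.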

16:59 – danah boyd: “How do you articulate those boundaries? You’re talking about people making those decisions, people inside organizations. It’s really hard, and you’re trying to do it based on what you imagine or what you see as content, realizing that everything you do, someone will find a way to route around it.”

29:44 – Gregor Hochmuth: “This is just my personal opinion: Facebook is sort of ashamed that it can’t do it with AI, and therefore just outsources it, puts it far away, and treats it as if it didn’t exist, so no one that works there really has to deal with it. It’s not treated as a priority in that way. It’s just a thing you can’t do with computers yet, but eventually you would like to.”

31:09 – Adrian Chen: “I think that the general conventional wisdom among people of all political stripes is that these companies have become too powerful, that they are now the kind of unaccountable gatekeepers of public discourse. Depending on what you see as the issue, they are either too censorious, biased toward one group or the other in terms of not letting their content on for whatever reason, or they are too permissive and don’t take accountability for the content on their platform.”

50:47 – Kate Klonick: “The other thing to remember is that we use all of these services for free. We sign up for Facebook and they do this for free… but we pay with our eyeballs. We look at different things and they get ad views out of it. They’re incentivized to keep you looking at things and to keep you engaging; they want you to engage. I think that Gregor is completely right that the way you vote is to disengage, to go to a different platform. I think that… this is how these platforms die.”
