
Indirect Message

Episode 4: Dangerous Speech

This transcription relies on AI technology and may contain errors.
Please feel free to submit mistakes for correction!

Hello citizens of the internet. Welcome back to Indirect Message, a podcast about the cultural impact of the internet. I'm Laci Green. Thanks for being here. I hope everyone's easing into fall all right. It's my favorite time of year right now, that slow dip into chilly weather. I can once again indulge in warm drinks while wearing fuzzy socks. May or may not be doing both right now. A quick note: I've been getting a little bit of feedback from you guys asking for slightly longer interview portions, so I'm running a little poll on my Twitter through the weekend about your preference. If you wouldn't mind voting, you can find that poll at twitter.com/gogreen18. Throwback fact: gogreen18 was what I went by on YouTube before I started using my name. Anyway. Uh, today we'll go a little deeper on the interview portion, see how it goes, which works out really well because my guest today gave me so much to think about and had a lot of interesting things to say. Alright guys, let's dive in. Today: the battle for free speech online. Friendly heads up, this episode contains vague discussion of disturbing content online.

 

Chapter one: Your video has been removed.

Last December I received a distressing email from YouTube. Subject line: Your video has been removed.

Hi LaciGreen. As you may know, our community guidelines describe which content we allow and don't allow on YouTube. Your video "thinspiration" was flagged for review. Upon review, we've determined that it violates our guidelines. We removed it from YouTube and assigned a community guideline strike.

The video in question was about "pro-ana" blogs, short for pro-anorexia: blogs that encourage young people to adopt eating disorders to lose weight. They usually congregate on Tumblr or Reddit. My video, from several years ago, attempted to speak to this phenomenon -- to offer a lifeline of alternative support to those who might be struggling. Thinking there had been some mistake, I appealed YouTube's decision to ban my video. Two days later, the verdict was in: YouTube rejected my appeal, and the video has been offline since. In recent years, I've had several videos taken down on my YouTube channel, like an old video about pregnancy prevention. Other videos have been age restricted. For instance, a video of mine demonstrating how to use a condom properly is only viewable to users 18 and up. Eventually YouTube started disabling thumbnails for my videos in search engines. Instead of seeing the cover art for the video, people see a random, often blurry, frame. But the biggest hit of all came just a couple of weeks ago when YouTube terminated my entire account, a channel that I've spent years uploading hundreds of sex ed videos to and building a community of over 1 million people. In a moment, it was all gone. Their reasoning: YouTube terminated my channel for impersonating Laci Green. Yes. Really. I appealed the decision again. "Hi, I'm the real Laci Green. Here's my ID and my face and a timestamp." A few hours later: appeal denied. But I wasn't about to go down without a fight. Unable to access my YouTube account, I sounded the alarm on Twitter.

"Well, apparently a real life human at YouTube reviewed my case and upheld the ban. I'm so confused, I'm starting to wonder if maybe I did impersonate myself."

After countless frantic texts and emails, I finally got in touch with a friend at Google who directed me to someone who could help. The next day, my channel was restored. I'm so grateful they were able to help me, but I have to wonder: what if I didn't happen to have a friend who works at Google?

 

Free speech, or more accurately content moderation, has been at the top of a lot of people's minds on YouTube since a major implosion in 2017 dubbed "The Adpocalypse". At the time, mainstream media websites, like the Wall Street Journal, ran exposé articles showing advertisements appearing next to controversial, hateful, and sometimes even terrorist videos. This prompted an advertiser boycott, and close to $1 billion in ads were pulled off of YouTube. As you might imagine, this affected a number of YouTubers -- with some people effectively losing their job or taking a major pay cut overnight.

"In the matter of 48 hours, as a result of many videos of mine being demonetized, uh, I'm down 90% of AdSense revenue and about 20% of views it's looking like."

But for Google, which owns YouTube, the boycotts barely touched their bottom line. In order to woo advertisers back, YouTube tightened the rules about what types of videos were eligible for ads. Controversial content, which I've since learned includes sex education, was the first on the chopping block. Since then, those who create content on YouTube are sitting with the reminder that their livelihood --and their ability to express themselves-- is not a guarantee. Everyone on the web is always hanging on by a string. Every day, a new batch of YouTubers has a video taken down or their channel struck for reasons that are unclear. Fierce conversations rage on about censorship and advertising. Who gets a voice online and who doesn't? Who gets paid for their work and who works for free? When a new incident of censorship happens, usually the user tries to get in touch with YouTube first, but in nearly every situation, we're met with radio silence. I suspect this is because the robot on the other end of the decision... can't actually talk. I'm referring, of course, to the algorithm.

Chapter Two: The Biggest Influencer

 

Every time you look something up online, or even just scroll through your feed, algorithms determine what you see and how often you see it. Because of that, algorithms are pretty powerful. They affect what we buy, what we talk about, whose voices we hear, and naturally... what we believe about the world. Forget about PewDiePie, Ninja, or the Kardashians. The biggest influencers of them all are the algorithms. I should emphasize that these algorithms are necessary. There is simply too much information online. It has to be organized somehow. Not only that, but if there's disturbing content, for instance, nobody should be blindsided by a beheading on the YouTube homepage... which actually happened a couple of years ago, when an ISIS video trended on YouTube for three full days before the company addressed it. This is all very unsettling. Part of the problem is that big tech companies rarely disclose the details about how their algorithms really work. There's no transparency. It's all very mysterious, a strictly guarded secret. And yet these algorithms have real world consequences for all of us. At a media conference in 2016, Angela Merkel, the chancellor of Germany, expressed her alarm:

 

"Algorithms, when they're not transparent, can lead to a distortion of our perception." They can shrink our expanse of information. This is a development that we need to pay careful attention to. The big internet platforms, through their algorithms, have become an eye of a needle which diverse media must pass through to access other people."

Another point of tension is that the formulas used by these algorithms tend to catch activist content that uses the same keywords -- like my eating disorders video. Or the current lawsuit against YouTube by a group of LGBT YouTubers alleging algorithmic discrimination. It seems possible that in Google's attempts to curb anti-LGBT hate speech, they wound up censoring LGBT content as a whole. Or the case of Ford Fischer, a YouTuber who documents street activism. His footage is being used all over: in feature films like BlacKkKlansman, documentaries, and national news. Four months ago, his entire channel was demonetized.

 

"My whole philosophy is that by using raw footage and live stream to just show events exactly as they happened, with very minimal or absolutely no commentary, you can really allow viewers to decide for themselves. And so we've documented all sorts of popular movements. So four months ago when my entire channel was penalized when my entire channel was demonetized, they actually deleted outright two specific news videos. A recent study was done, which showed that there are about a thousand terms that YouTube will pretty much automatically count towards demonetization and included in that is homosexual and gay, heterosexual and straight a are not penalized by YouTube. But there is a difference between content that covers hate, uh, and content that is in itself hateful."

The question of how to deal with these problems is a complicated one. Transparency is a good start, but some of the other popular solutions currently at play create their own problems. Like having humans filter the content instead. Requiring a human to moderate what's acceptable online means exposing them to horrific videos and pictures all day, every day. Rape, murder, violent porn, terrorism. You get the idea. Many of the workers hired to do this work are traumatized and suffer PTSD. Roz Bowden, one of MySpace's first moderators, wrote:

"When I left my space, I didn't shake anyone's hands for three years. I'd seen what people do and how disgusting they are."

Big tech companies usually pay pennies for this work abroad, and even in the U.S. the picture is grim. One report found that the average content moderator makes about $28,000 a year, while the average Facebook employee takes home about $240,000 -- nearly 10 times the amount. But even if human moderation were theoretically a humane solution that was fairly paid, it doesn't solve the problem of content that falls into gray areas, and I think gray areas are the trickiest of all. Most people will agree terrorism shouldn't be popping up on YouTube. But what about hate speech? What counts as hate speech? What type of content, if any, is so offensive that it deserves to be wiped from the face of the internet entirely?

 

Chapter three: the great gray area

Let's be honest with ourselves here. Dating apps kind of suck. They take a lot of energy. They get us hooked with addictive swiping that goes nowhere and they've normalized a lot of bad behavior like ghosting. But it doesn't have to be like this and there is a better way to do dating apps.

That's where Sweet Pea comes in, the official partner of Indirect Message. Sweet Pea is a swipe-free app that helps us focus on what really matters: warm, meaningful connections with each other. Check Sweet Pea out for yourself in the app store. Every download really helps me out here on Indirect Message, and I hope it helps you guys find something, or someone, valuable as well.

Here with me today to wade into the great gray abyss is Jillian York. Jillian is the director for international freedom of expression at the Electronic Frontier Foundation, or the EFF. The EFF is a leading global organization that works really hard to protect our digital rights, and I really respect them as an organization. Jillian also runs onlinecensorship.org, a project that helps people who have had their content unfairly removed or their social media account suspended. Jillian agreed to chat with me even while under the weather and traveling abroad, and I'm truly grateful for it. I hope you enjoy our conversation. Like every discussion on Indirect Message: the discussion of ideas should not be taken as an endorsement of them.

Jillian:
Yeah. Around the world. Governments utilize a variety of different techniques to block websites, block keyword searches. Um, and so it ranges from, you know, a few governments in Europe which might block like a handful of, of websites that contain illegal hate speech to, you know, the examples of like China or Saudi Arabia, which blocks so much content. I would say about 10 years ago, um, I could easily say that like more than a third of the world was experiencing a censored internet. And I would say it's much more now.

Laci:
Is the censorship that's happening in those countries... can we mostly boil it down to government control of the populace, or is it about specific topics like sexual content?

Jillian:
It usually comes down to a few topics. Um, it's religion a lot of times. So anything that's dissenting from the majority. Um, you've got criticism of the government, that's always a huge one. Um, and then of course things around morality, sexuality, um, that sort of thing. So you've got countries that block porn, but then you've also got countries that block sexual health information, which I know is an important topic to you, and a whole range of other things.


Laci:
That actually ties in with another question that I wanted to ask you about. Um, you know, what is and isn't allowed to be posted on Facebook or YouTube or Twitter. I think there's a lot of frustration about this online. I'm really ignorant on the issue of how content moderation actually works. I guess I'm wondering if you could shed some light on that and, you know, how you think we're doing on content moderation, if there's a better way to do it.

Jillian:
So every platform has two documents that are public facing. One is the terms of service, which is usually around your legal rights. And then there's the community guidelines or community standards, which is typically an ever-changing document, um, that, you know, users have to abide by. Traditionally you would have to flag a piece of content, so report it to the platform, and then that piece of content would go into a queue. Then you've got the people who actually, um, enforce the policies, which are often content moderators around the world, and sometimes in really religious countries like the Philippines. So that's an interesting part of this too. There's a documentary that I contributed to that came out last year around this, called The Cleaners, and it looks at some of the cultural values that the content moderators bring into the game. So they would look at that piece of content, and then there's another set of documents, usually a set of internal rule books, that they would have to look at to decide whether the content gets deleted or ignored, so whether it stays up or it goes down. Now I think if we look at a different model, like Reddit, Reddit is more based around the actual community. So, you know, each subreddit is, um, managed by moderators who are volunteers, and they decide often what's okay and what's not. I mean, I think there may be some centralized rules at this point, but those are usually around, you know, what Reddit would consider really harmful things rather than, you know, nudity, stuff like that.

Laci:
The Reddit model is interesting. I've always really liked it. It felt more fair. But I guess it just depends on what subreddit you're on. Do you think there's some merit to that model?

Jillian:
Oh, absolutely. I mean, I think, I think there's a real problem with the idea of centralizing the system. I mean, it drives me nuts that Facebook calls it community policing and that Mark Zuckerberg is always referring to two point something billion people as a community. Um, Facebook is not a community. It's, you know, billions of different people in, you know, tons of micro communities, let's say. And I've always thought that it would be better if, you know, Facebook groups, for example, could police themselves, or if I could police my own page. I mean, sure, you might have certain things that are centralized and completely against the rules, such as, um, of course child sexual abuse imagery, which is against the law. But you know, a platform might also make the decision that terrorism, and they'd of course have to define that, is against the rules.

But for all of the other stuff, for the stuff that, you know, qualifies as hate speech these days, which, I mean, really no legal definition of hate speech anywhere in the world would cover some of the stuff that Facebook takes down. Um, I think that sort of thing is better adjudicated amongst a community of people. Uh, and they can decide to be as restrictive as they want. You know, I live in Berlin, in Germany, and one of the things that's really common there is bars will put up signs that say, you know, no racism, no homophobia, whatever. And so you know that as you're walking into that bar, and you know that those are the rules, and if you don't like it, you can go to a bar where those things are okay. Um, but right now we don't have that kind of choice on these platforms, and we've got a bunch of frankly rather unqualified people making and adjudicating the rules.

Laci:
If an alternative is for people to go into smaller communities then and to kind of, you know, decide their own rules, is there any issue with the community that say...allows pretty much anything or a community that is a little bit ban happy? Are those sorts of things admissible because, well, if you don't like it, there's always another micro-community that you can join?

Jillian:
Well, it's funny, cause if you caught me a few years ago, I would've said, you know, more of that, like let's just have a total diversity of platforms and let's allow anything. But I think the trouble comes with, um, some things like, so 8chan as an example here, right? Like, I think the trouble comes when you allow people to post the livestreaming of a murder. Um, that's the sort of thing where I think, you know, how do we handle that? Is there any merit to that ever being allowed up, and does it inspire more violence? But let's take that extreme out of the equation for a minute, cause I do think that that's an edge case in the broader topic. Um, so outside of that sort of thing, I do think that there's definitely merit to unmoderated platforms and to more heavily restricted platforms where everybody consents and agrees to that restriction.

So a really good example of this is Ravelry, the knitting site that bans any talk about Trump, I think maybe even any talk about politics. I mean, that's absolutely fine. They want to talk about knitting and maybe, you know, occasionally throw in some other light conversation; that's completely within their right. And on the flip side of that, I think it's absolutely fine to have a platform like, I dunno, Gab, where conservatives gather. Um, those things are fine and normal. I do think that there's some risk, if those are the only platforms that you're participating in, of the sort of filter bubble concept, echo chambers, the siloing of opinions. But I think realistically those aren't the only platforms that people are participating in. If you're on Ravelry, you're probably also on something else. Um, if you're on Gab, you're probably also on something else too.

Laci:

If you have people who maybe have some sort of hateful or violent opinions all coming together in one space where they're kind of quarantined off from the rest of the internet, um, do you, do you think that that could be dangerous in some way? Maybe by insulating people it encourages them to radicalize and maybe makes them more likely to act out in the real world, offline?

Jillian:
Definitely. I mean, I do think that there's a huge risk to that. So let's talk about ISIS, for example. Um, so ISIS, I think, is a group that everyone in the world can pretty much agree shouldn't exist, right? And yet a lot of that organizing, a lot of the recruiting, a lot of the discussions have happened on major platforms, particularly Facebook. Um, and so I think that, you know, having a group like that that's pushed off of Facebook and into a smaller, more private community (I mean, don't get me wrong, I'm not saying Facebook shouldn't ban them), but pushing them off into a community, let's say, on the literal or figurative dark web, um, could absolutely result in more real world violence. And I think that that's something that we need a lot more research on, um, a lot more empirical data, because a lot of the times when we're creating policies around stuff like this, um, people are just sort of going with gut reactions rather than actual data. But that said, we do know that those types of communities can engender real world violence. And I think that that's something that we have to think about.

Laci:
Do you think that AI can play any role in that? Is that a positive addition to the content moderation equation?

Jillian:
Right now, AI is good at just a couple of things. Um, one, it's, it's obviously very good at identifying child sexual abuse imagery. Um, we don't want human moderators to have to look at that sort of stuff. We know that it's traumatizing even just to see a couple of those images. So that's one good use of AI. Another way in which AI is being used, and this for me is a little bit more controversial, but um, but it's quite clear: a lot of the platforms use AI to identify terrorist imagery and video. But beyond that, you know, I would say that AI is really terrible when it comes to text and sentiment analysis, when we're using AI to try to determine the meaning behind what someone is saying. Um, there's this project that came out a couple of years ago called Perspective API. Have you heard of it?

Laci:
I have not. No.

Jillian:
Okay. So it's, um, it's a project of Jigsaw, which is a branch of Google actually, or a branch of Alphabet. Um, and the idea behind it is that news websites or platforms could utilize it to detect the toxicity of comments. And you can't see me right now, but I'm doing big finger quotes around "toxicity", because what happens is, when I go and type things into it, anytime that I put like the word fuck into it, it will determine that that is a very toxic comment, even if I'm saying, wow, you know, the fucking Red Sox did great. Um, but if I say something like "men are superior to women," the level is much, much lower. Um, and so, you know, I mean, one of the wealthiest companies in the world working on AI can't even get this basic thing right. Um, and I think one of the things that we have to remember is it's not the technology that's the problem. It's often the people who are building it, the people who are putting their own values into the system. Um, and so that's really what it comes down to: are we capable of building a technology that could actually, um, look at the sentiment of comments and get this right? And I kinda don't think so, because values are different across different societies.
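For anyone curious to reproduce the experiment Jillian describes, here is a rough, illustrative sketch of scoring two comments with Perspective API. It assumes the publicly documented commentanalyzer endpoint, an API key of your own from a Google Cloud project, and the TOXICITY attribute; double-check the current Perspective API docs, since the request shape may have changed.

import requests

API_KEY = "YOUR_API_KEY"  # placeholder: create a key in your own Google Cloud project
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       "comments:analyze?key=" + API_KEY)

def toxicity(text):
    # Ask Perspective for a TOXICITY score between 0 and 1.
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=body, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# The two kinds of comments Jillian contrasts on the show:
for comment in ["The fucking Red Sox did great.",
                "Men are superior to women."]:
    print(comment, "->", round(toxicity(comment), 2))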

Laci:
Yeah, it's the values. And then there's also the question of how do you systematically evaluate tone and context? It seems like asking a machine to understand the complexities of a human exchange is really... I don't know. I don't know if I'd trust that or believe that that's a possibility.

Jillian:
No, I, I agree. And I mean, I think that, you know, even if it is a possibility, is that really the society that I want? And I can honestly say no, it isn't. I mean, if we look at a really basic example, the word dyke. Um, dyke is a slur that lesbians took back over the years, as you well know. Um, and it's a word that Facebook has a really hard time with, because they want to take it down when it's being used as a slur, but doing that also results in them taking it down when women are using it to talk about their own communities.

Laci:
Is that your main concern? That it, it can't distinguish.

Jillian:
Yeah. I mean, part of it's that it can't distinguish, and part of it's also just that, I don't know, maybe I have no right to say this, but that's the sort of thing where I'd much rather just use the big old block button to get rid of that person than have a centralized platform decide that they're not allowed to speak anymore. I mean, do I think that people should use the word dyke as a slur? Absolutely not. Um, but I'm, you know, I'm not sure that that's our biggest problem. And I think one of the things that we've seen with these quick and easy solutions to speech we don't like is that the more of these things we put in place, the more complex it gets. And so when we think about just the example of women's breasts (this one really drives me nuts, of course, cause I think it's discriminatory), that's something where I'm like, is this really something that you want to put this many resources behind? Or should we be focused on the things that actually drive violence in societies, and real physical harm?

Laci:
Yeah, I'm with you 100%. I feel the same way. Um, that's a little bit of a, you know, controversial opinion that we have, I think: that you should just press the block button and not have these companies, uh, enact these sweeping policies that could end up coming back to hurt us. It actually brings me to one of the situations that I wanted to discuss with you. This was kind of an explosive one on YouTube; I'm sure you saw it online. Um, so one of the conversations that goes around and around and around in the YouTube sphere is what the line should be, um, and how we should handle this stuff. Earlier this year, there was a prominent right wing YouTuber, uh, Steven Crowder. Have you heard of him? Uh, yes. So he was accused of harassing a gay blogger.

Um, Carlos Maza. And Maza made this video of all the times, you know, Crowder had talked about Maza, and it was really jarring. He's calling him these names, he's making fun of his lisp. And in response to this, there were, I don't know, probably tens of thousands of people on Twitter and YouTube calling for YouTube to do the right thing, um, what they were describing as the right thing here being either demonetizing Crowder's channel or just outright removing him from YouTube. And, uh, YouTube decided not to. They stood by the decision to keep Crowder's channel up, said it was a very difficult decision, but that it would put them into tricky territory to start, uh, you know, deciding, kind of getting in the middle of some of these issues. Do you think that YouTube made the right decision here and handled this properly?

Jillian:
Well, so I think it's really hard to talk about this in a vacuum. And I, and I was watching a lot of the debate on Twitter about it, and I think a lot of it was sort of, people were really trying to attempt to talk about it in a vacuum, right? There is context here, and that's important. I think a lot of the people who were demanding that that video come down were doing so by trying to hold YouTube accountable to its own stated policies. I guess what I would say is that, yeah, in context I think they should have taken it down. But that said, you know, I think we can ask questions about whether that speech should be against the rules, whether it matters who the speaker is and how much power they have. Um, and that's obviously a conversation that's coming up around Facebook, Facebook's new policy about politicians.

Laci:
Can you elaborate on that? Cause also, I should note, it wasn't just one video. Like, I think part of the problem is that he made so many videos attacking the same person and attacking his sexuality specifically.

Jillian:
Yeah. Yeah. And I think, you know, this is where the division between hate speech and harassment comes into play. Um, I think that there's a lot of conflation of those two concepts. Um, I think that the multitude of videos and the amount of content that he was directing at Maza, you know, kind of makes that harassment.

Laci:
So online harassment is a pattern, and they've said they're against harassment, right? Which, you're right, I think a lot of people were just kind of confused about. I also saw a lot of people who were just calling for it to be taken down because it's nasty and it's ugly, and this is right on the tail of Pride Month, and it's like, how can you claim to be, you know, an LGBT positive platform and then also allow this sort of thing to exist? But at the same time, you know, a large part of my brain is like, what are the implications of taking something like that down?

Jillian:
Exactly. And this is what worries me. So, I identify as queer, and one of the things that I think about all the time is, if we're going to ban that, does that mean that I also can't get on YouTube and rail about religious conservatives? And so I think that we do have to consider the way that these rules, once put into place, tend to proliferate across different groups and tend to affect different groups in different ways. So I think a good corollary to this is when it comes to terrorism and violent extremism: because they contain violence, and because they contain what might be terrorist groups, attempts to document state violence have been removed from YouTube as well. And that's really troublesome, because we've actually seen YouTube videos being used in UN war crimes tribunals at this point. Um, and so when these platforms erase this content, they erase the evidence that a crime happened, exactly. And so, I mean, it's not a perfect corollary to the issue of harassment and hate speech, but I think that it's easy to see how, once a rule is put into place, it can be used against different groups in all sorts of different ways.

Laci:
So Twitter is an interesting one. I think I had read one of your articles (or maybe it was not you, but someone you were quoting) that referred to Twitter as the free speech wing of the free speech party. It's maybe the most permissive forum in terms of, like, the big social media platforms. Um, and I think part of why it's garnered this reputation is, A, the policies you're talking about, but also, um, Jack Dorsey, you know, he's been in the media a lot because he has refused to ban far right content and refused to ban some neo-Nazi content. Now, I do think people throw that label around too much. You know, it's kind of like a hammer that you throw around against people you're disagreeing with, which has been happening for a long time. Um, but for actual neo-Nazi content, where people are using Twitter to share their social views, to try to recruit people to those belief systems, um, is Twitter doing it right by allowing pretty much, uh, you know... they've got this anything-goes sort of policy.

Jillian:
Well, so let me, let me break one myth real quick: Twitter does not have an anything-goes policy. And if you look at their, um... here, I'm going to pull up an article from 2018 that says Twitter has suspended nearly 1.2 million terrorist accounts since 2015. So that was in a three year period. Um,

Laci:
Okay. Wow. Did not know that.

Jillian:
Yeah. So there's no transparency around how they define terrorism, first of all. Um, we think that it's probably based on the U.S. list of designated terrorist organizations. Well, if we're going to look at it that way, I think, you know, they are not, and really have not been for a long time, the free speech wing of the free speech party. Okay. Yeah. But that said, let's look at this problem, for example, around Nazi content and neo-Nazi content. I mean, I think the first thing that they got wrong was verifying certain people. Um, you know, I know that they say that verification is just a method of identifying somebody as who they are, but the way that they revoked it is absurd. Using verification as a weapon to wield against people is also sort of ridiculous, because, you know, just taking away a blue check mark doesn't make Richard Spencer any less Richard Spencer.

Right. He's still exactly the same person. Um, and so wielding that as punishment is sort of a bizarre choice on their part. And then I think the next thing is that, you know, this is where it gets complicated, because unlike a lot of folks, I don't think that white supremacists should be taken down entirely just because they identify as white supremacists. Rather, I think that the smarter thing to do would be to look at the precise speech itself. Um, and I'm not saying that, you know, taking down tweets is the right answer. I think that that's actually been sort of a problematic decision, uh, in a lot of cases, including around, uh, issues around SESTA-FOSTA. But I do think that we should be looking at the speech, not the speaker, uh, for starters at least.

Laci:
Hmm. So let's parse that out a little bit. Um, in terms of taking the tweets down, you're describing it as problematic. Can you lay out why you think that's the case?

Jillian:
I want to know what's going on. I want to know who's who. And so if, you know, you've got Richard Spencer on Twitter, but only his innocuous tweets are left up, um, are people as easily able to identify who he is? And I know that that might sound silly, of course, because everybody knows who he is. But I'll give you an example, a real life example. A few years ago I was in Budapest and I saw a sign for, I think, like an army-navy store, um, U.S. stuff, right? I love those kinds of stores, and it had a Confederate flag, and I thought, whoa, that's kind of weird, but I don't know, whatever, maybe they don't really know what it means. Well, it turns out they really do know what it means. And because the swastika is banned in Hungary, the Confederate flag has become a stand-in for neo-Nazis.


Laci:
Yeah. So even when you tried to censor people or take away the symbols, they'll find new symbols and they'll find new platforms.

Jillian:
Yes. Yeah. And then I think the third reason is of course the Streisand effect, which most people are probably familiar with at this point. But if not, it's the idea that when you censor speech, it then proliferates times infinity, or, you know, whatever. And that is something that we've absolutely seen with some of these takedowns. I mean, I think, you know, it's interesting, because I think that there are valuable arguments to be made in favor of deplatforming. I think that the downfall of Milo Yiannopoulos is one of those examples where, like, as much as I'm not sure about whether he should have been removed from every platform, I'm also really thrilled to see the downfall of his career, um, on a personal level.

But I do think we need a lot more research around this. What are the effects of deplatforming? Does it result in more of that speech? Or can it also, you know, result in a better world? And I'm not sure that we have the research, and I think a lot of times we're making policy based on gut feeling.

 

Laci:

Yeah. It makes me uncomfortable myself, because while I don't think he should be allowed to, um, harass people, which I think is why he was originally banned from Twitter... you know, I look at some of his YouTube videos or some of the things he's posting on Facebook, and while I vehemently disagree with it, the fact that he was able to effectively kind of be un-personed online... uh, as much as I feel the same way that you do, it's like, yeah, I don't want you in my space and I don't want you on my timeline.

There's something really unsettling and worrying about it as well: that these platforms can just sort of decide that you just can't exist online anymore.

Jillian:
Yeah, no, absolutely. And I've got two thoughts on this. One is a corollary and the other one is a potential solution. So I think in terms of the corollary, let's go back to SESTA-FOSTA for a second, because I've talked to a lot of sex workers who are really concerned, and, you know, I don't think this is a conspiracy at all. I think it's very likely that platforms are sharing information about them. And so, you know, they'll get banned from, say, Patreon, and then the next day they'll get banned from other sites similar to Patreon, or Facebook, and then YouTube. Um, and I think that that's really worrying, whether what they're engaged in is legal or illegal.

It's the same sort of deplatforming, um, that we've seen with Milo, and it's the same sort of across-the-board "we're not going to give you any place to engage in speech." Uh, Twitter remains, for a lot of them, the last place that they can exist online, or at least on a centralized platform. And so that's the corollary that concerns me. And again, I'm sure that there are going to be people listening who are like, oh, but that's totally different, Milo and sex workers are not the same thing. But I think what you have to remember is that the platforms don't think the way a lot of us think. The platforms see these as problems to be solved with the same solutions. And so they're using the same boxes to deal with different kinds of speech. And so when you apply a solution to hate speech, that same solution often gets applied to, like, sexual content.

Then as for the potential solution: you know, I think one of the things that we haven't talked enough about, and there's a couple of interesting organizations doing work around this, is demonetization. But not the kind of demonetization that we've seen on YouTube, where it's YouTube deciding who can or cannot make money, but rather the right of advertisers to be able to choose, on a granular level, where they want their ads to sit. And that's an idea that I very much like, that I've been thinking about a lot lately. I'm not 100% sure where I come down, and I'm definitely speaking for myself on this one, cause I'm not sure EFF has a position on this yet either. But I think that, you know, if I were a company and I were deciding where I wanted to put my ads, I would absolutely want to keep them away from white supremacists, racists, homophobes, et cetera.

Um, and I think that it should be within the advertiser's right to do that. I mean, it's their speech too, right?

Laci:

That's kind of a market solution, huh?

 

Jillian:
Yes. Yeah. And I think a lot of times market solutions... um, you know... oh God, I'm gonna get some flak for this one, but I think when it comes to speech, a lot of times market solutions are really the best option that we have. And you know, I mean, I haven't said this, but it's important to me to note that I don't come from a perspective on free speech that's like, oh, you know, I don't like what you have to say, but I'll fight to the death for your right to say it. Like, I will absolutely not fight to the death for Nazi speech. But what I do believe is that authority really can't be trusted on these things, and that we've never seen a form of censorship, at least not censorship by itself, that solves a societal problem. And so just slapping a bandaid on it isn't going to fix it. Just taking down the speech isn't going to solve the underlying hate. Um, we have to deal with society on a totally different level. The internet just reflects the world, you know? And that's what's disheartening about it, and I think that's the temptation of a lot of people, to want to jump to these kinds of rash conclusions. Yeah. And you know, it's also, we have to remember that with the companies in particular, um, this is all about money. And so, you know, I think, like, we haven't talked about misinformation or disinformation, um, but that's an area where I'm not sure that companies have the same interests that we as a society have. I think that disinformation can be profitable, and that really worries me, that we're looking to them for solutions on something that they don't really have a vested interest in solving.

And then on the other hand, governments... um, it really depends on which government we're talking about, right? I think there's a lot that we can learn from history. There's a lot we can learn from, um, the rise of authoritarianism around the world. I think that we should instead be looking toward better architecture, better, um, tools for users to be able to block people, to be able to manage their communities as they see fit. Um, and we should also be looking to build new platforms. Uh, you know, it doesn't have to be Facebook. Facebook is not the internet, right? Or YouTube, for that matter.

Laci:
When it comes to entities that are really trying to unite people all over the world to solve these problems... I mean, that's why I was initially, uh, looking into the EFF and reading your work. It seems like the EFF is trying to do that. Yeah. Yes.

Jillian:
Yeah. One of the things that we've been doing is convening, um, different groups from all over the world who are working on questions of content moderation. So a couple of years ago, in February 2018, um, we worked with some different groups to create a set of principles called the Santa Clara Principles on Transparency and Accountability in Content Moderation. And yes, they're online. You write about this and I love it. Yeah, they're great. They're really basic. Um, and so there's more work to be done, of course. But one of the things that we did was we managed to get more than a hundred different organizations, mostly free speech organizations, but some were other types of groups, together to sign a letter to Facebook, urging them to implement these principles. And they actually responded by improving in a couple of areas. They're not 100% there yet, but they did enact a couple of the things that we asked them to, including a full-scale appeals process, uh, so that people, when their content is taken down, have the right to due process.


Laci:
Uh huh. That's amazing. Um, for people who are not necessarily going to join an activist group, how can you know the folks listening here who care about this stuff get involved or take action? What can, what can we lazy people, do?

Jillian:
The term keyboard warrior gets a lot of flak, but I do think that speaking up about these issues is really important. These companies are increasingly looking to the public on some of these decisions. Twitter did a public comment period, um, when they were creating their dehumanization policy. And actually the groups that I'm working with, um, around the transparency initiatives, they're also looking to reach out to the public. And so I do think that it's possible to have a voice. People need to speak up for free speech. I worry that the entire concept is starting to disappear from our public consciousness because we're chipping away at it bit by bit. One of the things that I think about so much is just, when we're demanding that companies or governments or whoever the authority may be, when we're demanding that they crack down on more speech, I think that we always have to consider how a certain policy will affect some of the most vulnerable or marginalized communities.

Laci:
On the next episode of Indirect Message...

"If men are incentivized to swipe right on, say 50 to 65% of the profiles that come across, women get overwhelmed. This creates an environment where people end up spending way more time swiping and leads to what researchers call extreme strategies."

We talk about the bizarre world of dating apps. This conversation blew my mind. I hope to see you guys there. I'll be back October 30th.
