How does counterspeech prevent online extremism?

Words By Kosta Lucas
Date Published May 31, 2021
Available on All Major Podcast Platforms

On this episode of Undesign, we discuss online extremism and counterspeech with Erin Saltman. Erin is an experienced researcher and practitioner who is the current Director of Programming for the Global Internet Forum to Counter Terrorism (GIFCT). She was also Facebook’s Head of Counterterrorism and Dangerous Organizations for Europe, Middle East and Africa (EMEA) prior to this role.

Your Host
Kosta Lucas

Head of Community Practice, DrawHistory

Guests
Erin Saltman

Director of Programming, Global Internet Forum to Counter Terrorism

Related website

gifct.org

Transcript: Introduction

 

KOSTA: Hello, everyone, and welcome to Undesign.

I’m your host, Kosta.

Thank you so much for joining me on this mammoth task to untangle the world’s wicked problems and redesign new futures.

I know firsthand that we all have so much we can bring to these big challenges, so see where you fit in the solution as we undesign the concept of online extremism and counterspeech.

It’s safe to say we have all been digital witnesses to some pretty troubling things in our social media feeds, even without looking for them. For all of the good that the internet seemed to promise at the beginning, the bad and the ugly are becoming all too easy to find without meaning to. How often have you scrolled past:

— comment sections on a news article devolving into a slur-slinging fight or a mutinous stir?

— an acquaintance or distant family friend sharing a piece of dodgy news or a meme that you know is just a cover for derogatory and hateful content?

— graphic footage — or even livestreams — of a terrorist attack?

And these are merely in addition to all of the concerning behavior we don’t see in our feeds… like how terrorist groups use social media and gaming platforms both for recruitment drives and operational purposes.

Not only does this pose a risk in the most obvious sense, but what about the risks to our psychological well-being that come from constantly being exposed to that?

So what do we do? Censor? Demonetise? Deplatform? Moderate? Fix the dreaded “algorithms”? What happens when we actually engage with what is being said, aka counterspeech?

Helping us understand this very serious challenge is today’s special guest, Dr. Erin Saltman.

Erin is currently the Director of Programming for the Global Internet Forum to Counter Terrorism (GIFCT), an NGO designed to prevent terrorists and violent extremists from exploiting digital platforms.

She is also formerly Facebook’s Head of Counterterrorism and Dangerous Organizations Policy for Europe, the Middle East and Africa, and has a plethora of experience working with multi-sector stakeholders in building out CVE (countering violent extremism) programs.

We really take this conversation back to basics by reestablishing what we mean by online extremism and counterspeech, and why we are even talking about it.

Erin then takes us through the findings of her multi-year research project with Facebook, which explored how people behave in relation to counterspeech. The findings might surprise you.

We also talk about the roles of governments and tech organizations in dealing with online extremism. To date, the responses by big tech and governments in combating online extremism could be described as a work in progress, and Erin sheds some extremely valuable insight into not only why that is, but also what she thinks is the best role for these big players.

However, and perhaps most importantly, we talk about the everyday user. What can and should we do to take power back? While there’s a lot that is out of our immediate control, Erin guides us through the multitude of things we can do to move out of our own echo chambers and what we can do in response to the hateful or extreme content in our midst.


Transcript: Conversation

 

KOSTA: Erin, how are you?

ERIN: Good morning from my side. I think it’s a good afternoon on your side.

KOSTA: It’s a good afternoon over here. Thank you so much for joining us for another episode of Undesign where we’re going to be talking about the big, scary world of online extremism and counterspeech. Big topic. A topic we both know pretty well, I think.

ERIN: Yeah, I think we picked the right day and the right time to be talking about this.

KOSTA: Absolutely. The first question I want to throw out there (just to keep us from nerding out straightaway — we can do that as we keep going) is, what is counterspeech to you? How would you define it to someone who is not necessarily in this area of work, and why are we even talking about it?

ERIN: I think if you go back maybe 10 or 15 years, counterspeech, counter-narratives, alternative narratives — these are all kind of talking about the same thing. These words really didn’t even exist in the public domain 10 or 15 years ago.

These came about along with the increased awareness of Islamist extremist recruitment — particularly as soon as it started touching on white Westerners joining a group like the so-called Islamic State, if we’re really honest. There had been some counter-extremism NGOs, but the idea of a counter-narrative and alternative narrative — that lingo is only more recent.

Now it’s not more recent if we really just look at it as targeted strategic communications. What counterspeech (if we use it as a catchall term) is, is any effort, particularly online, to undermine, redirect, challenge or provide an alternative to hate-based extremist narratives.

I want to say “hate-based extremist”, because if we just say extremism, there’s really no confines of what that means. What is it to be extreme? A lot of times, that’s branded as a positive. There’s extreme peaceniks out there and we want to, probably, promote them.

KOSTA: That’s right.

ERIN: So I think that we want to see it. We don’t want to give it too much form in the definition as well, because as soon as you go online, counterspeech could be a video, a trending meme, a grouping around a hashtag, a mobilization or coordination around an event… Counterspeech can take on a lot of different forms.

KOSTA: It’s interesting that you mentioned hate-based extremism before, because like you said, being extreme is not in and of itself a morally bad thing or illegal or anything like that. So my question there — and at the risk of drawing a bit of a binary here — is, how do you define hate?

ERIN: It’s not illegal to be hateful of something — to hate pizza or particularly pineapple on pizza. You can hate many things.

KOSTA: That’s a big debate!

ERIN: It’s a huge debate. I’m a no-pineapple person. I’m sorry if you’re pro-pineapple.

KOSTA: I’m actually indifferent, which is a weird position. I’m so much of a fence-sitter, I know. Sorry. Anyway, back to hatred, pineapple and otherwise…

ERIN: We’re really talking about xenophobic, hate-based extremism. We’re really looking at things like the United Nations’ protected categories of people.

If you are hating someone, not because of their dislike of certain pizza types, but hating someone based on race, religion, gender, gender identity, nationality — we’re looking at those protected categories of identity. If you are hateful and spreading an ideology of hate and/or incitement or dehumanization because of those qualities (which nobody can choose for themselves, these are qualities you’re born with), then that’s the sort of hate-based extremism we’re looking to undermine.

KOSTA: It’s interesting because the through line between hatred and the actual committing of violence is not necessarily so straightforward.

Recent events have really brought that to light. We saw the storming of Capitol Hill. We’ve seen various, very unfortunate mass shootings where the role of social media and the internet has been implicated in causing it or being related to it in some way.

From your purview and your experience to date, how would you describe the role of the internet in people’s decisions to take up radical violence or extreme violence of this nature? Because it’s the easiest boogeyman to blame, but the reality is often so much more complex. So how would you describe that to people — where the internet actually fits into whether people do these things or not?

 

ERIN: I think this is one of the hot topics of the day: how much is it the fault of the internet that people radicalize in the first place?

Obviously I work for the Global Internet Forum to Counter Terrorism. I previously spent four years as the Head of Dangerous Organizations Policy at Facebook for EMEA (Europe, Middle East and Africa). And before that, I spent time in a couple of different think tanks, NGOs, specifically as a practitioner looking at strategically deploying counterspeech.

What I found, what the research shows — just going to data, because there’s a lot of opinions and we want to try to stick to some facts — is that I’ve seen little to no real cases of auto-radicalization where, I always joke, you go online shopping for shoes and accidentally become a jihadist. Or, oh look, I was just trying to buy something on Amazon and now I’m a white supremacist. I don’t see that.

But on social media, we create self-selective echo chambers. We do by default. Everyone has the things they like, the things they don’t. You can go down certain rabbit holes. You can, depending on the nature of how you’re engaging, come across people that are proactively, even potentially recruiting based on your identity and what they see in that.

We do see that the online processes can be a catalyst for radicalization and can be a facilitator. And that should not be a surprise, because all of us use the online tools to do very specific things — to communicate quickly, globally, cheaply, and coordinate a bunch of logistics around our lives. So you can imagine that violent extremist groups use those platforms for all the same reasons, just for violent extremist purposes.

KOSTA: It’s interesting, because I guess social media companies (whether Facebook, Twitter, or whatever), they are designed for a specific purpose and they are being used to their fullest extent.

One thing that we realize, if you look at the history of terrorism and any sort of violent resistance movement, is that technology has played a huge part in how their methods evolve. Even with the printing press, as that big technological advancement, we were able to disseminate things much more widely than we could before. But it doesn’t necessarily mean the material is changing people’s minds.

Is it fair to say that you still have to be looking for this stuff to be affected by it, or is it quite common? What’s your view on how easy it is to find and be exposed to radicalizing “extremist content”?

ERIN: I think that this is where we start getting into some controversial gray areas, because a lot of extremism blends increasingly into parts of mainstream politics, parts of mainstream identity cultures.

There’s a large gray area of extremist content that starts to verge on what might be considered hate speech, but doesn’t quite break any policy line. It’s controversial, but it’s not illegal. It would be very much protected speech, but you can see that it’s going in the direction of — if we talk about social identity theory, self and other — the othering of certain groups in a negative light.

This is the prime space where it might be hard for a tech company to build a policy to remove that. Human rights would say that’s a vast overreach, over-censorship, but that’s the prime spot for counterspeech. That is the prime area where you want to engage in that space in a strategic way.

Now I also want to debunk some myths, if that’s okay, around who can be an extremist.

KOSTA: The floor is yours.

ERIN: I just want to point out that we are all susceptible to forms of extremism because of our own confirmation biases. We all have extremely strong confirmation biases. If we hear a piece of news that we would naturally already agree with, we’re going to agree with it and probably not question the facts behind it. If we hear a piece of news that completely 180 goes against our preconceived notions, that’s when we question its source.

You also have to think of, if somebody is already down an extremist loop — whether that’s a QAnon theory or whether that’s an Islamist extremist narrative around vaccine denial; we see a lot of the extreme groups right now are all about vaccine denial — then we can see that if you’re going to try to shove a counter-narrative at them that completely opposes their opinion, it’s not going to land, because it goes completely against their confirmation bias.

Even if you want to approach the counter-extremism space, you need to know, who is the credible voice that they will even allow into their space; what is the credible message that maybe finds them at a common ground, and then leave space to pivot from that common ground; and then what’s the right platform to reach them on? You might make a really sexy campaign on one platform, but all the target audience that you’re trying to reach is on another platform… Then you just spent a lot of time in the wrong place.

 

KOSTA: Right. As any sort of guidance note in counter-narrative development or counterspeech development, those golden elements of credibility and authenticity are pretty key in terms of whether something lands, provided that you’re in the right place, as you say.

Which makes me then circle back to this idea of the role of counterspeech in the online space. We think about a lot of intervention types to try and make the internet and social media a safer place to be, like takedowns, moderation…

What do we know about what effect counterspeech actually has on receptive or not very receptive audiences? Does it actually work? I feel like there’s a lot of cynicism towards that.

ERIN: Right, there’s a huge amount of cynicism.

KOSTA: Can you speak to that a little bit more? Why are people so down on counterspeech, do you think?

ERIN: Not everyone is down on counterspeech.

KOSTA: Of course, of course.

ERIN: It really depends on the country and culture you’re living in, on what counterspeech means or how it’s been deployed correctly or incorrectly, targeting your community or not your community. There’s some legacy work to remedy there.

On the one hand, to a previous point you made, tech companies still need robust policies to remove violating content. As soon as it’s no longer a gray area, and it is hate speech or it is incitement, those policies need to be in place to be able to remove that according to terms of service.

The big companies — when you look at their transparency reports, they’re removing millions of pieces of content around these themes every quarter or every half [year]. It’s quite robust. But it’s never robust enough.

There’s always the stuff that gets through the gaps. But then you have to supplement that with the counterspeech, because you can’t censor your way out of a problem. Removing content is targeting a symptom, not a cause.

Censorship takes the symptomatic hate speech, the symptomatic lines of hate, off the platform. It doesn’t get at that spiritual core, which is, why is somebody feeling that this is their voice, that this is their truth, that this is their ideology? Counterspeech is the only way to try to get at that.

There are some theories that have gone out, and some research, and some of it’s inconclusive previously about, does [counterspeech] work? Can you actually measure behavioral change or sentiment analysis?

Thankfully, I was able to be a part of a really robust three and a half years of testing two different methodologies around counterspeech deployment — taking full advantage of my role at Facebook at the time and working with some incredible data scientists and engineers.

Even to approach that topic, step one was realizing that, for example, Facebook was not the credible voice. Facebook will never be the credible voice. If Facebook sends you a little bot or message and says, “Don’t be an extremist,” that’s not going to be the voice that turns you away from a hate-based ideology.

Nor is government, usually. I’ve worked with very sympathetic governments that really want to put their voice out there in the right place, and it’s good. But again, they’re probably not going to be the most credible voice with particularly vulnerable communities that might not have that trust layer to begin with.

So, just even to try to start a partnership with Facebook at scale to deploy counterspeech in a methodological framework, we had to partner with localized NGOs that were creating really credible, emotional, and compelling content that we tested previously, that had worked online, that we knew was resonating with the right audiences.

The next question is, what are you actually trying to counter? And then the third question is, if you have that voice to counter, what are you trying to do with that audience?

So if I say, “Don’t be an extremist” and it actually resonates with you at all, then what? That’s the big question. I can try to undermine your current thinking, which is already a pretty precarious position to be in, but then what do I replace it with? Or am I going to just leave you in this spiritual void, which many authors have written about? You can undermine extremism, and then you leave someone in a very vulnerable place for backsliding, because you’ve left them with… All the things of why they joined a violent extremist group are actually usually very positive.

People want to think that others joined violent extremist groups, one, because they’re crazy, which is not the case. We have very normative frameworks. If you’re a lone actor carrying out violence, there are much higher likelihoods of mental health issues at play. If you’re a group joiner, [there are] really normative frameworks behind the scenes of group joiners. It’s not, “Hey, you’re crazy.” That would be an over-simplification.

The second is usually, “Oh, you joined because you were economically disadvantaged.” So it was a “poor you” situation. When we look at recruitment of foreign terrorist fighters, when we look at a lot of the leadership around certain violent extremist groups, whether we’re talking about neo-Nazi and white supremacy groups, or Islamist extremist groups, or Buddhist extremist groups, they’re educated, they come from a hugely mixed background, age ranges… When I did work at the Institute for Strategic Dialogue, a great researcher, Melanie Smith, and I looked at just Western females that were joining ISIS, and that age range was from like, 13 to 45, and education from very little all the way up to PhDs, actual medical doctors joining.

So that we need to debunk as well. We have to realize we’re communicating with humans, and that they joined for reasons like finding an amazingly supportive social network, finding what they believe to be a social cause to help the world. They think they’re changing the world for the better. Those are the right qualities for a peace-builder usually, but they’ve been lured in by the violent extremist narrative. That can be very compelling, and that’s what we want to try to structurally undermine and redirect.

KOSTA: What just came to mind there, as you were saying that, is this idea of these values that are very pro-social, right? They’re things that arguably we would all be motivated by in our own lives — for connection, for purpose.

But to then hold those beliefs and potentially support or do something really violent, which is in direct violation of those values, what do you think is the thinking there? And maybe this is a bit beyond the scope of the online space, but it can also be a bit of an insight into what it actually offers people to hold those two contradictory thoughts at once.

How do you go from having such positive values to doing something quite destructive in that way? Is that anything you have any thoughts on?

ERIN: This is a great question that I’m pleased to say many an author has grappled with before me, including very well-known individuals like Professor Bruce Hoffman and others. A lot of this questioning we should not think of as new. A lot of people questioned this, for example, after World War II, when you saw many normal societal individuals join the Nazi Party and carry out extremely abhorrent acts against their fellow man.

From a lot of the research I’ve seen, I break it down into three things that are very necessary to turn someone from somewhat extreme into a violent agent.

First, you really have to solidify that self-other dynamic. What is my in-group? Who am I and my group, and then who is the outgroup? Who is the enemy? You see in violent extremist language, whatever the ideology behind it, this self and other — whether it’s an incel movement and it’s men versus the Chads and Stacys, this can take many forms. But that self-othering, where that other is very crystallized and more defined than the self — your in-group is actually less defined than your out-group, but you find common ground in who you’re defining as the other.

And then the secondary layer on that is the dehumanization process. You really need to dehumanize somebody to justify violence against them or against that group. In all these violent extremist groups, you start seeing that language — if you look at the Rwandan genocide, comparing people to cockroaches and rats; if you look at anti-Semitic comments, comparing people to certain other dehumanized tropes; or anti-Muslim bigotry that uses animal terminology to try to dehumanize people. When you dehumanize a group, you don’t mind killing a cockroach. You don’t mind putting out a rat trap. And so that’s part of the psychological breakdown as well.

Then the third point that you see that is quite crucial is the in-group using militarized language — calling each other things like soldier, comrade, taking on these rule-of-law roles, because they don’t trust the situation with the actual rule of law. They don’t trust the system to carry it out. So they start taking on militaristic language and militaristic tropes for themselves.

I think that self and other, plus dehumanization crystallizing, and the self-proclaimed military role — that’s when you start seeing that justification of violence.

KOSTA: It’s such a disturbing set of constructions, right? Again, when you dive into the stories of how people go down this road and where the intentions start, you’re like, “Wow, this actually started off in a much more innocent place.”

It’s clear to me, hearing you say those things, that there are certain emotional needs that those sorts of constructions fulfill, right? Maybe in dehumanizing others or in a stronger group identification there — that’s playing on fear and safety and things like that, or trying to further protect one’s own family or one’s own group. Those are things, again, that apply to us at any moment in our lives to varying degrees.

I’ve always said to people — oh yeah, sorry. You go first.

ERIN: I think something you raised really made me reflect on the fact that what we’re also seeing, if you kind of boil down why extremist narratives succeed — we live in such a confusing, nuanced, complicated global world, and if somebody comes along and tells you, “Actually it’s not complicated. It’s because of these one or two things. That’s why the world around you is in mayhem.”

If you can take a complicated worldview and boil it down into really binary, easy to digest, “good and evil” tropes, it’s a lot easier for our brain to comprehend as well. It’s a lot easier for us to say, “Oh my God, you’re right!”

When you see things like an immigration or a refugee crisis and you are struggling with your job, it’s a lot easier to have somebody come and say, “Oh, well, it’s a new world order, and here’s why these refugees are given better status than you, and you deserve more, and here’s why they are the enemy.”

All of a sudden, instead of actually having to understand really complicated global geopolitics and economics and human rights law and protection and safety and shelter and understanding conflicts in countries you’ve never been in, you were just allowed to have a trope that simplifies and clarifies the narrative. That’s actually very calming to parts of your brain.

KOSTA: Of course. Again, if we’re talking about people who by objective measures have high intelligence — if you’ve got PhD doctors, successful people in the traditional sense — there’s again another disconnect between the desire for a simple worldview, but also this ability to actually use their brain for very complex tasks and be very high-functioning. For me, that says that there’s this weird disconnect between the intellect and the emotions, that some people might feel.

When you are a bit smarter or “smarter” — I say that in quotation marks — perhaps these emotions can be easy to rationalize, if you’re feeling really strongly. I guess what I’m getting at here is there’s a really strong, emotional pull that the intellect can mold or fold itself around.

Do you think it’s fair to say, and again I’ve always had this feeling myself… There’s a lot of discussion around how successfully extremist groups actually market their propaganda. Do you think they just have an easier job?

ERIN: I think that marketing-wise, they do have an easier job because their messaging is simplified. It’s direct and it’s clear.

KOSTA: And the call to action is very clear.

ERIN: And the call to action is clear, and the enemy is clear. To counteract that by trying to say, “No, actually it’s really confusing and messy”, that’s not a very good marketing campaign. When you look at good counterspeech, it’s not about that.

There’s a difference between what we might call preventing violent extremism (PVE content) versus countering violent extremism (CVE content). The reason for that is, there’s a lot of content out there that is about building natural resiliency among communities, especially among young people, so that if they come across some of these extremist, hate-based narratives, they’re better prepared to throw them to one side or question the source.

When we talk about counter-extremism, we’re talking about trying to reach people that are already showing symptoms, in essence, of adhering to certain violent extremist ideological tropes. So they’re already starting to share some questionable content. They’re already starting to shut down other parts of their social network and be more involved in some of these more formal or decentralized hate-based groups.

Those are the groups that are the hardest to reach. And those tend to be the online communities that I’ve worked with — in my own research, and piggybacking off the brilliance of a lot of other amazing people — to try to deploy counterspeech more proactively.

The big golden question is this: tech companies have the technology, so they have the ability to upscale, optimize, and target things. Whereas most NGOs and practitioners and grassroots organizers, or former extremists (who are some of the most compassionate, active, and incredible practitioners I’ve ever worked with), are the credible voices. But they are not the tech companies.

The golden goose egg is combining the tech for what it’s good at (upscaling, optimizing, targeting) with the credibility and voices and local nuance and understanding of those activist efforts.

When I spent my sojourn at Facebook, that’s what my goal was — to develop some of those. And again, not Facebook-specific. Some of those models were begging and borrowing and stealing from the forefathers of things like the redirect method, which was built out by Jigsaw and was originally developed for Google search and YouTube. That’s understanding, “Okay, if you are searching proactively for very specific violent extremist–related terms, how can you redirect those search results to be counterspeech, to provide alternatives, to provide resources, to undermine the hate-based narrative?”
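[For readers who want the mechanics: here is a minimal sketch, in Python, of the kind of search redirection the Redirect Method describes. The flagged terms, resource names, and function are illustrative assumptions, not Jigsaw’s or Google’s actual term lists or code.]

```python
# Illustrative sketch only: the flagged terms and resources are hypothetical,
# not Jigsaw's or Google's actual implementation of the Redirect Method.

FLAGGED_TERMS = {
    "join the movement propaganda": ["Former-member testimonial video", "Exit-support resource page"],
    "white power music download": ["Debunking playlist", "Local outreach NGO page"],
}

def redirect_results(query: str, organic_results: list[str]) -> list[str]:
    """If a query matches a flagged term, surface counterspeech resources
    above the organic results; otherwise return the results unchanged."""
    counterspeech = []
    for term, resources in FLAGGED_TERMS.items():
        if term in query.lower():
            counterspeech.extend(resources)
    return counterspeech + organic_results
```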

Now Facebook’s a little different to Google search. You go on Google to search for websites, instruction, guidance. You go on Facebook and Instagram to search for people and groups; the way that people search tends to be for people and groups.

When we created our own methodology, we were able to partner with some very strategic NGOs. First, you have to decide, what language am I targeting in? What country am I looking at? What ideology am I trying to undermine? All of that has to be at the forefront. There’s no one counter-narrative to rule them all. It has to be very nuanced.

KOSTA: Of course, I wish.

ERIN: I know, it would be so easy!

KOSTA: It would be so easy.

ERIN: We had to do that with a lot of thought. For the A/B testing of the redirect method, we looked first at Islamist extremist narratives and tropes around people that had been sharing and engaging with known terrorist content. Not just extreme content, but labeled and verified terrorist content.

We looked at Arabic and English in the UK and Iraq, and partnered with NGOs that were producing localized, targeted content in those spaces — the Adyan Foundation in Lebanon, which had a lot of Iraq-specific content that they developed out; ConnectFutures here in the UK, where I’m based, that was doing a lot of local engagement and had done everything from things around the far right to Islamist extremism, and they had a lot of local practitioners. Then we worked with ICSVE with Dr. Anne Speckhard, who’s a social psychologist who had done a lot of interviews with quote-unquote “jihadists” that were in jail, having been put in jail after joining the so-called Islamic state.

We had both upstream and downstream, hard-hitting and softer content that we took from these NGOs. We used that to really deploy more strategic counterspeech.

The other thing that a lot of these campaigns (if you consider it just an online marketing campaign)… In order to measure behavioral change, you can’t just throw someone a one-off campaign and say, “Did that do enough?” So we did some work with some amazing Stanford research students that were doing a Masters to test rates and frequencies around counterspeech — how much and how often would I have to show you a message for it to resonate with you?

That’s never been tested before in this space. It has been tested in other marketing schemes, but marketing an idea and the rate and frequency of an idea around counter-extremism hadn’t really been tested. In their very contained environment, they found that if you focused and could ensure that someone saw about two to three pieces of content a day for five to seven days, you could in fact measure a change in sentiment.

KOSTA: Wow.

ERIN: They did before and after tests. For their testing, they actually looked at opinions on gay marriage rights in Australia. We didn’t dictate for them, so they pivoted and looked at that. We took that rate and frequency model, and that’s what we tested with.

We used something — and tell me if I’m getting too technical — we used something called a “quick promotion” instead of just normal ads. Because if I target you with ads, first of all, I can’t tell if the same person — I mean, you don’t have personally identifiable information, so I can’t tell that the same person is going to see something more than once. I can’t control any rate and frequency. It’s a scattered approach.

But with quick promotions, we could say, we only want to target people that have in fact shared and/or engaged with a known piece of terrorist content. Then we want to put them through this pipeline where they’d get two to three pieces of content a day for five to seven days.
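[As a rough, hypothetical sketch of that enrollment-and-scheduling logic — the types, field names, and content pool below are invented for illustration and are not Facebook’s actual quick-promotion tooling — using the two-to-three-pieces-a-day, five-to-seven-day band Erin describes:]

```python
import random
from dataclasses import dataclass

# Hypothetical sketch, not Facebook's actual system.

@dataclass
class User:
    user_id: str
    shared_known_terrorist_content: bool  # the hard eligibility signal described above

@dataclass
class Exposure:
    user_id: str
    day: int
    content_id: str

def enroll(users: list[User]) -> list[User]:
    """Only users who shared or engaged with verified terrorist content enter the pipeline."""
    return [u for u in users if u.shared_known_terrorist_content]

def schedule(users: list[User], counterspeech_pool: list[str],
             days: int = 7, per_day: int = 3) -> list[Exposure]:
    """Two to three pieces of counterspeech per day for five to seven days —
    the rate-and-frequency band the Stanford testing suggested."""
    plan = []
    for u in users:
        for day in range(1, days + 1):
            picks = random.sample(counterspeech_pool, k=min(per_day, len(counterspeech_pool)))
            for content_id in picks:
                plan.append(Exposure(u.user_id, day, content_id))
    return plan
```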

Again, a one-off violation doesn’t get you kicked off a platform; it gives you a warning. You have transparency that you’ve been warned about it, but you know, one-offs don’t get you kicked off.

That was really interesting because, in the aftermath, there were a couple of findings with that test model. And we did another test model around white supremacy and neo-Nazism, because I’m a masochist.

KOSTA: Yeah, the fun stuff. [laughs]

ERIN: The A/B testing showed us a few things that I think are really important.

One was that when we first looked at the statistical analysis — and I think something like 37,000 people went through the pipeline of this rate and frequency testing of counterspeech — the first important thing is, we saw no signs of increased radicalization from going through this process.

There are some people that rightfully question, “You’re dealing with vulnerable communities. Is counterspeech going to somehow further radicalize them?” If it’s not handled sensitively, you could actually push someone further into an extreme space.

It’s very good to point out that we took a lot of precautions around it, but there were no indications that somebody further or exponentially changed their rate or shared more bad content because of this exposure. So that’s important.

KOSTA: So was there like a net “do no harm”, at the very least?

ERIN: Yeah, we absolutely wanted to prove that “do no harm” was possible at the least.

KOSTA: That’s good.

ERIN: The second was that, statistically, initially it didn’t show any real significant change. But when we worked with a qualitative researcher, my coauthor Karly Vockery… On the quantitative side, I had this amazing data scientist Farshad Kooti; and on the qualitative side, I worked with this amazing researcher Karly Vockery.

Karly and I started going through it and realized, even by targeting people that had shared and engaged with terrorist content, there was a lot of noise. Because, as we know as practitioners, a lot of researchers share terrorist content. A lot of journalists sometimes share terrorist content. A lot of activists that are pointing to it and saying, “This is abhorrent, this does not represent my people” share this type of content. So a one-off indicator, even if it’s a hard indicator like “you violated a platform policy, you shared really dodgy content”, is not a good enough indicator that you are an extremist. It’s important that we don’t unduly target people.

KOSTA: Of course. You just brought to mind a terrible example that I was very familiar with, which was the sharing of the livestream video of the Christchurch shootings — people sharing that for all sorts of reasons, ranging from outrage, to just disbelief, to not knowing exactly what it was. Very clear violation.

Obviously that’s an extreme example, but that just brought to mind: one of the other guests that we had on Undesign referred to it. Not necessarily the extreme stuff like that, but sort of the “outrage porn” type stuff where you’re talking about pretty contested issues that fall more in that gray area. Actually, do they fall in that gray area, or are they things that generally breach?

ERIN: It depends what type of content it is. It really depends on the platform, what the policy line is. But for example, the Christchurch video, all of that was removed. I remember I was at Facebook at the time, and we removed something like 1.5 million shares of that video within the first 24 hours alone.

KOSTA: Right, wow.

ERIN: Now if you were to say those 1.5 million are all white supremacist or neo-Nazi sympathizers or anti-Muslim bigotry individuals, that would be ridiculous. But I think even Twitter statistics showed that for something like 70% of those that shared the video, the sharing came from verified accounts.

KOSTA: Oh wow.

ERIN: It was not those that were there to support it. It was those that were saying, “This is horrible. Oh my God, have you seen this? I can’t believe this. Have you seen this?” — sharing it to create awareness. That was an important finding in our own study, to weed out the noise.

That’s also important because you do see that sometimes law enforcement or government would like to say, “Hey, they’re sharing illegal content. Give us the profiles, proactively, of individuals that are sharing terrorist content.” And you’re like, “Well, we need a lot of other due diligence around that sort of process”, because then you would share a bunch of journalists and academics and researchers and practitioners. That’s actually way above and beyond what you want to proactively be sharing and labeling as extremist sympathizers.

KOSTA: Right.

ERIN: So once we weeded that out and found the more targeted group within the group that we thought was already targeted, we did see positive behavioral change. We saw that, in fact, they didn’t share any more violating content.

When we looked at the A/B testing of the likelihood to share more content, usually you’re going to have another violation within about 90 days.

They actually stopped violating. There are two things to take away from that, if I may?

KOSTA: Yes, please.

ERIN: The one thing is, that’s amazingly good. That’s great, because it means that they are not putting more violating content out that others could engage with.

The caveat to that is that behavioral change is not the same as sentiment change. I’m not 100% sure. You could have stopped because the counterspeech was compelling. You could have also stopped your behavior in that direction because you thought, “Facebook’s targeting me with counterspeech. They think I’m dodgy. I’m going to change my behavior to evade detection.”

It’s an unknown, but behavior does not equate to sentiment change. Still, the behavioral change is a net positive, because you’re not putting more bad content out for wider audiences to see on those bigger platforms.
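[As a toy illustration of how that 90-day re-violation comparison might be computed for each test arm — the data structures and window logic here are assumptions, not the study’s actual analysis code:]

```python
from datetime import date, timedelta

# Hypothetical sketch: share of users in a test arm with another violation
# within 90 days of entering the study. Not the real analysis code.

def reviolation_rate(study_start: dict[str, date],
                     violations: dict[str, list[date]],
                     window_days: int = 90) -> float:
    """study_start: user_id -> enrollment date; violations: user_id -> violation dates."""
    if not study_start:
        return 0.0
    reoffended = 0
    for user_id, start in study_start.items():
        cutoff = start + timedelta(days=window_days)
        if any(start < v <= cutoff for v in violations.get(user_id, [])):
            reoffended += 1
    return reoffended / len(study_start)

# Comparing this rate across the exposed and control arms would show the behavioral
# change described; as noted above, fewer re-violations is not proof of a sentiment change.
```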

KOSTA: And I guess terrorist content or extremist content relies on being seen and consumed. So to even starve it of that oxygen that a social media platform might afford it, by it not existing on the platform for whatever reason, generally that is a good thing.

ERIN: It’s huge. It’s huge. But then we get to the question of, can you influence sentiment? [For] people that create counterspeech, the goal is to create a sentiment shift within somebody — to say, “Actually, I’m not going to target the Muslim community with my hatred” or “I’m not going to target the Jewish community” or “I’m not going to target somebody just because they come from this country or have this sexuality.” I mean, the LGBTQIAP+ community is targeted by all of these violent extremist groups, for the most part.

KOSTA: Oh man, they cop it from all sides.

ERIN: We shouldn’t forget they get really, really lambasted, and face really different types of abuse that are very visceral.

So in the second testing that we did, we looked at the concept a little bit more around search redirection. Instead of putting direct counter-narratives into something like your news feed, we went to the search function on Facebook and Instagram. We wanted to look at neo-Nazism and white supremacy–related search terms, groups, and individuals — again, going back to some of that redirect methodology.

We worked with an incredible NGO at first called Life After Hate that’s based in the US, and they have an incredible team. It was also founded by a group of former extremists, but also they have practitioners and some incredible people working with them, and very much a “do no harm” principle with a lot of resources on how to help disengage someone.

Because ultimately that’s the question of, “Well, what do you do now?” — now that you even could say, “Okay, I want to leave a group.” It’s really hard to leave some of these groups.

KOSTA: Where to next?

ERIN: They are your family. They’re your life. Sometimes they’ve helped you get your job. So leaving can actually put you or your family at risk, depending. Especially with some of these white supremacy, neo-Nazi groups, leaving can be very difficult.

We looked at white supremacy and neo-Nazi links in the US and worked with Life After Hate on developing these search terms (co-developed with the NGO), whereby it was just something very basic. Initially, the redirect on Google and YouTube works off a search term, and then you scatter the counterspeech or the other messaging in the search results.

This was more unidirectional. When you used some of these search terms, if we thought you were looking to find, again, groups or individuals related to some of these white supremacy and neo-Nazi groups, at the top of your search results it gave you the option to connect with Life After Hate.

It was very basic. It was transparent. There was transparent language saying, Facebook supports partnerships with NGOs. It said who Life After Hate was. And then it’s up to the individual — when they clicked, they would go to the Life After Hate page. You’d go off of Facebook or Instagram, and there’s helpful resources, there’s videos and testimonials from formers, you could get disengagement support.
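[A minimal sketch of that kind of transparent search banner — the matched terms, label text, and card format are placeholders invented for illustration, not the real co-developed term list or Facebook’s search code:]

```python
from typing import Optional

# Hypothetical sketch of a transparent resource card shown on matched searches.
# The term list and card fields are placeholders, not the real ones.

REDIRECT_TERMS = {"example neo-nazi group name", "example white-supremacist slogan"}

def maybe_show_banner(query: str) -> Optional[dict]:
    """If the search matches a co-developed term, return a clearly labeled
    resource card to place above the results; otherwise return None."""
    if any(term in query.lower() for term in REDIRECT_TERMS):
        return {
            "label": "Facebook supports partnerships with NGOs",
            "partner": "Life After Hate",
            "description": "Support and resources for leaving hate groups",
            "url": "https://www.lifeafterhate.org/",  # real NGO; the card format is invented
        }
    return None
```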

In that testing, they found that they had a 200% increase in traffic to their site.

KOSTA: Oh wow.

ERIN: And we were able to work with them, and they found that their caseload of actual individuals going towards them doubled or tripled — of people actually reaching out saying, “I need some help.”

KOSTA: That’s profound.

ERIN: That’s profound. That’s a sentiment shift. That’s somebody saying, “I didn’t even know there was a resource to leave.”

After the fact, we expanded this to some different countries with some different partnerships. Some of that is going on in Australia, Indonesia, Germany, and a specific one around QAnon. If you type in some dodgy QAnon terms, it also redirects you to some more academic information debunking some of those conspiracies.

That was great. It was more unidirectional, and that’s really compelling to see — these public-private partnerships can work. They really do have that potential to not just try to censor and de-platform people, but actually provide them support, provide them alternatives, provide them that narrative that they can grasp, too.

KOSTA: Wow. I just want to reflect this back to you just to make sure I’ve understood the core of this, particularly for those listening in as well.

You had the two main streams of research going on, right? Where you’ve got the… not counter-narrative style, but the exposure to counterspeech content. And then you’ve got the referral stream to actual support services and offline supports and things like that.

In that first stream, we saw a “net no harm” sort of approach with sparks of positive observable behavior for the most part — notwithstanding the ambiguity that not everyone’s online behavior reflects what they really think. Overall, the impact it leaves on the online space is a positive one.

ERIN: Right.

KOSTA: But then, with the redirect assessment where people are being guided to an alternative, they are treated with respect by being transparent in how this option has come to their attention. And they’ve taken up that offer to go in a different direction. You’ve activated people’s agency, and you’ve seen actually a good response to that sort of approach.

Is that a very basic download of the research findings?

ERIN: Yeah. You saved me the last 10 minutes. I could have just pivoted to you, and you could have given that 60-second breakdown. I’m going to use that elevator pitch next time.

KOSTA: Look, free to a good home! As long as I’ve understood it, it means you’ve done a great job explaining it.

This is a really good place to pivot back to what the role of counterspeech is and what the roles of the different players are, right? Because you made this point. And again, in my own experience, we come across this all the time — where governments are not the right people for these messages; Facebook and social media are not the right people for these messages, right?

ERIN: Right.

KOSTA: What is their role? What is their ideal role? Or what is their best use in this fight to minimize the various harms, whether it’s exposure or whether it’s going down a really violent path? What is the role of these bigger players in this space, from your view?

ERIN: I think this is such a key point, because when each of our sectors — and now I’m the example of somebody that went from academia to NGO practitioner to tech company, now back to NGO practitioner, so I’m a sector-crosser, if you will.

KOSTA: That’s why you’re the first person we came to for this topic.

ERIN: I really think that we only get to a real crucial impact point when each of our sectors does what they do best. We get to a bad point when sectors try to do what they’re not good at, in a way that could be harmful.

Governments have an important role to play. A lot of that is in providing the infrastructure, time, and space for community groups to develop, to do what they do best in developing their authentic voice, and to develop programs and alternatives.

Again, I can maybe create a very swanky campaign to undermine some of this hate-based rhetoric, but then what? Where does somebody turn to, if they don’t have both online and offline community support? And I don’t say that because it’s light and fluffy. I say it because that’s what’s effective.

KOSTA: We actually know it works.

ERIN: So I want to be very clear, this is not just a strategic agenda.

We know that if you’re going to undermine, you then need to redirect someone to another point of social brotherhood and sisterhood, or another space that they can use that positive activism that they wanted to spend in a net positive, “do no harm” space.

Governments do provide a lot of infrastructure that enables those communities to thrive and do what they do best in providing social-good services to the country or to their locale. Tech companies are in a unique position to, again, be that mechanism to work with NGO and CSO partners, to take their messages — not take them without permission, but take them and work with them — and help upscale, optimize, and target those messages, and get them to the right people.

Because again, you, as a general human, aren’t going to have access to the same tools. You might use some ads-marketing tools, which really work in the prevention space when you’re trying to target lots of people. Ads tools are usually optimized when you’re trying to reach over 100,000 people. That’s good for prevention.

If you’re trying to counter a very unique or niche section of society that is indulged in a very specific violent extremist threat, ads-marketing tools are not going to get you there. That’s when you need to work more in coordination on these more strategically and carefully designed processes.

I have to tell you, some of this research took three and a half years for all the right reasons. Although I was sitting there looking at my watch all the time, it had to go through lots of legal review, privacy review, research review, tooling review — like, was I going to sit in there and break the system or unduly target a population and have it be really harmful? A lot of review behind the scenes just to be able to launch these two different methods for testing them.

KOSTA: Yeah, right. And that brings me to my next question. Obviously, that’s really encouraging results, right? And we have reason to be encouraged that these things can be dealt with.

But the fact is that this is still a major issue, right? It’s something we are constantly grappling with. It looks a bit different with every passing year. If you had a magic wand right now, to just fix one part that you feel is impeding progress in this space at the moment, what would you do with that magic wand?

ERIN: This might come off as a little controversial, but I mean, we deal with counter-extremism. What’s not controversial about that?

KOSTA: Exactly.

ERIN: It would be… To preface, I would say, my theoretical background was looking at post-communist, far-right extremist radicalization, specifically in Hungary. That is a niche. I had that cornered, right?

KOSTA: That is niche.

ERIN: I hung out with a lot of what we would call far-right extremists in various parts of Hungary for quite a while, and then worked with an organization that just focused on Islamist extremism. Going into that type of job, I felt extremely out of my depth because, to profile myself, I am a white Western female. I have not studied Islam. I do not feel comfortable going in and speaking about… When we say Islamist extremist jihadism, that’s me talking about aspects of political Islam. I’m not qualified to do that.

So I spent a good year or two making sure I was working with people that did understand that, that came from that background. But I think what was most shocking to me was that all my theory base was completely the same. Violent extremism has different wrapping paper, but the core push and pull factors that bring people into violent extremist groups, I really would argue, are pretty much identical.

KOSTA: They don’t change.

ERIN: If I had a magic wand, it would be to have society at large stop orientalizing what violent extremism is; so that when, all of a sudden, we see yet again a white male, white supremacy–driven attacker, we don’t sit there and go, “Oh my God, we don’t know how to deal with this.”

We have all the theoretical background because, first of all, it’s not new that we have that sort of attacker. But secondly, we have done so much work around Islamist extremist jihadism, when we look at white Western governments, because the fear was othering as well.

We othered what a quote-unquote “jihadist” looks like. A lot of money went towards that, and we don’t know always how to deal with domestic threats in the same way, because it’s not the other. It’s what we would normally consider our own in-group. It looks like us.

So I think if I had a magic wand, it would be to say, we have all the tools and learnings to adequately approach this. We should not see it as completely different from all the learnings we’ve established over the last 10 years, looking at a group like ISIS. We just need to repackage a lot of those learnings with the different, localized credible groups that have been dealing with those ideologies for many years.

KOSTA: Right. That’s a pretty good answer, actually, because it speaks to some of the key problems when we talk about online extremism — where we don’t necessarily see the internet as a space that is essentially an extension of a lot of people’s lives. But the way people conduct themselves in that space is usually in reaction to, in response to, or in complementarity with their offline worlds.

Like you said, you need shoes, you go shopping online. You want to talk to someone, you want to kill some time on Words With Friends, you’re bored, you go online. These are social and emotional needs and personal needs that exist in our bodies and our hearts and our minds, that we then go online to fulfill. It’s just another avenue to find all these things.

We haven’t even spoken about the dreaded algorithms and things like that, or echo chambers, in a whole lot of detail, because this is such a massive topic. Maybe that’s for Part Two, hopefully another Part Two in the future, right?

But yeah, sorry. Did you have anything to expound on those kinds of buzzwords?

ERIN: You know, we’ve solved that. [laughs]

No, we can’t just throw in echo chambers and algorithms right at the end. I would say we need to be very specific. If I switched hats to more the GIFCT (the Global Internet Forum to Counter Terrorism), I think one of the other things… If I had a second magic wand, it would be to help the communication gap between practitioners and governments and tech companies.

Because sometimes there’s just a slightly different language being spoken, and there’s a different tech language at play. When somebody says, “Ooh, algorithms”… well, an algorithm is what makes the Twitter logo appear in a certain place. An algorithm is what makes your profile picture a certain way. An algorithm is a filter. An algorithm is everything.

What we’re really talking about when we say “algorithm” as a catch-all phrase is the fear that promotional or optimization algorithms are proactively putting extremist content into your feed because of the way that you were searching for something. That’s a really specific part of what people are concerned about.

You do see that a lot of the larger tech companies, for example, have already been doing a lot of adjustments to that — to discount certain search terms, to redirect. That is a constant adversarial shift, though. Because, as we know, violent extremist groups use coded language and change how they’re doing things proactively. They know it’s a bit of a cat and mouse. So you have to have civil society and researchers and academics working with the tech companies to constantly be aware of what that looks like, so that you can reprogram or insert different filters, so that you’re not inadvertently promoting that.

There’s some great work being done, for example, by the Centre for Analysis of the Radical Right. They wrote a paper that’s open access just on symbols, slogans, and slurs of the radical right, just to start understanding some of that language. If you look at the Boogaloo movement or QAnon… oh my God, there’s so much weird coded language.

And a lot of it would be false positives if you tested around it. Like, “Where we go one, where we go all” or “Follow the white rabbit”. And you’re like, “Am I going to an Alice in Wonderland tea party, or am I joining the Highland extremist movement?”

KOSTA: Probably both just as weird, but anyway.

ERIN: Same with some of the iconography. If you look at the Boogaloo movement and someone is wearing a Hawaiian shirt, holding a glass of milk, doing the okay sign, I don’t know if they are a dad at a barbecue or a white supremacist.

Because Boogaloo says, wear a Hawaiian shirt. They’ve used white milk; white supremacists have used white milk as an indicator. And actually the okay sign is an indicator for white power. So again: dad at a barbecue, or white supremacist. Or both, I don’t know.

KOSTA: Yeah, not mutually exclusive, unfortunately.

ERIN: Changing algorithms will not necessarily fix that. There is some of that, which the big companies work with a lot of experts to try to work around. But some of it is not just algorithms. Some of it is the cat and mouse of adversarial shifts.

We can’t algorithm our way out of violent extremism. Violent extremism predates the internet.

Most of what we get in our day-to-day from online services is really positive. We just take it for granted now. You look at not that long ago — we didn’t have any of this. There’s a lot of critique, pro and con, around where we’re at with the internet of things. But I would say there’s never one solution.

It’s not going to be one algorithmic change. It’s not going to be one piece of counterspeech. It is only when all our sectors come to the table, recognizing that we all want the same thing… Very few actors in this space want to further violent extremism, that’s a very small percentage of society.

But if we can come, not trying to bash heads and point fingers, but saying, “Governments, what are the policies that help? What are the infrastructures that get support to the right communities? Tech companies, what are the tools you have at play to counter the negative, to put ‘do no harm’ in place, and to upscale and optimize these counterspeech initiatives? And then civil society, do your thing. Be authentic, be creative.”

But [civil society and nonprofits] need the time, space, support and money to do that. It’s not just, “Oh, I’ll do this in my spare time.” You actually need the infrastructure to support that.

KOSTA: That’s great. Just as we close out, in the last few minutes of this discussion, I really want to think about the everyday person and what their role is.

Whether it’s counterspeech, whether it’s looking after other people in their orbit, whether it’s feeling they’re in an unsafe space online… What do you think? If you’ve got people that are interested in being a good online presence or a proactive digital citizen, do you have any advice for them on what is the everyday person’s best or most productive use of their space within their own circles? Where do they fit in all of this?

ERIN: I think, at a bare minimum, for my magic wand #3, it would be to just say that we all have a little more work to do on questioning our own biases and how we take on quote-unquote “facts” or information online, myself included.

I am constantly trying to go with my bias check and say, “That sounds right. Well, why does it sound right? It sounds right because I already agree with it.” I love people that agree with me. Everyone loves people that agree with them.

Even myself, I have to go and say, “Huh, what’s the source of that? Is that data, is that opinion, or is that fact?” Because increasingly, as a whole, not just extremists, we go towards what feels right. I think Stephen Colbert even said it’s about “truthiness”, not truth. We go with, “That sounds right.” And when it sounds right, we don’t go further. And when it sounds wrong, we adamantly argue against it.

So I think we all have a role as digital citizens, especially if you’re a parent with children, [to ask] “What’s the source of that? Is that an opinion or is that a fact? Where am I getting that from? Am I only listening to one news outlet or one person all the time?”

I should probably listen to people I don’t agree with. I tend to learn more by listening to people that I don’t agree with, even if it riles me up a bit, than if I just stick to my own echo chamber. So I think we all have a little bit of work to do in that space.

KOSTA: Right. That’s interesting. And I’m glad you brought up this idea of self-awareness and introspection in the everyday person’s role in this. It’s really easy to see other people’s not-so-great examples of online behavior, or people not being rigorous with the news they share. But are we holding ourselves to the same standards we hold other people to?

Do you have any thoughts for people who might come across people in their social spheres, on their social media, that are sharing like… We’ve got the gray zone stuff, where people can disagree and it can get polarizing. What about the stuff that’s just outright concerning?

You know that old adage in terrorism, which is like, we might not have a definition for it universally but we know it when we see it? It’s one of those things.

ERIN: Yeah, that’s crossed a line.

KOSTA: Yeah, right. That you’re like, “Oh, okay. What do I do?”

ERIN: There’s two things. And I’ve talked to a lot of people, particularly during this pandemic, because we’re in a very specific moment in history. Particularly very close friends of mine, who say, “Oh my gosh, my uncle or father or aunt or sister is a total QAnon junkie. I had no idea. I had no idea, and they are spitting vitriol, and now they’re not going to get the vaccine, or ‘Oh my God, did you know that governments are doing this?’”

And I have to say, if you’ve ever worked with governments, it’s kind of hard to think that there’s this big global cabal. Because you think, gosh, that person couldn’t even get their Zoom to work. How are they going to coordinate a big global cabal around the world?

But that being said, I do think we’re at this moment where you have a choice to engage or not engage. That’s Choice #1.

If you see something… At the very least, if it’s really crossing the line, if something is hate speech, or COVID denial, or something that would cross a company’s terms of service, the one thing I would remind people is that the flagging process is anonymous. Who flagged something can’t even be obtained through government requests for data. If you flag, even if it’s your slightly racist aunt, if someone has one of those, everyone has one…

KOSTA: Unfortunately they do.

ERIN: … then you could flag their content. And nobody will ever know, particularly that person, that you flagged it.

But that will help potentially get it removed. Or not. Sometimes it really depends on the policy or the nuance of that content.

So that’s one thing. The second thing is, you don’t have to engage if you don’t feel comfortable engaging. But if you do engage, there’s some good guidance out there. Maybe I can find a pamphlet to send around, to attach to this. There was some good work done by ISD on the trade-offs of engagement, because you don’t want to put yourself at risk by engaging.

KOSTA: Yeah, great. That’s right.

ERIN: You could face bullying and harassment. Some people are very open to engaging, and some people aren’t.

Nobody wants to be told they’re wrong. I don’t like being told I’m wrong.

KOSTA: I hate it.

ERIN: I think I’m clever. I think I know everything, right? So nobody wants to be told they’re wrong.

Finding a point of common ground, or actually just finding a point of interest and saying, “Huh, I’d never heard that. In fact, I heard the opposite. Where did you find that?” Or “Wow, I’d not heard that before. Would you mind talking me through that? Because I’ve actually found this.” Not coming at it antagonistically, but actually coming at it as a conversation: civil discourse, surprisingly enough, does go a long way.

Calling someone a lunatic, or thinking they’re the devil, or just telling them they’re wrong, surprisingly enough, doesn’t get you very far; meeting people at a point of common ground does. If you have the time and space to give a little, or have a couple of articles you could point to and say, “Oh, that’s funny, I found the opposite, but I’d love to talk through that with you,” or “Could you explain more?”, then you get to a point where you can have a conversation that goes a lot further than just trying to tell someone they’re wrong.

KOSTA: And really, that’s just all examples of living, breathing counterspeech, isn’t it?

ERIN: Yeah, counterspeech comes in many forms.

KOSTA: That’s right. Well, Erin, I’m conscious of your time, and I think that’s probably a pretty nice place to wrap up.

Again, thank you so much for your time. I could sit under the learning tree with you for a very long time and just listen.

ERIN: Likewise. You do incredible work with your team.

KOSTA: I appreciate that a lot. That means the world coming from you.

For listeners who want to learn more about what you do or some of the stuff you’ve been up to, where can they find you?

ERIN: There’s quite a lot of information on GIFCT.org. I post some of my own research and what’s come out on my own little website. I hate saying it out loud, because it sounds really trite, but it’s mostly just to keep track of things like that.

I don’t know how this goes out, but I can also send a few links. Those studies that I talked about, particularly the search redirect and A/B study for counterspeech deployment, actually came out recently. I’m very proud that we were able to get it out.

KOSTA: Congratulations, that’s massive.

ERIN: Thank you. And it’s, again, a labor of love with two of my co-authors. It came out in Studies in Conflict & Terrorism. It’s open access, so it’s not behind a paywall.

KOSTA: Great. We’ll be sure to share that. Basically every episode has its own suite of resources, and we can talk more about what you’d like to include in that, so the conversation doesn’t just start and end here.

Thank you so much, Erin. All the best, and hopefully see you for a Part Two!

ERIN: Absolutely. There’s always more to be done, but we’re sitting here fighting the good fight. The masochism continues.

KOSTA: Thank you for all your hard work. Thanks Erin.

ERIN: Talk to you soon.

KOSTA: You’ve been listening to Undesign, a series of conversations about the big issues that matter to all of us. Undesign is made possible by the wonderful team at DrawHistory. If you want to learn more about each guest or each topic, we have curated a suite of resources and reflections for you on our Undesign page at www.drawhistory.com.

Thank you to the talented Jimmy Linville for editing and mixing our audio. Special thank you to our guests for joining us and showing us how important we all are in redesigning our world’s futures. And last but not least, a huge thank you to you, our dear listeners, for joining us on this journey of discovery and hope. The future needs you.

Make sure you stay on the journey with us by subscribing to Undesign on Apple, Spotify, and wherever else podcasts are available.
