
Child sexual abuse material is on the rise online. Will lawmakers and big tech finally act?


In just one year, reports of suspected child sexual abuse material online jumped by 35%. A big spike in an ongoing trend.

“What I think has been particularly troubling over the last 20 years is that the victims are getting younger and younger and younger. The acts are getting more and more violent and becoming more prevalent,” Hany Farid, a professor of computer science at UC Berkeley, says.

The content is easy to find, because it’s not hiding. It’s circulating on the largest social media platforms in the world.

“It is not in the dark recesses of the Internet. It’s on Facebook, it’s on Instagram and TikTok. It’s on social media. You don’t have to look hard to find this material,” Farid says.

Critics and child advocates say that companies are not doing enough to take down the content.

“Nobody wants to hear about this. It’s awful. But we can’t bury our heads in the sand and think, well, this isn’t going to happen,” Farid says.

Today, On Point: Child sexual abuse material online, and what lawmakers and the tech industry can do to stop it.

Guests

Yiota Souras, senior vice president and general counsel at the National Center for Missing and Exploited Children, a private, non-profit organization established in 1984 by the U.S. Congress.

Hany Farid, professor of computer science at the University of California, Berkeley. Professor Farid, along with Microsoft, invented PhotoDNA, the main method for detecting illegal imagery online.

Also Featured

Jane Doe, whose son John Doe was coerced and threatened by a predator into making and sharing child sexual abuse material online at age 13. The family is currently suing Twitter for allegedly not taking the material down once notified. They are being represented by the National Center on Sexual Exploitation Law Center and The Haba Law Firm.

Damon King, principal deputy chief of the Child Exploitation and Obscenity Section (CEOS) within the U.S. Department of Justice's Criminal Division.

Transcript

MEGHNA CHAKRABARTI: Jane remembers looking at the message on her phone. It was January 20, 2020.

JANE DOE: It was like 11:30 at night, and I got a text from … an associate at work asking me, is your son so-and-so? And I said yes. And they said, ‘Well, he is on the phone with my niece right now and he’s discussing that he doesn’t want to live anymore and he is suicidal.’

CHAKRABARTI: Jane’s son, John, was 16. He was thriving in class, in baseball, at church. So she thought, You’ve got the wrong kid here.

But that night, John told her, yes, he was suicidal because of a video that was being passed around in school.

DOE: So, you know, the first kick in the gut is that I think I know my kid and he’s suicidal. And the second kick in the gut is, what is this video?

CHAKRABARTI: The video was made three years earlier. It was child pornography. And her son was in it. This is On Point. I’m Meghna Chakrabarti.

This story is about child exploitation and may not be appropriate for all listeners. Jane and John Doe are not their real names. They have filed a lawsuit, and the court granted them anonymity because John was a minor when the sexually explicit video of him was made.

The video was not hidden away on the dark web. It was posted on one of the largest public social media platforms in the world, Twitter, where it was watched more than 167,000 times.

DOE: Repeatedly, we reported it as child pornography material, tried to tell his age. They weren’t going to take it down.

CHAKRABARTI: There has been an explosion of child sexual abuse material online, on almost every social media platform in the world.

In 2021, there were almost 30 million reports of suspected online child abuse material, a 35% increase from the previous year, according to the National Center for Missing and Exploited Children. Millions more images and videos continue to circulate that have never been reported.

Tech companies know this. They also know possessing or distributing these materials is a federal crime.

In November of 2022, Twitter’s owner, Elon Musk, posted:

CHAKRABARTI: More recently, Twitter announced it had suspended more than 400,000 accounts associated with child abuse material in January 2023 alone, saying:

“Not only are we detecting more bad actors faster, we’re building new defenses that proactively reduce the discoverability of Tweets that contain this type of content.”

But even with such efforts, child sexual abuse content continues to swarm social media sites. And critics claim it’s because the companies themselves are willfully not doing enough.

In John’s case, even after he and his mother made multiple reports to Twitter about the explicit video, the company made a choice.

“We’ve reviewed the content,” Twitter said in an email to the family. “[We] didn’t find a violation of our policies, so no action will be taken at this time.”

Today, we’re going to talk about why.

John is Jane’s youngest child. She says he’s a straight A student, played baseball since he was five and was very involved in their local church.

Home was their sanctuary.

DOE: My husband worked night shift. I worked day shift so that our kids never had to go to daycare. They’ve never ridden a bus because I felt like if we kept them safe, then they would always be safe.

Jane says they allowed only limited use of social media and had family rules about what kind of websites the kids could access.

But that night, in January 2020, when 16-year-old John was suicidal, Jane discovered that a trafficker had in fact reached into their home and found their son three years earlier … through Snapchat.

John was 13 at the time. And the person sent him Snaps pretending to be a 16-year-old girl. The messages were convincing. John thought she went to his school.

DOE: And he said to me, mom, she had to have known me because she knew that I played baseball. She knew that I went to this specific high school. So I thought it was somebody that he knew.

CHAKRABARTI: One night, John had a friend from the baseball team sleep over. He was also 13. And they both started messaging with the person. When the person realized John and his friend were in the same room, the person sent them photos – nude photos that John thought were coming from a 16-year-old girl.

DOE: He honestly was like, Mom, I was 13. I had never seen a naked girl before. Like, I hadn’t.

CHAKRABARTI: Jane’s heart still breaks, because of what happened next. The person asked John and his friend to send back nude photos and videos of themselves together. And they did.

DOE: I wish that he hadn’t responded to it, but he was, you know, 13. And how many 13-year-old boys that have never seen anything like that, will do the same thing?

CHAKRABARTI: Suddenly, things changed. The person wanted much more graphic videos of the boys performing sexual acts, and directly threatened the boys if they didn’t comply.

DOE: The threats continued, and the blackmail continued. To the point that they told him that if he didn’t continue to escalate the things he sent, the predator would send what was already taken to his parents, to his pastor, to his coach, place it online.

CHAKRABARTI: John was being pulled into an alarming new trend: child exploitation via self-generated content, a method in which what at first seems like innocuous contact soon leads a minor to create and send sexually explicit images of themselves to a predator. Children have reported being manipulated, threatened, coerced and even blackmailed into sending increasingly explicit photos and videos of themselves.

At this point, John realized the person was lying about being a 16-year-old girl. He knew what was happening was wrong. But he didn’t know how they got his number or Snapchat username.

And he was scared. The threats continued for weeks.

And then, the trafficker asked to meet in person. John refused.

DOE: He did not meet them. He said he just stopped communicating with them.

CHAKRABARTI: The trafficker stopped messaging him. And John prayed it was over. It was not.

Three years later, John was in high school. And suddenly, a compilation of the explicit videos from that sleepover was posted on Twitter. It went viral. Recall, the videos had initially been sent over Snapchat — a platform whose servers are designed to automatically delete messages after they’ve been viewed by all recipients.

DOE: Teens think that those go away, but the person was keeping the videos somehow. And then they compiled several videos together to make one significant video, and then that’s what was on Twitter.

CHAKRABARTI: Meaning, the video was now circulating widely on one of the largest public social media platforms in the world.

Twitter’s content standards clearly state that the company has a “zero-tolerance child sexual exploitation policy,” which forbids “visual depictions of a child engaging in sexually explicit, or sexually suggestive acts.”

According to court records, on December 25, 2019, someone sent Twitter a content alert about specific accounts that were posting child sexual abuse material. Twitter did not take action against those accounts.

A few weeks later, one of those same accounts posted the video of John and his friend. By January 20, 2020, John’s classmates had seen it. That was when Jane received the text message about her son wanting to take his own life.

DOE: I think the most disheartening thing is that from a parent’s standpoint is to know that your child was going through all of this for so long and that you didn’t even know until years later. And I couldn’t help him because he didn’t feel like he could tell me. So, you know, when I found all this out in January of 2020, you know, I don’t know — it’s hard to even explain. You just feel like the whole world was ripped out from under you.

CHAKRABARTI: And Jane says John felt doubly victimized. Because the videos ended up on social media, people thought John had willingly made them, even though in reality, he’d been coerced.

Jane and her son contacted school officials, law enforcement, and Twitter. They reported the video to Twitter on January 21st. Twitter replied the same day, and asked for documentation proving John’s identity. The family sent a copy of his driver’s license.

Jane sent two more content complaints to Twitter on January 22nd. The company replied with an automated message, and then did not respond for days. On January 26th, Jane contacted Twitter again. And finally, on January 28th — Twitter emailed its decision:

“We’ve reviewed the content, and didn’t find a violation of our policies, so no action will be taken at this time.”

DOE: Basically their end result was that if we wanted to fill out … another complaint, we could. But to say that they weren’t going to do anything, and it didn’t break any of their policies was mind blowing to me.

CHAKRABARTI: It was also shocking to an agent from the Department of Homeland Security, whom Jane had contacted via a connection from her pastor.

DOE: His words were, it’s, you know, black market material that’s being allowed on there, and he was blown away.

CHAKRABARTI: The agent approached Twitter, and on January 30th, after receiving his direct request on behalf of the United States Federal Government, Twitter took down the video.

But by that time, it had been retweeted more than 2,000 times, and watched more than 167,000 times.

DOE: What was on Twitter was taken down, but I have no idea if it’s on other platforms or if someone took it from Twitter and put it somewhere else. To this day, we have no idea where it is and how many people are still viewing it.

CHAKRABARTI: Jane says, it also means that John will have to live with the fact that the video may be out there for the rest of his life.

Jane and John Doe are suing Twitter. They claim the company benefited from violating federal law, specifically the Trafficking Victims Protection Reauthorization Act. The case is currently pending before the U.S. 9th Circuit Court of Appeals.

We contacted Twitter for comment on Jane and John’s case. The company did not respond to our requests.

In court filings, Twitter makes several specific arguments in its defense, most notably invoking Section 230 of the Communications Decency Act. Tech companies often assert that the section shields them from liability for user-generated content on their platforms.

However, a 2018 federal law amended Section 230 and made it illegal to “knowingly assist, facilitate, or support sex trafficking.”

Twitter rejects the allegation that it knowingly assisted trafficking in John’s case. In a motion to dismiss, Twitter says it had no role in the creation of the videos and denies that it “knowingly provided any assistance to the perpetrators.”

The company also claims that there’s no evidence that “Twitter’s failure to remove the videos was a knowing and affirmative act to aid a sex trafficking venture, rather than mere error.”

So, in the age of social media, what, exactly, constitutes knowing assistance of child sexual abuse? How much time must pass before a decision not to take down such content goes from mere error to facilitating its spread?

These are the questions now before judges in the 9th circuit.

There is one more question that haunts Jane and John. Jane thought she’d done everything right to protect her son. So, why did the trafficker target him? How did they get his number?

DOE: Did I ever think this could happen in my own house? No, we never thought this could happen to us. I mean, we don’t know who he or she is, and they haven’t been stopped, so that’s my greatest fear, that other children are being hurt.

CHAKRABARTI: Local law enforcement did open an investigation, but never positively identified a perpetrator.

Digital forensics led to a house barely 20 minutes away from John and Jane’s home, but police said they lacked sufficient evidence to make an arrest.

When we come back, we’ll hear more about the sharp rise in child sexual abuse materials online, and what needs to change with technology, politics and the law to do something about it.

CHAKRABARTI: Let’s turn … to Hany Farid, who is a professor of computer science at the University of California, Berkeley. And Professor Farid, along with Microsoft, invented PhotoDNA, the main method for detecting illegal imagery online. Professor Farid, welcome to you.

HANY FARID: It’s good to be with you.

CHAKRABARTI: We’ll get back to what this kind of content actually is and what’s driving its unpleasant growth online. But at first glance, how would you evaluate or how would you judge what the platforms are doing in order to curb the content online?

FARID: The absolute bare minimum. And this is not a new trend. I got involved with Microsoft back in 2008 to try to combat the then-growing problem, which has only exploded in the last decade and change. And the technology companies have been dragging their feet for years, and they continue to drag their feet. Don’t get me wrong, they have good talking points: We have zero tolerance. We do X, we do Y.

But the reality is that you can go on to Twitter, you can go on to Facebook, and you can go on to these mainstream social media sites and find this horrific content. And these technologies are being weaponized against our children, who are getting, of course, younger and younger and younger, and the problems are getting worse. So you just heard about this extortion, which is now a rising problem.

Live streaming is now a rising problem, in addition to the sharing of images and video to the tune of tens of millions a year. And the tech companies keep saying the same thing: We have zero tolerance for this. But the problem persists and is getting worse.

CHAKRABARTI: Hang on here for just a moment, because I think we have Yiota Souras on the line now. You are with us?

YIOTA SOURAS: I am. I’m here.

CHAKRABARTI: Thank you so much for joining us today. And once again, Yiota is senior vice president and general counsel at the National Center for Missing and Exploited Children. So you heard Professor Farid there describing how much of this child sexual abuse material is online on these public and open platforms. Yiota, how would you describe the growth in the amount of this content, not just over the past year or two, but even prior to that?

SOURAS: The volume has really increased exponentially. So you heard in the opening that in 2021, the National Center for Missing and Exploited Children received over 29 million reports. It’s a tremendous number. It’s a heartbreaking number. In 2022, we received over 32 million reports. And as Professor Farid was just detailing, not only do we see the numbers increasing, but we see that the types and the ways that children are victimized are evolving as well.

So we certainly heard in the opening about the blackmail or the extortion that is at issue in the Twitter case. Well, we are also seeing abuse that is getting increasingly violent and egregious, and many children in the images are very young, infants and toddlers, who are often too young to even call for help.

CHAKRABARTI: So I just want to focus for another moment on this self-generated content, because I think it’s something that maybe a lot of people listening right now may not have heard of as even a possibility. So, Yiota, can you tell us more about that? Because, I mean, even self-generated content is kind of a misnomer, right? We’re talking about children who are, you know, sending images of themselves, but often under duress.

SOURAS: Absolutely. So we have seen alarming increases in the amount of self-generated exploitation reports that are being received here at NCMEC. And I’ll explain a little bit about how that generally arises. You know, much as in the Twitter case with John that we heard in the opening, children can be enticed under false pretenses. So, a child can be lured into believing that they are communicating with a same-age or similarly aged child, that they are trading imagery with someone who they think they may know, or who may be a peer in some way.

But in fact, it is an adult. It is an offender who knows how to manipulate that child, how to get images from them. And then, once the images are shared, much as in the case that we heard in the opening, things turn very quickly. So we often see cases where children are then pressured, blackmailed into sending increasingly egregious imagery, sending more imagery, sending imagery where they are involved in abuse with other children, upon threat that the original image will be shared online. So at this point, the offender has made clear to the child that they know their friend group, they know their family group, their school activities. So they can make that threat very, very real to children.

The other way we’re seeing this evolve is financial sextortion. And that is a much newer trend that we’re seeing. It’s primarily targeted at teenage boys, and it involves the same opening as a general sextortion crime. A boy might be approached online, again enticed or lured into sharing an image. And then the threat is not for more images, but for money. And we’ve seen some tragic outcomes where young boys have taken their lives as a result of financial sextortion.

CHAKRABARTI: Professor Farid, this may sound like a completely gauche question, but I still have to ask it anyway. In many of these cases of the quote-unquote, self-generated content, how are the perpetrators getting access to the children? I mean, is it because the platforms are so open? Or how?

FARID: Yeah, that’s exactly right. We have created, if you will, a sandbox where kids as young as eight, nine, ten, 12, 13 are intermixing with adults and with predators. And this is by design. The platforms have just completely opened up. There are very few guardrails. Settings have, up until fairly recently, always defaulted to the most open, even for minors. And it’s relatively easy for predators to find and get access to young kids and then extort them.

And so this is not a bug. This is a feature. This is the way the systems were designed. They were designed to be frictionless, to allow maximum user generated content to be uploaded, to allow maximum engagement by users. And then the side effect is this horrific contact that you are seeing where predators very easily find these young kids. And it’s not just one platform. Understand that this is all the platforms. I mean, I’ve no problem jumping up and down on Twitter’s head on this, but all the platforms have this problem.

CHAKRABARTI: So, Yiota Souras, as we said earlier, obviously it is illegal to possess and disseminate child sexual abuse material. It’s a crime, and yet thus far the platforms, and to Professor Farid’s point, all of them, have successfully evaded liability. And even in the Twitter case that we talked about with John Doe, almost all of John’s claims were dismissed by a judge because of Section 230 of the Communications Decency Act.

There’s one claim that’s still continuing to work its way through the courts. So why is it that Section 230 continues to provide this wall of protection for the platforms, even in the face of, you know, obviously illegal content?

SOURAS: Well, hopefully that’s about to change with some pending legislation and some legislation that we’ll see, I think, a little bit later this year. But as you noted with Section 230, quite simply, there is no legal recourse for a victim of this crime or their families when sexually explicit images of a child are circulated online. There is no legal recourse against a technology company that is facilitating, that is knowingly distributing, that is keeping that content online. As we saw again in the Twitter case where they had actual complaints and notices regarding that content and still kept it up. Section 230, as we know, has been around for quite a while.

There are some benefits to it societally and for the technology industry, but it has been misinterpreted at this point to create this perverse environment, which Professor Farid detailed. It enables children and adults to have private conversations and exchange imagery in total solitude in a way that we would not tolerate in any other aspect of our lives. There is nowhere in the world that we would allow children and adults to be together, to have intimate conversations, to share photos with no adult supervision, with no review, with no responsibility by the platform that is actually hosting those interactions. And yet Section 230 permits that currently.

CHAKRABARTI: So, Professor Farid, I’d like to hear you on this. Because as we know, the platforms, the tech companies, frequently point to Section 230 not just as a shield against legal liability, but also in terms of, well, they say they don’t want to get into the game of infringing on speech. Or then eventually be forced to, say, do content moderation on other variously questionable activities: drugs, weapons, terrorism, things like that.

FARID: I think that’s exactly right. I always like when Mark Zuckerberg says how much he loves the First Amendment and then bans legal adult pornography from his platform. Same thing for YouTube. These platforms have no problem finding and removing perfectly legal content when it’s in their business interests. Why did Facebook and YouTube ban adult pornography? Because they knew if they didn’t, their sites would be littered with this content. And advertisers don’t want to run ads against sexually explicit material, even if it’s legal. And so what did the companies do?

They got really good at it. They developed the technology, and they’re very aggressive about removing that content because it’s in their business interest. But it is not in their business interest to remove all sorts of harm. So then when it comes to harms to individuals and societies and democracies, the companies say, Well, you don’t want me to be the arbiter of truth, you don’t want me to be policing these networks.

Well, that’s awfully convenient, isn’t it? This is not a technological limitation. This is a business decision that the companies have made, that they are in the business of maximizing the creation and the upload of user generated content so they can monetize it. And taking down content that is not in their interest simply is not a priority for them.

This article was originally published on WBUR.org.

Copyright 2023 NPR. To see more, visit https://www.npr.org.
