Four years after a mass shooting that left 51 people dead and 40 wounded at a mosque and Islamic center in Christchurch, New Zealand, researchers needed just hours to find dozens of videos and accounts on TikTok glorifying and promoting the shooter, as well as clips of the Facebook livestream he recorded during the attack and excerpts of his manifesto.


What You Need To Know

  • Four years after a mass shooting that left 51 people dead and 40 wounded at a mosque and Islamic center in Christchurch, New Zealand, researchers found dozens of videos and accounts on TikTok glorifying and promoting the shooter

  • In roughly five to six hours of searching on March 15, a researcher said, they found 33 videos and 20 accounts promoting the shooter

  • Videos later removed by TikTok included clips of the livestream footage, mostly of the shooter exiting his vehicle outside the mosque. At least one showed heavily obscured footage of the shooting itself

  • In a cursory search Wednesday, Spectrum News found one video, posted Monday, glorifying Tarrant with the hashtags “#brenton,” “#mosque” and “#2019”; it had more than 1,000 views. Another video, posted a week earlier, included photos of Tarrant and a caption praising him

“In the space of a few hours, it was shockingly simple to find not only content promoting the perpetrator through memes or just supportive posts, but also clips that were taken directly from the livestream footage of the attack or the manifesto, as well,” said Ciarán O’Connor, a senior analyst for the Institute for Strategic Dialogue who co-authored the report with his colleague Melanie Smith.

“It’s highly disappointing. It’s very concerning as well that this kind of content continues to circulate and continues to evade their attempts to remove all of it,” he added.

O’Connor said they shared their findings with the company prior to publication.

"There is no place for violent extremism on TikTok, and we work aggressively to remove content that undermines the creative and joyful experience people expect on our platform," a TikTok spokesperson said in a statement. "More than 40,000 people work to help keep TikTok safe, and we are grateful to third-party organizations whose research helps strengthen how we enforce our policies to keep our platform safe."

The study was a follow-up to a much more extensive examination conducted in 2021 that looked at more than 1,000 videos over the course of 16 days, and found 30 videos that featured support for the Christchurch shooter. In roughly five to six hours of searching on March 15, O’Connor said, they found 33 videos and 20 accounts promoting the shooter.

At least one video dated back to August 2022. Another account had over 1,000 followers and 105,000 views across ten videos, including one featuring audio from the livestream that had over 13,000 views. The report’s authors believed it was “possible TikTok’s algorithm has helped amplify this content.”

“At the time of writing, all but one of the 33 TikTok posts are still live, including some that show moments of death. This is a clear and direct violation of TikTok’s guidelines on violent and graphic content,” O’Connor and Smith wrote in the report.

After being alerted to the videos found in 2023, TikTok removed them and banned all but one of the accounts, an archive of the content shared with Spectrum News shows.

Videos removed at the time included clips of the livestream footage, mostly of the shooter exiting his vehicle outside the mosque. At least one showed heavily obscured footage of the shooting itself. Other clips showed recreations of the shooting in the video game Roblox.

Among the posts were memes of the shooter depicted as a saint, a common trope in far-right circles as a way of bestowing martyrdom on mass shooters and terrorists, O’Connor said.

This celebration of the Christchurch shooter has inspired others in the last few years to commit acts of racist mass violence.

The gunman who killed 10 Black people at a grocery store in Buffalo, New York in May 2022 also livestreamed his attack and wrote in a manifesto of his own that his path to radicalization began when he watched a clip of the Christchurch livestream on the forum 4chan, according to a report from the New York state attorney general’s office.

The man convicted of attacking a synagogue in Poway, California, in April 2019 wrote that the Christchurch shooter “showed me that it could be done,” according to Georgetown University researchers. The white gunman who killed 21 at a Walmart in August 2019, targeting the Latino community of El Paso, Texas, also noted his support for the Christchurch shooter and his white supremacist ideology, the researchers wrote.

“This content is itself propaganda and the fact that it’s still able to circulate quite freely with little resistance on a mainstream platform like this is itself a risk for further radicalizing or being used as propaganda material by extremists,” O’Connor said. “If you kind of move into this ecosystem of these kinds of accounts, you will also see wide-ranging anti-Semitism, Islamophobic content, of course related to Christchurch, but also misogyny and content attacking women and LGBTQ communities.”

Though they were solely focused on the Christchurch shooter, O’Connor said they also found content promoting racist mass shooters Dylann Roof — who killed nine Black worshippers at a church in Charleston, South Carolina, in 2015 — and Anders Behring Breivik — a self-identified Nazi who killed 77 people in a 2011 attack in Norway.

Content celebrating Dylan Klebold, one of the two teenagers who killed 13 people at their high school in Columbine, Colorado in 1999, was also easily found, according to the report.

TikTok’s CEO Shou Zi Chew is set to testify in front of the House Energy and Commerce Committee Thursday amid discussions of banning the app in the U.S. over privacy concerns and the company’s relationship with the Chinese government. The app has over 150 million U.S. users, TikTok reported this week.

It’s unclear whether this report will come up at the hearing. A spokesperson for the Democratic minority declined to comment on the report, saying committee members and staff were busy preparing for the hearing. Press representatives for the committee’s Republican majority did not return requests for comment on whether members were aware of the ISD report or planned on discussing it at the hearing.

“We’ve seen a parallel rise in domestic terrorism and extremism and the proliferation of misinformation and disinformation on social media platforms, which not only threaten to undermine our democracy, but also increase the threat of racially motivated violence,” said Rep. Yvette Clarke, a New York Democrat who sits on both the committee questioning Chew Thursday and the House Committee on Homeland Security, in a statement.

“We know misinformation can have deadly consequences, and it’s clear that all platforms, including TikTok, must be held accountable for the part they play in not only failing to remove harmful content, but amplifying it as well,” Clarke added.

Other members of the committee contacted by Spectrum News either did not return requests for comment or directed requests to committee staff.

TikTok is aware of its shortcomings, according to O’Connor. ISD had “extensive discussions” with TikTok in 2021 about its findings.

“We were invited to share our discussions, share our understandings of their policies,” O’Connor said. “We said that there was a considerable gap between the policies and performance of these policies.”

The “enforcement gap,” as O’Connor described it, does not appear to have narrowed since then, he said. The social media network’s actual policies are “very comprehensive and very nuanced,” he said, but enforcement of violations leaves much to be desired, particularly when it comes to extremist content.

In September 2022, TikTok announced a partnership with Tech Against Terrorism, a United Nations-backed initiative “working with the global tech industry to tackle terrorist use of the internet whilst respecting human rights,” and pledged to use a combination of technology, human moderators and partner organizations to stay on top of enforcement.

“TikTok is an entertainment-first platform, and today only a small portion of videos - less than 1% of the total removed - violate our violent extremism policy,” wrote Julie de Bailliencourt, TikTok’s global head of product policy, at the time.

O’Connor said that other social media networks, while not perfect, are much better at moderating this type of content.

“I didn’t include it in the final analysis for this piece, but I did a cursory search on Facebook, YouTube and Twitter for some of the same search terms,” O’Connor said, noting he found nothing on Facebook or YouTube and only one tweet that fell into the category of pro-Christchurch shooter content. “TikTok in that measure is performing very badly compared to its mainstream equivalents.”

One major issue is that TikTok bans certain terms from being used, such as “Brenton Tarrant,” the name of the Christchurch shooter, but “Tarrant Brenton” or intentional misspellings of his name still bring up video results.

In a cursory search Wednesday, Spectrum News found one video, posted Monday, glorifying Tarrant with the hashtags “#brenton,” “#mosque” and “#2019”; it had more than 1,000 views. Another video, posted a week earlier, included photos of Tarrant and a caption praising him.

The videos were shared with TikTok’s communications department Wednesday afternoon. As of 4 p.m., they were still up. By just before 7 p.m., the company had removed them.