Social media companies are once again in the spotlight after a bank employee in Louisville, Kentucky, killed five people in a mass shooting and livestreamed the attack on Instagram.
Tech companies have gotten better in recent years at cooperating to tamp down the spread of mass shooting videos on mainstream platforms. But there's still no easy way to stop shooters from broadcasting their grisly crimes without shutting down livestreaming services altogether.
Here's what we know so far about what happened in Louisville:
How did Meta respond?
Instagram parent company Meta, which also owns Facebook, said in a statement that it quickly removed the livestream of the Louisville shooting on Monday morning.
But Meta did not immediately respond to questions Tuesday about how long it took to take down the livestream — or how many people watched it before it was removed.
Instagram allows users to anonymously report livestreams. Once a report has been submitted, the company’s policy states that it will review the broadcast “as quickly as possible” and remove those that violate its policies. Depending on the severity of the situation, the company may decide to end a live broadcast, disable the account or contact law enforcement.
Is this the first livestreamed shooting?
No. All told, there have been seven perpetrator-produced videos of violence posted on social media in the past four years that major companies have tried to keep off their platforms, according to the Global Internet Forum to Counter Terrorism.
In September, a gunman livestreamed his attack on people in Memphis, Tennessee, during a rampage that killed four and wounded three, police said. The shooting came four months after a white gunman massacred 10 Black shoppers and workers — and wounded three — in a shooting at a Buffalo, New York, supermarket that was livestreamed on the Amazon-owned gaming platform Twitch.
The platform said it removed that video in less than two minutes, which was not fast enough to prevent copies of the clip from spreading to other social media sites. But the removal was considerably faster than the 17 minutes it took Facebook to take down a livestreamed attack in 2019 at two mosques in Christchurch, New Zealand. That shooting killed 51 people.
Also in 2019, a gunman killed two people during an attack on a German synagogue that was likewise livestreamed on Twitch.
Last June, two Muslim men in India were accused of slitting the throat of a Hindu tailor and posting a video of it online amid rising tensions between Hindus and Muslims in the country.
How have social media companies changed their tactics?
The methods to curb attack videos have evolved since 2014, when Islamic State militants in Syria began sharing gruesome propaganda videos of the beheadings of kidnapped journalists and other hostages.
While those events were not shared live, it was "really the first time that there was a major terrorist incident designed for the social media era. And platforms realized that they had to do something,” said Courtney Radsch, a fellow at the UCLA Institute for Technology, Law & Policy.
Facebook, Microsoft, Twitter and Google-owned YouTube formed a group in 2017 called the Global Internet Forum to Counter Terrorism. Its mission expanded after the Christchurch killings “spurred a much more aggressive effort to not only eradicate” terrorist content online, but also to go after mass killing videos “perpetrated by white nationalists and other types of extremists,” said Radsch, who serves on a committee for the group.
The group, known as GIFCT, now has nearly two dozen members, including Amazon, Airbnb, Dropbox, Discord and Zoom. The platform hosting the original video submits a “hash” — a digital fingerprint corresponding to that video — and notifies the other member companies so they can block it on their platforms. While not perfect, experts say the response has grown quicker and now also encompasses PDF files, to stop the spread of manifestos.
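The fingerprinting idea can be illustrated with a short sketch. The example below uses a plain cryptographic hash (SHA-256), which only matches byte-identical files; in practice, hash-sharing systems of this kind are understood to rely on perceptual hashes so that re-encoded or slightly altered copies of a video still match. The function name here is illustrative, not from any GIFCT tool.

```python
import hashlib

def video_fingerprint(path, chunk_size=65536):
    """Compute a SHA-256 digest of a file, reading it in chunks
    so a large video never has to fit in memory at once.

    Note: a cryptographic hash like this changes completely if even
    one byte differs, which is why real content-matching systems use
    perceptual hashes instead. This only shows the fingerprint idea.
    """
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()
```

A platform would compute such a fingerprint once, share the short hex string rather than the footage itself, and other platforms could then check uploads against the shared list without ever exchanging the video.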
“Unfortunately, as these have continued to occur, the more of these we’ve gone through with our members, the more everyone strengthened their muscle memory around this,” said Sarah Pollack, a spokesperson for GIFCT.
A day after the Louisville shooting, clips from the gunman’s livestream were not easily findable on Instagram or other popular social media sites such as Twitter, Facebook and TikTok. The first calls to police were around 8:30 a.m. Monday. By midday, the GIFCT had put out its highest-level alert for coordinating efforts to stop the video’s spread.
What more could be done?
It’s hard to know if the effort to slow the spread of videos has done anything to deter the violence itself.
There’s a tension between platforms “wanting to give their users new capabilities and opportunities to engage” and the risks of livestreaming, said UCLA's Radsch.
Livestreaming, "with no delay, with no real oversight, can present really challenging situations when users use your platform to livestream terrorism, extremism, violence, suicide.”
She said platforms still need to take more seriously whether to adopt additional precautions.
“The challenge is, any precaution you put in place for a mass violence event could also potentially be leveraged to prevent livestreaming of police brutality or pro-democracy protests,” she said. “So it really is a double-edged sword.”
Also, while mainstream companies are coordinating their response, they have little influence over the “dark web” forums that are still trying to collect and share the videos — other than preventing them from obtaining footage in the first place.