Immediately after Charlie Kirk was shot during a college event in Utah, graphic video of what happened was available almost instantly online, from several angles, in slow-motion and real-time speed. Millions of people watched — sometimes whether they wanted to or not — as the videos autoplayed on social media platforms.
Video was easy to find on X, on Facebook, on TikTok, on Instagram, on YouTube — even on President Donald Trump’s Truth Social. The platforms generally said they were removing at least some of the videos if they violated their policies, for instance if the poster was glorifying the killing in any way. In other cases, warning screens were applied to caution people they were about to see graphic content.
Two days after Kirk’s death, videos were still easily found on social media, despite calls to remove them.
“It was not immediately obvious whether Instagram, for example, was just failing to remove some of the graphic videos of Charlie Kirk being shot or whether they had made a conscious choice to leave them up. And the reason that it was so hard to tell is that, obviously, those videos were circulating really widely,” said Laura Edelson, an assistant professor of computer science at Northeastern University.
The events illustrate the content moderation challenges platforms face in handling fast-moving real-time events, complicated by the death of a polarizing conservative activist who was shot in front of a crowd armed with smartphones recording the moment.
Ambiguous policies
It’s an issue social media companies have dealt with before. Facebook was forced to contend with people wanting to livestream violence after a mass shooting in New Zealand in 2019. People have also livestreamed fights, suicides and murders.
Similar to other platforms, Meta’s rules don’t automatically prohibit posting videos like those of Kirk’s shooting, but warning labels are applied and the videos are not shown to users who say they are under 18. The parent company of Instagram, Facebook and Threads referred a reporter to the company’s policies on violent and graphic content, which it indicated would apply in this case, but had no further comment.
YouTube said it was removing “some graphic content” related to the event if it doesn’t provide sufficient context, and restricting videos so they could not be seen by users under age 18 or those who are not signed in.
“We are closely monitoring our platform and prominently elevating news content on the homepage, in search and in recommendations to help people stay informed,” YouTube said.
In a statement, TikTok said it is “committed to proactively enforcing our Community Guidelines and have implemented additional safeguards to prevent people from unexpectedly viewing footage that violates our rules.”
TikTok also moved to restrict the footage from its “for you” feed so people have to seek it out if they want to see it, added content warning screens, and worked to remove videos that showed graphic, close-up footage.
Rewarding engagement
Social media platforms’ algorithms reward engagement. If a video gets a lot of reaction, it moves to the top of people’s feeds, where more people see it and engage with it, continuing the cycle.
“I mean, this is the world that we have all created. This is the deal we all made. The person who gets to decide what’s newsworthy on Instagram is Mark Zuckerberg. The person who gets to decide what stays up on X is Elon Musk. They own those platforms, and they get to decide what is on them. If we want another world, well, then someone else needs to make it,” Edelson said. “The fact is that we live in a world where the most important channels for what information circulates are controlled by single individuals.”
And it is these individuals who decide what to make a priority. Meta, X and other social platforms have cut back on human content moderation in recent years, relying on artificial intelligence that can both over- and under-moderate.
Regulations vary by region
The U.S. has no blanket regulation prohibiting violent content from being shown on the internet, although platforms generally attempt to restrict minors from being able to see it. Of course, this doesn’t always work, since users’ ages are not always verified and kids often lie about their ages when signing up for social platforms.
Authorities in other places have drawn up laws that require social media companies to do more to protect their users from online harm. Britain and the European Union both have wide-ranging laws that make tech platforms responsible for “online safety.”
The Online Safety Act requires platforms, even those not based in the United Kingdom, to protect users from more than a dozen types of content, from child sexual abuse to extreme pornography.
Content that depicts a criminal offense such as a violent attack on someone isn’t necessarily illegal content, but platforms would still have to assess whether it falls foul of other banned material such as encouraging terrorism.
The British government says the rules are especially designed to protect children from “harmful and age inappropriate content” and give parents and children “clear and accessible ways” to report problems online.
That includes content that “depicts or encourages serious violence or injury,” which online services are required to prevent children from seeing.
Violations of the U.K. rules can be punished with fines of up to 18 million pounds ($24.4 million) or 10% of a company’s annual revenue, and senior managers can also be held criminally liable for not complying.
The U.K. law is still fairly new, and only started taking effect in March as it gets rolled out in stages.
The rest of Europe has a similar rule book that took effect in 2023.
Under the European Union’s Digital Services Act, tech companies are required to take more responsibility for material on their sites, under threat of hefty fines. The biggest online platforms and search engines, including Google, Facebook, Instagram and TikTok, face extra scrutiny.
Platforms must give users “easy-to-use” mechanisms to flag content deemed illegal, such as terrorism and child sexual abuse material, Brussels says, adding that platforms have to then act on reports in a “timely manner.”
But it doesn’t require platforms to proactively police for, and take down, illegal material.
___
AP Media Writer David Bauder contributed to this story.
Copyright
© 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.