Tech innovations on the front line have brought the violence of the war in Ukraine to our screens
Thousands of hours of combat footage recorded amid Russia's war in Ukraine have flooded the internet, exposing an increasingly online generation to the conflict's most graphic moments.
With governments and social media companies unequipped to deal with the deluge, experts say, the responsibility is often falling on parents and guardians — and children themselves — to navigate the carnage.
Warning: This story contains graphic descriptions that may disturb some readers.
Violent footage ranging from air strikes on apartment buildings to snipers picking off infrared targets has become commonplace in the context of Russia's invasion of Ukraine.
One style of content, so-called "drone drops", has garnered substantial online popularity. The videos show drone pilots manoeuvring their crafts to drop grenades through the hatches of Russian tanks.
As commercial drones have been appropriated for use on the battlefield, their built-in cameras have given a never-before-seen perspective of combat.
It's a powerful propaganda tool, as videos amass millions of views online.
With slick production value and accompanying music, the content is made to be consumed, but the footage is often unpleasant to watch.
In some clips, Russian troops shelter in shallow dugouts or attempt to shoot the drones down as grenades land around them. Others show wounded soldiers being picked off in open fields.
One disturbing video shows grenades dropped into a Russian foxhole where three soldiers are sheltering. It is not clear if the soldiers were actively engaged in fighting.
The drone hovering overhead captures and broadcasts what appears to be the moment of their gruesome deaths.
Down a digital rabbit hole
Violent content, particularly war-related, has been a feature of the internet landscape since the early 2000s, with "shock sites" like the now-defunct LiveLeak justifying footage of gruesome violence as furthering citizen journalism in the digital age.
Many shock sites were a place for users to share disturbing content recreationally by seeking out the most gratuitous material in an attempt to one-up other users.
Before the rise of social media, however, these websites and their content were largely spread by word of mouth.
Now, they're often recommended by the algorithms of social media platforms.
"There is a distinction between the kind of pre-algorithmic-culture version of the internet, where you did have to seek things out, and just lingering on your TikTok For You page," Michael Dezuanni, a professor of digital media and learning in the School of Communications at Queensland University of Technology, said.
"This notion of the rabbit hole — or that the algorithm provides more of the same, but also potentially more extreme versions of the same — is fairly well documented now."
Most drone-drop videos are first shared by fighters on their public Telegram channels.
From there, content is collected on dedicated subreddits and then spread across platforms including Twitter and TikTok.
While these sites have rules in place to police the most extreme content, human moderation has been drastically reduced as companies cut costs or attempt to steer their online culture in new directions.
"The directive from [Elon] Musk has been much more that he's a free-speech maximalist," said Tama Leaver, an associate professor in internet studies at Curtin University in Perth, and president of the Association of Internet Researchers.
"What we've seen is almost the entire content moderation teams [at Twitter] have either been completely cut back, or completely let go."
Disturbing content is still removed when caught by moderators, Dr Leaver said, but the sheer volume of posts means that these videos remain up for longer, amassing thousands of views and shares before being removed.
"More and more of that moderation is either getting through the cracks, so things that probably should be taken down that aren't, or aren't being taken down that quickly, or the people that were working on the nuance just don't work there anymore," he said.
Some companies turned to algorithms, particularly during the pandemic, to moderate in place of people, Dr Leaver said.
"But what we have seen is that the algorithms have never gotten as good as they planned to be," he said.
An open internet, or a regulated internet?
According to Professor Dezuanni, the platforms have a responsibility to prevent this kind of content from reaching children and younger users.
"But there's always a trade-off between having an open internet and having a regulated internet," he said.
In 2015, the Australian government established the eSafety Commissioner — a globally unique regulatory agency geared towards protecting Australians from harmful online content, with the power to require service providers to remove material or block access to websites.
"The eSafety Commissioner has probably more power than almost any comparable body in any other nation," Dr Leaver said.
"But they can only respond to things after it's been brought to their attention."
According to a 2022 report by the eSafety Commissioner, 37 per cent of Australians aged 14 to 17 said they had been exposed to gory and violent material online.
While the eSafety Commissioner can remove extreme online material, experts say its mandate is largely confined to terrorism-related content and child exploitation on individual sites, limiting its effectiveness when it comes to social media.
A spokesperson for eSafety Commissioner Julie Inman Grant told the ABC that the body was working to address harmful content through new regulatory tools, and working directly with the industry.
"Technology companies have a major responsibility and can make greater strides in enforcing their own policies and better protecting young people," they said.
"Parents and carers can also play an important role. We recommend setting parental controls and limiting device use to open areas of the home where possible."
The spokesperson added their powers do extend to social media, and that their mandate covers "any extreme violence or other abhorrent material".
Media literacy in a digital society
While most young people online are unlikely to encounter graphic footage while, for example, casually scrolling TikTok, changing algorithms, a steady stream of content, and a lack of reliable moderation are making the internet increasingly volatile.
A TikTok spokesperson said the platform took the safety of its 8.5 million Australian users seriously and all content on the platform must abide by strict community guidelines, overseen by 40,000 "dedicated trust and safety professionals".
"We remove videos that depict graphic deaths or real world graphic violence," they said.
"In addition, we age gate some content for users under 18. And other videos might contain an 'opt-in' screen or warning prior to the video being available to view."
When contacted for comment, Twitter responded with the company's standard automated reply of a poo emoji.
Professor Dezuanni said an increased focus on media literacy in schools was one way for individuals to better prepare themselves for what they encounter online.
"Children and young people tend to be more emotionally and psychologically impacted by real world violence," he said.
"Young people respond really well to having open conversations and reflecting critically on the content that they're consuming.
"Often young people also need to be given permission to say no. They need to be told that it's OK to tell their mates that they don't want to see that thing, because it's not the kind of thing they like seeing."
Generally, school-age internet users aren't as interested in watching gruesome war-related content as they are in videos of schoolyard fights, or other content which appeals to them on a peer level.
"Young people are much more literate than we usually give them credit for," Dr Leaver said.
"They are pretty good at navigating around stuff that they don't want to see."
However, with graphic content only a few clicks away, and the potential for algorithms to turn a flicker of curiosity into an upsetting experience, the bigger question, Dr Leaver posits, is what this says about our broader culture.
"It's not just about combating horrible stuff online — it's 'should we be addressing a culture where people are motivated to create and share horrible stuff online?'" he said.
"There is a bigger context here that is never going to be solved just by moderation, just by tools, or just by better policing."