Technology

TikTok’s Action on Misinfo in Israel-Hamas Conflict

TikTok reported that it took “immediate” measures to counter misinformation after a warning from the European Union (EU) following the attack by Hamas on Israel.

On Friday, the EU urged TikTok’s chief executive, Shou Zi Chew, to “urgently step up” its efforts and to “spell out” within 24 hours how the platform was complying with European law.

Social media platforms have witnessed a surge in misinformation about the conflict, such as altered images and mislabeled videos. TikTok said it had removed “violative content and accounts.”

“We immediately mobilised significant resources and personnel to help maintain the safety of our community and integrity of our platform,” the company said in a statement on Sunday.

In a letter to the company on Friday, EU commissioner Thierry Breton warned that TikTok needs to be mindful of its popularity among young people and “protect children and teenagers from violent content and terrorist propaganda as well as death challenges and potentially life-threatening content.”

TikTok Acts on Videos Concerning Israel-Hamas Conflict Following EU Warning

The EU also gave similar warnings to X (formerly Twitter), YouTube, and Meta, the owner of Facebook and Instagram, about misinformation, along with a 24-hour deadline.

TikTok, owned by Chinese firm ByteDance, detailed on its website the actions it has taken to combat misinformation and hateful content. It reported having created a command center, enhanced its automated detection systems to remove graphic and violent content, and added more moderators who speak Arabic and Hebrew.

“We do not tolerate attempts to incite violence or spread hateful ideologies,” TikTok stated. “We have a zero-tolerance policy for content praising violent and hateful organisations and individuals, and those organisations and individuals aren’t allowed on our platform.”

TikTok spoke out against terrorism, saying it was “shocked and appalled by the horrific acts of terror in Israel last week” and “deeply saddened by the intensifying humanitarian crisis unfolding in Gaza.”

New EU rules governing the kind of content allowed online came into force in August 2023. The Digital Services Act (DSA) requires so-called very large online platforms – those with over 45 million EU users – to proactively remove “illegal content” and to show, on request, that they have taken measures to do so.

The EU previously said it was not in a position to comment on what would come next in these specific cases, but explained what was hypothetically possible under the law. The DSA allows the EU to conduct interviews and inspections and, if it is unsatisfied, to open a formal investigation.

If it decides that a platform has not complied or is failing to address the problems it has identified, and is putting users at risk, the Commission can take steps including issuing fines and, as a last resort, asking judges to temporarily ban a platform from the EU.

With information from the BBC
