Meta Enhances Monitoring Amid Ongoing Middle East Conflict
Meta, the parent company of Facebook and Instagram, announced on Friday that it has stepped up enforcement of its policies against violence and misinformation in light of the ongoing conflict between Israel and the Palestinian militant group Hamas.
The tech behemoth has set up a “special operations center” staffed with experts, including fluent Hebrew and Arabic speakers, to monitor the situation and expedite the removal of content breaching Meta’s policies.
In the initial three days of the conflict, Meta reported the removal or flagging of over 795,000 pieces of content in Hebrew and Arabic, citing violations of its policies on dangerous organizations and individuals, violent and graphic content, and hate speech, among others.
The company underlined that Hamas is prohibited on Facebook and Instagram under its dangerous organizations and individuals policy. “We want to reiterate that our policies are designed to give everyone a voice while keeping people safe on our apps,” Meta stated. “We apply these policies regardless of who is posting or their personal beliefs, and it is never our intention to suppress a particular community or point of view.”
Amid a surge of misinformation related to the conflict on social media, Meta also mentioned its collaboration with AFP, Reuters, and Fatabyyano to fact-check claims and demote content rated false in users’ feeds.
The announcement follows a letter to Meta CEO Mark Zuckerberg from the European Union, urging the company to be “very vigilant” about eliminating “illegal content” and disinformation.
Thierry Breton, EU Commissioner for Internal Market, stressed Meta’s responsibility to take “timely, diligent and objective action” following notifications of illegal content on its platforms under the bloc’s new online regulations known as the Digital Services Act.
EU Investigates X for Handling Violent Content and Disinformation
X owner Elon Musk received a more sternly worded warning from Breton on Tuesday regarding the spread of “illegal content” and disinformation on the platform formerly known as Twitter.
On Thursday, the EU declared it would investigate X over its management of “terrorist and violent content and hate speech” in relation to the conflict in Israel and Gaza.
Since the conflict erupted, false claims have swamped X, with posts presenting old and unrelated photos and videos, and even a video game clip, as footage from the current Israel-Hamas war.
Experts have cautioned that although viral misinformation typically propagates during conflicts, Musk’s alterations to the platform since his acquisition of the social media company last year have aggravated the issue.
Musk has dialed back content moderation measures, reinstated banned accounts, and scrapped the platform’s legacy verification system in favor of a paid subscription service since purchasing Twitter for $44 billion last October.
Before the EU unveiled its investigation, X CEO Linda Yaccarino responded to Breton’s letter on Thursday, mentioning that the platform has deactivated hundreds of accounts linked to Hamas and removed or labeled tens of thousands of pieces of content.
She also said the platform had “redistributed resources and refocused internal teams” to ensure the “proportionate and effective assessment and addressing of identified fake and manipulated content during this constantly evolving and shifting crisis.”
“There is no place on X for terrorist organizations or violent extremist groups and we continue to remove such accounts in real time, including proactive efforts,” she added.
With information from The Hill