Days after the Israel-Hamas conflict erupted last weekend, major social media companies such as Meta, TikTok and X (formerly Twitter) received a stern warning from a prominent European regulator about their obligation to police disinformation and violent content tied to the crisis. The alert highlights the critical role social media platforms play in shaping public discourse, especially during times of geopolitical tension.
The warning, issued by Thierry Breton, the European Commissioner for the Internal Market, underscored the repercussions of failing to comply with the Digital Services Act. That law sets stringent rules governing unlawful online content, and non-compliance can carry significant consequences for companies operating within the European Union.
Breton specifically reminded Elon Musk, the owner of X, that a formal investigation, and with it financial penalties, could follow if compliance issues arise. His correspondence serves as a clear reminder that Europe’s regulatory landscape is far more proactive in addressing harmful online content than that of the United States.
In contrast, the regulatory environment in the United States is shaped by the First Amendment, which protects a wide range of speech, including controversial or offensive views. That constitutional protection often complicates the government’s ability to address misinformation, particularly around elections and public health crises such as Covid-19. A legal battle initiated by Republican state attorneys general is currently underway, challenging the Biden administration’s efforts to curb misleading content on social media.
The lawsuit alleges that the Biden administration overstepped its bounds in urging social media platforms to remove certain types of posts. An appeals court recently ruled that actions taken by the White House, the Surgeon General’s office and the FBI may have infringed First Amendment rights by exerting undue pressure on platforms to moderate content. With the case now awaiting a Supreme Court decision, its implications for future government interactions with online platforms remain uncertain.
Against that legal backdrop, David Greene, civil liberties director at the Electronic Frontier Foundation, said of Breton’s warnings, “I don’t think the U.S. government could constitutionally send a letter like that.” The remark highlights the stark differences between European and U.S. approaches to content moderation and to defining harmful speech.
Kevin Goldberg, a First Amendment expert at the Freedom Forum, explained that the U.S. has no formal legal definition of hate speech or disinformation, because such speech is generally protected by the Constitution. “What we do have are very narrow exemptions from the First Amendment for things that may involve what people identify as hate speech or misinformation,” he said. For instance, statements characterized as hate speech might fall under the First Amendment’s exemption for “incitement to imminent lawless violence,” while certain forms of misinformation can be subject to legal action when they violate laws against fraud or defamation.
However, the First Amendment’s expansive protections suggest that many provisions of the Digital Services Act would not be workable in the U.S. Goldberg emphasized that American government officials cannot pressure social media platforms the way EU regulators currently are over the Israel-Hamas conflict, “because too much coercion is itself a form of regulation, even if they don’t specifically say, ‘we will punish you.’” The distinction underscores how differently the two jurisdictions balance regulation against free speech rights.
Christoph Schmon, international policy director at the Electronic Frontier Foundation, characterized Breton’s letters as a strong signal to social media platforms that the European Commission is watching their actions closely. The Digital Services Act requires large online platforms to maintain robust mechanisms for removing hate speech and disinformation while also weighing free expression rights. Companies that fail to comply can face fines of up to 6% of their global annual revenue.
In the U.S., it is precisely that prospect of government-imposed penalties that creates legal risk. Greene cautioned that officials must word their communications carefully, making clear that any request for content moderation is voluntary and carries no threat of enforcement action.
A series of letters sent by New York Attorney General Letitia James to various social media platforms illustrates the delicate balance U.S. officials seek to strike. James solicited information from Google, Meta, X, TikTok, Reddit, and Rumble regarding their strategies for identifying and removing calls for violence and acts of terrorism, citing “reports of growing antisemitism and Islamophobia” following the recent attacks in Israel. Importantly, unlike Breton’s warnings, these letters do not include threats of penalties for non-compliance.
How the new European regulations and warnings will affect tech platforms’ approach to content moderation, both within the EU and globally, remains uncertain. Goldberg noted that social media companies have long navigated differing national restrictions on acceptable speech, suggesting they might confine any new policies to the European market. The industry has, however, previously applied EU rules such as the General Data Protection Regulation (GDPR) well beyond Europe.
Individual users, meanwhile, can reasonably adjust their own settings to filter out content they prefer not to see. Goldberg emphasized that this choice should rest with each user, allowing them to tailor their online experience to their preferences.
Given the complex history of the Middle East, Goldberg argued that individuals should have access to a wide array of content to make informed decisions, rather than being restricted to content deemed appropriate by governmental authorities.