Facebook Oversight Board’s Trump Ruling Underscores Need for Clear Regulatory Frameworks for Social Media

May 6, 2021

In its ruling temporarily upholding the social media giant’s ban on former President Donald J. Trump, the Facebook Oversight Board instructed the company to reassess the suspension within six months, noting that Facebook’s policies do not define the parameters for an indefinite suspension. The non-decision in this high-profile case illustrates the difficulties stemming from the lack of clear frameworks for regulating social media.


For starters, says web science pioneer James Hendler, social media companies need a better definition of the misinformation they seek to curb. Absent a set of societally agreed-upon rules, like those that define slander and libel, companies currently create and enforce their own policies, and the results have been mixed at best.


“If Trump wants to sue to get his Facebook or Twitter account back, there’s no obvious legal framework. There’s nothing to say of the platform, ‘If it does X, Y, or Z, then it is violating the law,’” said Hendler, director of the Institute for Data Exploration and Applications at Rensselaer Polytechnic Institute. “If there were, Trump would have to prove in court that it doesn’t do X, Y, or Z, or Twitter would have to prove that it does, and we would have a way to adjudicate it.”


As exemplified by disputes over the 2020 presidential election results, political polarization is inflamed by the proliferation of online misinformation. A co-author of the seminal 2006 Science article that established the concept of web science, Hendler said that “as society wrestles with the social, ethical, and legal questions surrounding misinformation and social media regulation, it needs technologists to help inform this debate.”


“People are claiming artificial intelligence will handle this, but computers and AI are very bad at ‘I’ll know it when I see it,’” said Hendler, whose most recent book is titled Social Machines: The Coming Collision of Artificial Intelligence, Social Networking, and Humanity. “What we need is a framework that makes it much clearer: What are we looking for? What happens when we find it? And who’s responsible?”


The legal restrictions on social media companies are largely dictated by a single sentence in the Communications Decency Act of 1996, known as Section 230, which establishes that internet providers and services will not be treated as the publishers of third-party content, and thus are not legally responsible for most of what their users post. According to Hendler, this provision no longer adequately addresses the scale and scope of the power these companies wield.


“Social media companies provide a podium with an international reach of hundreds of millions of people. Just because social media companies are legally considered content providers rather than publishers, it doesn’t mean they’re not responsible for anything on their site,” Hendler said. “What counts as damaging misinformation? With individuals and publishers, we answer that question all the time with libel and slander laws. But what we don’t have is a corresponding set of principles to adjudicate harm through social media.”


Hendler has extensive experience in policy and advisory positions concerning artificial intelligence, cybersecurity, and internet and web technologies, and their implications for issues such as the regulation of social media and of powerful technologies including facial recognition. He is available to speak on a wide range of policy questions related to social media, information technologies, and AI.

Connect with:
  • James Hendler, Director, Future of Computing Institute
    Leading researcher in the Semantic Web and artificial intelligence
