Giulio Kowalski

A lot has been said about Facebook being an incumbent digital platform threatening competition in markets (and arguably much more remains to be said). However, the ‘law of Facebook’ incorporates different dimensions ‒ e.g., public, international, transnational, European, comparative ‒ that are at least as important as competition law and policy. It is with this premise in mind that the Jean Monnet Chair in Law & Transatlantic Relations and the Institute for the Study of European Law (ISEL) at City Law School hosted a webinar to shed light on these further dimensions of the law of Facebook and to discuss whether it can function as a blueprint for understanding legal issues ‒ and engineering possible solutions ‒ concerning the law of big tech in general. Let’s delve into the central matters discussed by the panellists concerning the multidimensional law of Facebook.

The challenges in the digital sphere are manifold. First, the EU-US framework set up to regulate essential data flows between the two sides of the Atlantic (the so-called “Privacy Shield”) was invalidated by the Schrems II decision in July 2020, which has led to reinforced requirements for data localisation and significant related implementation issues. Second, digital trade is gaining momentum, with important international trade agreements on this subject. However, this state of affairs also reveals a potentially dangerous fragmentation of data protection regimes across these various (digital) trade agreements. Third, the debate over a “digital tax” on online businesses remains very much open, centred around the doubts cast over the capacity of national taxation regimes to capture large online platforms. Finally, the enforcement of competition law has also shown signs of fatigue in its effort to rein in large tech companies, pushing regulators to attempt alternative regulatory instruments (e.g., the Digital Markets Act in Europe) that nevertheless have a long way to go before they can conceivably be implemented.

Against this background, the event’s chair, Professor Elaine Fahey, Jean Monnet Chair in Law & Transatlantic Relations at City Law School, together with four distinguished panellists, examined the state of the art of the law of Facebook from a variety of perspectives, following up on the 2020 City Law School event on the same topic.

First to take the floor was Francisco De Abreu Duarte, PhD Researcher at the Department of Law of the European University Institute, who illustrated his original “digital constitution question/hypothesis”: is Facebook a constitutional order, and are Facebook’s community standards (“FCSs”) constitutional provisions?

To address this question, the speaker first explained the similarities between a constitutional order and Facebook. The FCSs are a set of rules including rights and obligations (e.g., freedom of expression, but also limits on it, such as bans on hate speech and nudity) and procedures akin to those of a constitutional order (e.g., the possibility for users to report a post featuring hate speech or nudity to have it removed). Moreover, the FCSs are enforced as they would be under a constitutional order (e.g., Facebook has the power to remove a post containing hate speech or nudity). Considering these similarities, the speaker concluded, it is definitely worth discussing whether Facebook’s nature is that of a constitutional order (what the speaker called the “constitutional duck test”: if something looks like a duck, it is probably a duck, even if it is called something else).

De Abreu Duarte also noted two different trends concerning the relationship between big tech and public power. First, the privatisation of public powers, that is, when public powers “outsource” their public prerogatives to private entities (e.g., the balancing of fundamental rights when digital platforms decide on the lawfulness of content). Second, the publicisation of private law, whereby private entities start to assume content moderation powers, structures and procedures that resemble those of public law (e.g., the interpretation and enforcement of human rights law by Facebook’s Oversight Board).

De Abreu Duarte then outlined the analytical framework (i.e., a “constitutional methodology”) that he would deploy to test whether Facebook (and perhaps, more generally, any other large digital incumbent), as well as its content standards, can be considered elements of a constitutional order. This framework is based on the two fundamental objectives of 18th/19th-century constitutionalism: organising political power ‒ that is, distinguishing the public and private spheres ‒ and limiting such power under the law. Through this test, De Abreu Duarte concluded, it would be possible to determine whether a constitutional discourse around Facebook (and perhaps other large online companies) may exist.

The second speaker, Bilyana Petkova, Professor of Law at Graz Law School and Affiliate Scholar at the Yale Information Society Project, looked at fundamental rights protection in the online ecosystem, with a particular focus on online speech and the regulation of intermediaries. She noted how the internet, alongside its many benefits, has also helped spread disinformation, as seen for instance during the 2016 US election with the Russian interference. However, governments have not lost their power to control online speech, as some had imagined in the early stages of the internet era, and they are trying to react. As proof of this, Professor Petkova pointed to developments in EU Member States such as the adoption of the German Network Enforcement Act in 2017, aimed at tackling unlawful online content and particularly hate speech. Similar developments can also be found in France, which is trying to regulate online speech during elections. Such national developments show a tendency of states to act as “laboratories of democracy”, producing legislative prototypes for supranational legislation. For instance, the abovementioned German Network Enforcement Act constitutes a sort of “prequel” to the Digital Services Act. Also worth mentioning is the Platforms Act in Austria, which establishes a platform liability regime and clamps down on hate speech and unlawful content.

Professor Petkova also suggested that this state of the art represents the most recent moment in an “empirical history” of public powers flexing their muscles. This history dates back to when the first rules on online content moderation were being implemented in both the EU and the US. At that point in time, however, the public power was not only trying to show its firm hand through regulation but was also being collaborative towards online content distributors.

The debate on regulating intermediaries is ongoing, but Professor Petkova is optimistic. The different legislative proposals on online content being put forward around the world offer significant fora for cooperation between public powers and private entities on thorny issues such as liability regimes for illegal online content and the fight against online hate speech. Amid significant crackdowns and clashes, then, there is some room for collaboration ‒ and for optimism as well.

The third speaker was Thomas Streinz, Inaugural Executive Director of Guarini Global Law & Tech and Adjunct Professor of Law at NYU School of Law. Professor Streinz analysed the Facebook Oversight Board (the “Oversight Board”) through the lens of legal and global governance principles. In particular, he sought to anticipate the challenges of having corporate-installed bodies (such as the Oversight Board) in charge of large-scale global content regulation, and the consequences for global governance and speech regulation more broadly.

Professor Streinz started by explaining what the Oversight Board actually is. He immediately clarified that the Oversight Board is not exactly the “Supreme Court of Facebook”. Although it is entrusted with adjudicating potentially controversial content posted on Facebook, it lacks the fundamental features of a traditional court (such as being embedded in the “separation of powers” principle). Moreover, this comparison with the US Supreme Court would narrow the Oversight Board’s scope to predominantly US disputes (such as the currently pending dispute over whether the former US president should be allowed back onto the platform), which is not what the Oversight Board is about.

Furthermore, Professor Streinz added, thinking of the Oversight Board simply as a corporate structure mediated through a trust to separate it from Facebook would be too formalistic a legal analysis. More precisely, Professor Streinz explained, the Oversight Board is best understood as a global regulatory body charged with adjudicating speech controversies (i.e., what kind of content is allowed on a global platform such as Facebook). Another original feature of the Oversight Board is its composition, with attention to gender equality and racial diversity; its members are also characterised by their remarkable expertise in speech regulation.

Much more critical, though, is the question of what the Oversight Board actually decides. In this regard, Professor Streinz first noted that cases reach the Oversight Board in two ways: either through a user’s appeal against specific and potentially problematic content shared on Facebook, or through direct referral by Facebook itself. However, Professor Streinz observed that the real issue concerns the technical and legal limitations on the Oversight Board’s jurisdiction included in Facebook’s bylaws and procedural guidance. He focused in particular on the review by Facebook’s legal department, which determines the cases that the Oversight Board actually decides. According to the speaker, this legal filter shifts decision-making power from a supposedly independent body (the Oversight Board) to Facebook itself. Professor Streinz stressed how, before the Oversight Board can opine on a case, Facebook’s legal department preliminarily evaluates whether, for instance, there is a risk of liability for Facebook (i.e., a risk that the case, if reviewed by the Oversight Board, could result in criminal liability or adverse government action). This means that the Oversight Board is constrained in its role by Facebook, since its content moderation activity is conditional upon a preliminary review by Facebook’s legal department, designed to shield Facebook against potential liabilities. Given these legal constraints, Professor Streinz concluded, the question remains whether the Oversight Board is an effective tool for providing unbiased and effective solutions to the controversial issues of content moderation.

The fourth speaker was Peter Swire, Elizabeth & Tommy Holder Chair of Law and Ethics, Scheller College of Business, Georgia Institute of Technology. Professor Swire looked at the interface between data protection and Facebook, focusing on the frustrations stemming from this interrelation.

First, Professor Swire focused on the frustrations of those protecting fundamental rights. For instance, he raised the issue of non-compliance with the rule of law: little seems to have changed in the data sharing mechanisms that Facebook uses to transfer data from the EU to the US, notwithstanding the Schrems I and II decisions invalidating two EU-US data-sharing schemes. Tracking techniques are also a source of frustration in that they can facilitate mass surveillance. Further frustration results from the current struggle of national and supranational entities to regain sovereignty and rein in dominant platforms (the so-called “techlash”).

However, there is also a lot of frustration among US companies in the EU, including Facebook. For example, Professor Swire observed that US companies will find it hard to comply with the strict interpretation of EU data protection laws by Data Protection Authorities (and particularly the European Data Protection Board), especially following the Schrems decisions. Furthermore, Professor Swire noted that the GDPR is too focused on individual rights and protections, which can hardly be built into functional business systems or processes. According to him, this focus could cause the GDPR to be read as banning key emerging technologies (e.g., machine learning and AI) that make extensive use of personal data which is difficult to anonymise once processed by such technologies.

Lastly, the speaker mentioned the issue of double standards. For instance, several EU Member States are not complying with the CJEU’s rulings striking down their own data retention laws. Why, Professor Swire provocatively asked, should US firms be bound by these rules when EU governments themselves don’t follow them?

Privacy frustration is real. To ease at least some of it, Professor Swire concluded, we should pay more attention to actual business data practices and try to design rules that establish strong data protection standards while interoperating with business systems and processes.

The law of Facebook is multidimensional, and the issues stemming from it are multifaceted. As the panellists showed with their presentations, we have so far clarified only some of the societal, legal, and economic issues underpinning Facebook and large tech platforms. The City Law School event on the law of Facebook attempted to push this clarification forward: provide some food for thought and help set a potential research agenda for future ‒ and much welcome ‒ interdisciplinary academic research concerning digital businesses.

Do also consult the recording of the Webinar.

For more information about research related to Brexit and EU Law, visit the Institute for the Study of European Law, the Jean Monnet Chair in European Law and the Jean Monnet Chair in Law and Transatlantic Relations.