On why the UN Global Digital Compact is a Trojan horse in Internet governance
The United Nations has finally made public the third and latest revision of the Global Digital Compact, which it originally published under the silence procedure. This procedure gives member states 72 hours to raise concerns, that is, to break their silence. Should they fail to do so, the text is adopted as is. As of 17 July, more than 10 member states have broken silence over controversial paragraphs. Co-facilitators Sweden and Zambia have since scheduled informal consultations for 17 August 2024, roughly six days away at the time of writing this blog. The Compact is meant to be adopted as an annex to the "Pact for the Future" during the "Summit of the Future" in New York on 21 September.
I got a copy of the third revision just this weekend. The changes from the second to the third draft of the GDC are subtle, so subtle that I had to print out the two drafts and compare them side by side just to spot the differences. Yet despite how subtle they are, the shift in language and emphasis is significant, and it could have profound implications for the protection and promotion of human rights globally. To avoid repeating arguments that have already been made against the GDC, I will offer five short key points on why the GDC is weak and redundant and, most importantly, how it opens doors for authoritarian regimes to flourish in the digital age.
First, there is nothing particularly groundbreaking about the GDC. I have yet to find a reasonable rationale for why Guterres would even propose such a ludicrous document just as he is on his way out. Basically, everything written in it has already been discussed within the walls of the IGF, WSIS, IETF and W3C. Its call for the creation of yet another Scientific Panel (para. 55), this time for AI, particularly stands out. Given the plethora of existing bodies dealing with similar issues (like UNESCO and the ITU), it is worth asking why we need another panel on emerging technologies. Is it Guterres' way of cementing his legacy? Bad idea, if it is. But I guess we'll never know. The creation of another panel does not happen without substantial funding, expert recruitment, and administrative support. For an organisation that repeatedly calls for funding, I am certain it could find programs where it would be more prudent to allocate resources towards more direct interventions in technology policy and implementation.
The GDC's objective of centralising Internet governance in New York is the very antithesis of the qualities that have made the Internet thrive. Piling additional bureaucracy and centralisation onto Internet governance is never the way to go. For this, we need only look at the development and evolution of foundational Internet protocols like TCP/IP and standards like HTML and DNS, the very cornerstones of the Internet's architecture. These were not the products of a single, centralised authority. Rather, they emerged from broad, collaborative efforts involving diverse stakeholders across various sectors and nations. Guterres' current proposal is not only detrimental to the foundational principles of the Internet, it also has far-reaching implications for the future. How? Well, the obsession with top-down models of Internet governance seen in countries like Russia, Iran, North Korea and China has shown us one thing: government control over the Internet leads to severe limitations on access and widespread censorship. By centralising global Internet governance, we risk replicating these failures on a global scale.
Second, the GDC apparently subscribes to the notion of technological determinism, the view that technology is the primary cause of social change. A determinist view of technology downplays the role of the decision-makers behind these technologies. It renders the investors, the venture capitalists, the policymakers, the very people who make critical decisions about the design and deployment of technology, invisible to the public eye. David Nye, in his book "Technology Matters: Questions to Live With", argued that people have historically shaped technologies to fit their needs, rather than being merely shaped by them. The GDC should therefore focus on regulating the actors behind the technology instead of putting all its eggs in the basket of regulating the technologies they create.
The heavy reliance on corporate self-regulation (paras. 25, 31(a, b, c) and 35(a, c)), which we now know is just a euphemism for a 'do what you want and you might get a slap on the wrist' approach, underscores the urgent need for a framework that can cut through regulatory capture. This means the United Nations Guiding Principles on Business and Human Rights framework is already out of the conversation. If any of these "we call on digital technology companies and social media platforms" appeals had worked in the past, we would not be in the pandemonium we are in today. History is littered with examples of how, without mandatory compliance measures and sanctions for violations, reliance on corporate goodwill is a recipe for failure. Tech companies such as Meta, OpenAI, Google, Amazon and Microsoft, amongst many others, have been implicated in privacy breaches and unethical practices despite the existing guidelines.
The third revision of the GDC mentions the private sector more often than it mentions civil society. This is one thing the GDC has excelled at: showing us whose interests are truly being served and the potential consequences for equity and sustainability in the global digital landscape. As businesses whose main goal is to accumulate as much profit as possible, these companies will advocate for less regulation and oversight precisely in the areas where greater control is necessary, such as data privacy and security. It would not be an overstatement to claim that the current version of the GDC is a win for state overreach and capitalism.
Third, much like a thief who operates under the veil of darkness to avoid detection and accountability, the manner in which this draft was circulated among 'stakeholders', which, by the way, is just another term for member states, bypasses the broad and inclusive feedback mechanisms that characterised earlier revisions. If anything, this approach tells us one thing we must brace for. Just as the voices of civil society and human rights organisations were sidelined during the Cybercrime Treaty negotiations, the future of human rights activism in the United Nations is grim. We must prepare for a system where the very people who are most attuned to the on-the-ground realities of digital rights and human rights will be deliberately pushed to the margins of the discussion. This shift towards state-centric governance threatens to erode the very foundations of the UN's commitment to human rights, much as a night-time theft would leave a community feeling vulnerable and violated. It is hypocritical of the United Nations to call for "civil society groups to endorse the Compact and take active part in its implementation and follow-up" (para. 65). Calling for civil society to endorse a document from which it was deliberately marginalised shows a disingenuous effort to appear inclusive while structuring the Compact in a way that inherently favors more powerful stakeholders. For this, there is only one thing I can say:
Fourth, a quick run-through of the human rights language in both drafts shows a significant weakening of the integration of human rights into technology governance. In short, it dilutes the commitment to human rights as foundational principles. For a pact that bills itself as 'global' in nature, the lack of specificity regarding how human rights will be upheld throughout the lifecycle of digital and emerging technologies leaves much open to interpretation. And as we have seen in the recently passed Cybercrime Treaty, the broad and often vague language used in defining objectives such as fostering "an inclusive, open, safe, and secure digital space" (para. 32) is very likely to be exploited by authoritarian governments to justify stringent control over digital ecosystems under the guise of national security or cultural preservation. China's Cybersecurity Law uses the same language to ensure its "cyberspace sovereignty". Authoritarian governments love vague terminology because they can change its meaning whenever it is convenient. The same goes for Viet Nam's Cybersecurity Law, which uses ambiguous language on what constitutes a threat to "national security" to justify strict controls over online content and surveillance of digital communications. This flexibility allows such regimes to interpret these terms in ways that lead to greater censorship, surveillance, and restriction of digital freedoms.
This inconsistency in the GDC is even more apparent in the discarding of the references to the "non-military domain" that appeared in the second revision (paras. 13(e), 20, 21(i) and 49). This means the GDC has blurred the lines between civilian and military cybersecurity measures. Military-focused cybersecurity prioritises national security and often justifies extensive surveillance, data collection, and even the suppression of information in the name of security. When applied in civilian contexts, such measures can lead to pervasive surveillance of ordinary citizens and infringe on their human rights. The indiscriminate collection of data, justified under the guise of cybersecurity, will result in the monitoring of political dissent, the targeting of activists, and the erosion of civil liberties.
And if that wasn't enough, the third revision also watered down the GDC's previous calls for cybersecurity-related capacity building (paras. 13(e), 21(i)). The absence of specific cybersecurity initiatives could lead to weaker defenses against cyber threats that target vulnerable populations. Here, again, activists, journalists, and human rights defenders are particularly at risk. If cybersecurity measures are not prioritised, we are looking at more targeted attacks designed to silence dissent and curb freedom of speech. This leads me to my next point: the underplaying of the need to address state surveillance and data privacy in the digital age. Recent events have revealed how states' use of surveillance technologies has become too pervasive, and this is often done without adequate judicial oversight or checks and balances. The Pegasus spyware is the perfect example here, as it has led to widespread violations of privacy and has been linked to crackdowns on dissent in countries like Mexico, Saudi Arabia, Indonesia and India. By relegating the issue to a single, brief clause in para. 30(d), the GDC not only fails to recognise the significant human rights risks posed by unchecked surveillance practices but also omits any actionable commitments that would ensure robust protection of human rights in the face of technological advancement. The GDC is nothing but a paper full of word salad.
The mere mention of "international law" in para. 30(d) is used to sidestep real accountability. Just look at Russia's invasion of Ukraine, Saudi Arabia's human rights violations in Yemen, China's surveillance in Xinjiang, and Israel's self-serving justifications for settlements and security measures. These examples are clear as daylight. The lack of explicit commitments to specific human rights instruments will only set the stage for states to continue their egregious abuses under the flimsy guise of compliance with international law. This glaring omission leaves a critical gap in human rights protections, effectively giving states a free pass to violate privacy and other fundamental rights. And don't get me started on the lack of acknowledgement of the ongoing debates around encryption backdoors, which some governments are aggressively pushing for. These backdoors would not only obliterate privacy rights but also penalise activism at its core. The GDC's silence on this crucial issue is deafening.
Another alarming change from the second to the third revision is how the role of the Office of the High Commissioner for Human Rights (OHCHR) has been recast. The second draft explicitly notes the OHCHR's efforts to provide expert advice and practical guidance on human rights and technology:
“We take note of OHCHR’s ongoing efforts to provide, upon request, expert advice and practical guidance on human rights and technology issues to governments, the private sector and other stakeholders, including through the establishment of a UN Digital Human Rights Advisory Service within existing resources.” (para. 24)
This clause has been weakened to a mere acknowledgement in the third revision (para. 24):
“We acknowledge OHCHR’s ongoing efforts to provide through an advisory service on human rights in the digital space, upon request and within existing and voluntary resources, expert advice and practical guidance on human rights and technology issues to governments, the private sector and other stakeholders”.
You can see here how the GDC has scaled back and put strict limitations on OHCHR's capacity to influence by dropping the commitment to establish a dedicated UN Digital Human Rights Advisory Service. To me, it sounds more like "Yes, OHCHR, you released a report. Thank you. Next." But it didn't stop there: the "upon request" framing, now paired with "within existing and voluntary resources", fundamentally positions the OHCHR's intervention as reactive rather than proactive. Basically, "if you are not being asked, shut up." A reactive advisory model is nothing short of a token that states will use as they please. The lack of a permanent, dedicated structure also means that human rights will take a back seat.
Lastly, the biggest elephant in the room: technological imperialism. With all its mentions of 'South-South' and 'North-South' terminology, the GDC appears to be attempting to address the disparities in technology across countries. But a careful analysis of the paragraphs illustrates how the GDC may perpetuate, if not exacerbate, tech imperialism. Paragraphs 19, 21(b) and 28(a) call for an "enabling environment" that supports innovation and digital entrepreneurship. Sure, it sounds appealing, but this framework hinges on two things: partnerships and technologies that were primarily developed in high-income countries. Following this logic, developing countries will become extremely dependent on foreign technology and expertise, which, in most cases, does not align with their local needs or capacities. The phrase "mutually agreed terms", mentioned five times, highlights this implicit risk. We know how "terms" often translate into terms dictated by the more powerful party, especially when there is a significant power imbalance between the countries involved. With regard to technology, this means terms set by large tech firms and highly developed countries, leaving developing countries with little choice in the matter.
The lack of any enforceable mechanisms in the GDC, which we have shown in the previous paragraphs of this blog, will only ensure that global data governance initiatives are co-opted by powerful nations and corporations. Para. 62 specifically calls for "increased investment, particularly from the private sector and philanthropy, to scale up AI capacity building for sustainable development, especially in developing countries." This is a textbook example of how market concentration and monopolies start. The involvement of large multinational tech corporations in AI capacity building will lead to a concentration of technological resources and expertise in the hands of a few. These corporations will also stifle local competition by outcompeting or acquiring local startups and companies that lack similar resources or scale. We have seen this in the mobile telecommunications market in Africa, where large international companies have established dominant positions, making it difficult for smaller local companies to compete effectively. In the AI domain, if companies like Microsoft or Google lead their versions of capacity-building efforts, they will most likely use their own technologies. This is a gateway for companies to dominate AI markets and infrastructure in developing countries by setting standards and controlling the ecosystem in ways that favor their business models and products. And we haven't even gotten to the peak of the problem yet. We know that private sector-driven AI development requires extensive data collection and processing, and data generated in developing countries is often exploited by multinational corporations. A good example of this is Facebook's Free Basics in India, which created a walled garden of internet services while potentially accessing a wealth of user data under the guise of providing free internet access.
The Global Digital Compact, as it stands, is not just an ineffective tool for safeguarding digital rights. That much is already a given. If the manner in which the GDC has been developed and revised is not enough for you to despise this document, I hope I have provided you with at least four more arguments to strengthen your aversion. The shortcomings in the document point to a deeper problem with how policymakers at the UN conceptualise digital governance. The intense drive towards centralising Internet governance mirrors the authoritarian tendencies that suppress open and free communication. These issues provide a compelling basis to critically evaluate and ultimately challenge the direction the GDC is taking us in. The vague commitments laid down in this document make it ill-equipped to address the challenges of the modern world. Issues such as data privacy, encryption, freedoms and human rights were explicitly watered down to pave the way for increased private sector investment and the creation of additional regulatory bodies like the proposed AI panel. In light of these critical issues, it is clear that the GDC risks becoming not just ineffective but a tool that could cause more harm than good in the digital domain. Ariana Grande has put this so perfectly,