classstruggle.tech

My name is Jean. Come and join me on this eclectic journey where tech meets politics, art and humour.

Meta has just released its human rights report for 2023, covering the whole of 2023 from 1 January to 31 December. The report is structured into a few subchapters, including sections on ‘AI’, risk management, issues, stakeholder engagement, transparency and remedy, and something they call ‘looking forward’. As usual, the report seems to serve primarily as a compliance document designed to fulfil regulatory obligations rather than effectuate real change. Substance-wise, the report has nothing to offer except corporate jargon and the co-optation of human rights concepts to give the impression of progressive intentions. This blog aims to dissect Meta's 2023 Human Rights Report and look behind the glossy facade of corporate speak.

On AI

The week before the report’s release, Reuters published a news report detailing how Meta will start training its ‘AI’ models using public content shared by adults on the two major social networking sites they own: Facebook and Instagram. In the same month, Meta also admitted to scraping every Australian adult user’s public photos and posts to train their ‘AI’, but unlike the way they ran this absurdity in the EU, they did not offer Australians any opt-out option. Meta did not address any of this in their so-called human rights report. In fact, the whole section on ‘AI’ reads like it was written by PR practitioners whose main goal is to sell us a single idea: that ‘AI’ models are ‘powerful tool[s] for advancing human rights’, a portrayal that I find both naive and disingenuous.

This short blog will not go into specific cases of how the current ‘AI’ boom has led to shrinking democratic spaces, but examples in India, Bangladesh, Pakistan and Indonesia abound. Not to mention how the ‘AI’ boom has been fuelling the rise of digital sweatshops, where workers, mostly from Global Majority countries such as Kenya and the Philippines, are paid less than $2 per hour to label content. And it did not stop there: Meta went on to fire dozens of content moderators in Kenya who attempted to unionise. These workers, tasked with reviewing graphic and often deeply traumatising content on Facebook, were subsequently blacklisted from reapplying for similar positions with another contractor, Majorel, after Meta switched firms. The output data from moderators is then used to train machine learning models that enhance systems primarily aimed at Western consumers, ostensibly to make these technologies “safer”.

Another often-overlooked consequence of these AI models is the immense energy consumption required by the power-hungry processors that fuel them. Recent numbers from the University of Massachusetts Amherst revealed that the carbon footprint of training a single large language model is roughly 272,155 kg of CO2 emissions. Big Tech’s obsession with ‘AI’ has created a burgeoning demand for the construction of data centres and chip manufacturing facilities, especially in regions of the Global Majority. And because these data centres require significant computational power that generates considerable heat, they demand enormous volumes of water for cooling. These data centres are literally sucking dry the water supplies that local communities depend on for survival.
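
For a rough sense of where such figures come from, the standard back-of-the-envelope estimate multiplies hardware power draw by training time, a data centre overhead factor (PUE), and the grid’s carbon intensity. Every number below is purely illustrative — it is not Meta’s data and not the Amherst study’s methodology:

```latex
% Illustrative sketch only -- all inputs are assumptions.
\mathrm{CO_2e} \approx N_{\mathrm{GPU}} \cdot P_{\mathrm{GPU}} \cdot t \cdot \mathrm{PUE} \cdot I_{\mathrm{grid}}
% e.g. 1000 GPUs x 0.4 kW x 720 h x 1.5 (PUE) = 432{,}000 kWh,
% and 432{,}000 kWh x 0.43 kg CO2e/kWh \approx 185{,}760 kg CO2e
% for a single hypothetical month-long training run.
```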

Now, Meta can argue that these cases were not included because they fall outside the date coverage of the human rights report, but evidence shows that this is not the case. Meta released Llama 2 in July 2023, which they described as ‘open’ in their report. The use of the word ‘open’ here and in their succeeding press releases regarding Llama 3 will require another blog of its own. But one thing is for sure: nothing about the Llama licences makes them open source. The training data for both LLMs were never publicly released. According to Meta, Llama 2 was pretrained on publicly available online data sources, whilst Llama 3 is pretrained on over 15T tokens that were all collected from publicly available sources. Now, Meta is not the only one that is not transparent about this. If you saw the WSJ interview with the OpenAI CTO, you will remember the grimace she made after being asked if OpenAI used videos on YouTube. I do not expect any of these companies to release the training data they used for their large language models, because that would open a can of legal problems for them. Meta’s lawyers, for instance, warned the company about the legal repercussions of using copyrighted material to train its model.

This reinforces my earlier point on why the so-called human rights report is nothing but a press release that attempts to paint a rosy picture of Meta’s commitments while conveniently glossing over the darker aspects of their operations that have significant human rights implications. The selective transparency and regional inconsistencies in user consent practices (such as the stark differences in how users in Australia and the EU are treated) raise a pressing question: where do we, in Southeast Asia, stand? Given that our governments are unlikely to champion our data rights as vigorously as those in the EU (not that those are without their flaws), the risk to our privacy and rights is even more acute. This regional disparity in data protection and user consent highlights a more disturbing trend. Tech giants like Meta can, and do, exploit weaker regulatory frameworks in some regions to sidestep the stringent compliance obligations they would otherwise have to meet in other jurisdictions. This tells us how companies are ethics-washing their policies: deciding on things based on what would provoke the least backlash.

Deflecting responsibility

The section on ‘risk management’ reeks of posturing and selectivity. The mention of the UN's Convention on the Rights of the Child (CRC) as the backbone of their “Best Interests of the Child Framework” is at best token, especially when set against their commercial priorities. A platform that is fundamentally driven by user engagement and data monetisation cannot prioritise the well-being of adults, let alone children. Meta has recently introduced ‘teen accounts’ that “will limit who can contact teens and the content they see, and help ensure their time is well spent.” It sounds good on paper, but what Meta is actually doing here is deflecting responsibility to Apple and Google. Meta is pushing for mobile platform providers to enforce app installation approvals, which basically offloads the burden of safety measures to other companies. While it should go without saying that Apple and Google do indeed have a crucial role in managing the ecosystems their platforms support, it is imperative for app developers, particularly those like Meta whose apps are used by millions of children worldwide, to take primary responsibility for the safety features within their own products. About time you owned your responsibility, Meta. Stop deflecting and start owning the consequences of your business model.

Censorship and content moderation

Meta’s content policy has been under fire for quite some time now, especially for its complicity and failures that directly contributed to the Rohingya genocide and to the violence against Ethiopia’s Tigrayan community. More recently, following the events of 7 October, Meta has once again proven itself incapable of adhering to its own standards and promises by further censoring Palestinian voices. Meta's pattern of policy enforcement has been overly restrictive against pro-Palestine content, whether through deletion or shadowbanning. Back in 2021, Meta commissioned Business for Social Responsibility (BSR) to conduct rapid human rights due diligence. The BSR report found that “Meta’s actions in May 2021 appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred.” However, events in October 2023 have shown Meta’s insufficient follow-through on its prior commitments. If anything, they exposed a worsening crisis that highlights the company’s capacity (or lack thereof) to enact impactful changes to its content moderation strategies.

In response, Meta has introduced updates, as noted in its Meta Update: Israel and Palestine Human Rights Due Diligence report. The most notable of these was the replacement of the term 'praise' with 'glorification' in its content restriction policy. But these definitions remain overly broad and subject to varying interpretations based on cultural, social, and political contexts. This is made even more problematic by the requirement for users to “clearly indicate their intent.” Expecting users to preemptively clarify their positions is unrealistic and places an undue burden on those already vulnerable to suppression. Meta’s updates do little to address the deeper structural flaws in a content moderation strategy that continues to operate with a heavy-handed approach.

Another example is Meta’s decision to ban the term ‘zionist’. This decision is anchored in the conflation of legitimate criticism of a political ideology with hate speech. This very conflation serves to immunise the Israeli government from legitimate scrutiny under the guise of preventing hate speech. By applying a blanket hate speech label to all critical uses of “zionist” (read Ilan Pappe’s Ten Myths About Israel) without sufficient contextual differentiation, there is a significant risk of branding scholars, human rights activists, and critics of Israeli policies as antisemitic. Not only does this approach stifle necessary dialogue, it can also be exploited to deflect from genuine human rights discussions. If criticisms of a political nature are too readily classified as attacks on Jewish identity, then all discussions of human rights abuses, international law, and the humanitarian impacts of the Israeli occupation could be unjustly censored.

Final thoughts

I feel like I should end this short blog by declaring that the arguments made here are not exhaustive. A counter-report would be needed if we wanted to scrutinise every point made in Meta’s report. Yet, if there's one critical takeaway for you, the reader, it's this: Meta's 2023 human rights report is an exemplary case of corporate doublespeak, artfully crafted to masquerade compliance as commitment. But this should not surprise us, especially from a company that profits immensely from the very practices that pose risks to human rights. And this assessment doesn’t just come from us. The International Trade Union Confederation named Meta as one of the main culprits in facilitating the spread of harmful ideologies worldwide, particularly in weaponising its platforms for the dissemination of far-right propaganda. Meta's aggressive lobbying, on which it squandered 8 million euros in the EU alone, circumvents accountability to ensure its profit machine steamrolls over any democratic control or oversight. This is a deliberate assault on democracy. And at the core of these issues is Meta's relentless drive for profit.

The business model incentivises invasive data practices and the commodification of personal information. The real issue at hand is not just the individual failures in AI application, content moderation, or crisis management, grave as these are. The problem is the overarching business model that drives these failures. Until Meta confronts this root cause, every human rights report it issues will be nothing more than a smokescreen that hides the unpalatable truth of its operations. Meta operates with the arrogance of a quasi-state, thriving on an architecture of surveillance capitalism that exploits users with impunity. It is imperative now, more than ever, that robust and enforceable regulations are implemented to curb the pervasive influence of all Big Tech companies. These measures are crucial to dismantle their overreach and ensure they are held accountable for their impact on society and individual freedoms.

In an era dominated by digital interactions and transformations, the discourse around digital rights has increasingly become steeped in ideologies anchored in the tenets of individualism. We have been told over and over again that these frameworks were designed that way to champion personal freedoms. Yet recent events like this, this, this and this suggest quite the opposite. If anything, the frameworks we have today are actually just masking a profit-driven agenda that aims to commercialise every facet of our digital existence and perpetuate colonial legacies of domination and exploitation under the guise of “modernisation” and “progress”. This deeply ingrained individualistic focus in the realm of digital rights not only sidelines our collective needs; it actively participates in the neoliberal assault on communal structures and, in the longer term, on environmental sustainability. This blog seeks to scrutinise the limitations of our current understanding of digital rights and explore alternative approaches that prioritise collective welfare and environmental health. As our planet faces record-high temperatures and escalating environmental crises, it's imperative that we shift our focus from the 'me' to the 'we'.

Back in 1995, Nicholas Negroponte predicted that the Internet would “flatten organisations, globalise society, decentralise control, and help harmonise people”. Quite a utopian picture of digital connectivity, starkly different from our reality in 2024. Instead of being a democratising power, the digital landscape has exploited user data for profit, with a few dominant players wielding significant influence over how data is used and monetised. Much of the Internet we see today can be compared to emerald mining, a process that often involves the extraction of valuable resources under conditions that are far from equitable. The digital realm thrives on the same logic of data extraction, often without users’ explicit consent or fair compensation. This mirrors the exploitation of the colonial era, when resources were taken from local communities for the economic advantage of more powerful entities, leaving those communities impoverished and their environments depleted.

And as we observe Indigenous Peoples Day this month, it is fitting to reflect on the parallels between historical colonisation and the digital exploitation unfolding right before our eyes. Today, we see a similar scenario play out where vast amounts of data are harvested from people worldwide. And from these datasets emerges a lesser-known, labour-intensive layer of digital sweatshops fuelled by the likes of Amazon Mechanical Turk. In countries like the Philippines, India, and Kenya, workers are employed under harsh conditions to process and label these enormous data pools, tasks that are essential for training AI systems. Such labour is often tedious and poorly paid, yet it is the backbone of the sophisticated algorithms we see on search engines and other digital platforms. Echoing the sentiment of a popular 90s song, it's profoundly ironic that those who toil to advance cutting-edge technologies often do so in circumstances that starkly oppose the futuristic applications they help create.

This prevailing “me, me, me” agenda in the digital realm further exacerbates the issues highlighted above, where the individualistic ethos not only promotes but necessitates a self-focused view of technology. But this cultural shift is not just a byproduct of the natural course of tech evolution. Rather, it has been actively engineered by public policy and commercial interests—as noted by Greenstein in his book “How the Internet Became Commercial”. Corporations created environments that encourage constant connectivity and self-promotion, directly influencing how we conceive of concepts like digital rights, and turning them into matters of personal concern rather than communal responsibility.

Take privacy as an example. Privacy is not merely a personal choice, despite how often it is framed that way. Privacy is a social predicament, which means one person’s decisions regarding their data can have far-reaching consequences for others. I am once again citing Kasper (2007), who argued that “[p]rivacy is a socially created need, and without society, there would be no need for privacy”. One person’s choice of an app can jeopardise the privacy of others. When enough people use a non-secure service, it becomes a norm, making it harder for others to choose more secure options without sacrificing social or professional connections.
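
To see how quickly such individual choices harden into a norm, here is a toy, Granovetter-style threshold simulation. It is entirely illustrative: the population size, thresholds, and seed are made-up assumptions, not measurements from any study:

```python
import random

# Toy threshold model: each person joins the insecure-but-popular app
# once the fraction of society already on it exceeds their personal
# comfort threshold. All parameters are invented for illustration.
N = 1000
random.seed(42)
thresholds = [random.uniform(0.0, 0.9) for _ in range(N)]
adopted = [t < 0.05 for t in thresholds]  # a handful of early adopters

changed = True
while changed:
    changed = False
    share = sum(adopted) / N  # current social pressure
    for i in range(N):
        if not adopted[i] and share >= thresholds[i]:
            adopted[i] = True
            changed = True

print(f"Final adoption: {sum(adopted) / N:.0%}")
# A small seed of early adopters cascades into near-universal adoption:
# each signup raises the pressure on everyone who held out.
```

Under these made-up parameters the cascade runs to completion, which is the point: once “everyone is on it”, the social cost falls on opting out, not opting in.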

Privacy is also highly influenced by one’s position in the social hierarchy. The richer you are, the easier it is to obtain a higher level of privacy than those lower down the social ladder. This commodification of privacy creates a false dichotomy between those who can afford privacy and those who cannot. A tale as old as time. When we treat privacy as a purchasable good, we marginalise those who lack the resources to buy into these protections. And by framing privacy as an individual choice, society implicitly blames those who cannot afford privacy-enhancing tools for their lack of privacy. This perpetuates the false idea that privacy is a matter of personal responsibility and capability, rather than a systemic issue rooted in economic inequality. It reinforces the notion that people attain privacy simply because they choose to, and it ignores the financial and social barriers that prevent many from securing their personal data.

Many people are not equipped with the knowledge to make informed decisions about their privacy. Some do not understand the trade-offs they are making by sharing their personal information in exchange for free services. This gap in knowledge, and the individualistic push, highlight a significant divergence from the long-term thinking and sustainability prioritised by indigenous traditions, which often consider the impact of actions on future generations. To rectify this, we can draw on the collective-focused principles of many indigenous cultures, such as the Māori’s whanaungatanga, the Igorot’s og-ogbo and the Minangkabau’s gotong royong. These groups prioritise collective well-being over individual success, a stark contrast to the self-centred, narcissistic approach we see every day as we browse Instagram’s Explore tab or TikTok’s Discover page. For these cultures, decisions about community resources are made collectively. This reflects a deep commitment to the entire community's welfare, which could inform a better approach to digital privacy, and to digital rights more generally.

In the face of a digital rights framework dominated by commodification, consumerism and individualism, there is an urgent need to pivot towards a decolonial and degrowth approach to digital rights. The Euro-American-centric paradigms that have long dominated and distorted our approach to digital interaction have failed the global majority. These frameworks perpetuate a colonial legacy and dictate terms and conditions from a viewpoint that aligns with Western interests at the expense of local and indigenous practices. In various communities across Asia and Africa, for example, data and digital resources are traditionally seen as collective assets, integral to the welfare and advancement of the entire community rather than just the individual. This communal approach to digital resources is evident in practices such as community-managed cooperative mobile networks in South Africa and Mexico, where the technology is maintained and used by the community to ensure that all members have access. Such models stand in stark contrast to the individualistic, privatised approach, where data is often siloed and monetised on an individual basis, leaving control largely in the hands of corporate entities.

Another critical framework that can guide the rethinking of digital rights and advocate for a shift away from unsustainable consumption is degrowth. Degrowth challenges the relentless drive for technological advancement and data accumulation. Instead, it proposes that we prioritise ecological sustainability and human well-being over corporate profits. At the heart of the degrowth argument is the call to curb unnecessary data collection, which is critical in an era where the over-collection and exploitation of data are rampant. This would mean that data collection is limited strictly to what is necessary for the functionality of services, rather than for surplus value extraction through surveillance capitalism. We need to reorient our relationship with technology so that it aligns with the principles of human rights and environmental sustainability. As long as these problematic business models persist, we are trapped in a destructive cycle where companies often play the dual roles of arsonist and firefighter.
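
As a concrete, purely hypothetical sketch of what “only what is necessary” can mean in practice, compare a surveillance-style signup record with a functional minimum. The field names and the service are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SurveillanceSignup:
    """The status quo: surplus extraction far beyond what login requires."""
    email: str
    phone: str
    birthdate: str
    contacts: list[str]        # harvested address book
    gps_history: list[tuple]   # location trail
    ad_interests: list[str]    # behavioural profile

@dataclass
class MinimalSignup:
    """A degrowth-aligned schema: the functional minimum."""
    email: str  # needed only for login and account recovery

def register(email: str) -> MinimalSignup:
    # Data that is never collected cannot later be leaked,
    # subpoenaed, or monetised.
    return MinimalSignup(email=email)

print(register("user@example.org"))
```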

As we move away from individualistic data ownership models to collective data governance, we need to ensure that digital resources are managed in ways that benefit entire communities rather than individual corporations. This could involve community-controlled data trusts that prioritise transparency and equitable access. We need a radical reevaluation of how digital technologies are developed, deployed, and discarded, emphasising moderation, regulation, and the minimisation of digital footprints. Integrating indigenous perspectives into digital rights discourse can provide valuable insights into how digital technologies might be harmonised with cultural practices and communal values, offering a more holistic approach to privacy and data protection.

Digital rights policies should align with broader sustainable development goals, ensuring that digital growth does not come at the expense of environmental degradation or social inequality. This could mean imposing stricter regulations on the energy consumption of data centres or designing technologies that are both energy-efficient and accessible to economically disadvantaged communities. To combat the monopolistic control of tech giants, supporting decentralised and open-source technologies can empower smaller businesses and communities. This would help reduce the concentration of power and promote a more democratic digital landscape.

By addressing these aspects, the discourse on digital rights can shift towards a model that is not only anti-capitalist but also decolonial and aligned with degrowth principles. This would foster a digital environment where technologies serve the collective good, ensuring fair access and sustainable practices that respect both human and environmental rights. The challenge for us, activists, lies not just in resisting the commodification of digital spaces but in reimagining them.

Below is the speech I delivered on behalf of Manushya Foundation during the United Nations’ so-called multistakeholder information session on the GDC:

I am Jean Linis-Dinco of Manushya Foundation. We appreciate the opportunity to speak as we critically evaluate the third and latest revision of the Global Digital Compact. This document, released under the silence procedure, has provoked considerable discourse, leading to member states breaking their silence due to contentious points within the draft.

We are compelled to speak out because the subtle yet profound changes from the previous drafts signal a dangerous shift towards centralisation and bureaucracy that contradicts the decentralised nature that has made the Internet a bastion of freedom and innovation. This push towards centralisation mimics heavy-handed governance models that have only proven to stifle free expression and restrict access to information. Furthermore, the introduction of additional bureaucratic structures, despite the existence of competent bodies already addressing these issues, is redundant.

We are particularly alarmed by the reliance on corporate self-regulation. This method has consistently failed us, serving corporate interests at the expense of privacy and ethical conduct. Moreover, the diluted language concerning human rights and the marginalisation of civil society’s role in this draft are unconscionable. Civil society is the guardian of public interest, yet this draft relegates it to the periphery, favouring instead a top-down approach that centralises power and silences dissenting voices. The weakening of the role of the Office of the High Commissioner for Human Rights represents a profound failure of responsibility. It reduces a vital watchdog to a token participant, undermining its ability to challenge and address human rights abuses in the digital realm. This is not just a step back; it is a leap into dangerous territory where human rights are not guiding principles but mere afterthoughts.

This compact, as it stands, is a recipe for increased surveillance, censorship, and repression under the guise of digital cooperation. It uses vague language that some countries can and will exploit to justify their crackdowns on digital freedoms.

We, at the Manushya Foundation, urge the co-facilitators and member states to amend this draft and to engage in a truly inclusive, transparent process that places human rights, transparency, and the global public interest at the core of digital governance. We owe it to the global citizenry to ensure that their rights are not traded away on the altar of expediency or political convenience.

Thank you for your attention. We look forward to engaging in a process that respects the voices of all stakeholders, not just states, and protects the digital rights of every citizen, not just the interests of the powerful few.

And here’s me in a screenshot taken by WSIS :D

The United Nations has finally made public the third and latest revision of the Global Digital Compact, which it originally published under the silence procedure. This procedure gives member states 72 hours to raise concerns and break their silence. Should they fail to, the text will be adopted as is. As of 17 July, more than 10 member states have broken silence over controversial paragraphs. Co-facilitators Sweden and Zambia have since scheduled informal consultations for 17 August 2024, roughly six days away at the time of writing this blog. The Compact is meant to be adopted as an annex to the “Pact for the Future” during the “Summit of the Future” in New York on 21 September.

I got a copy of the third revision just this weekend. The transition from the second to the third draft of the GDC is subtle—so subtle that I needed to print out the two drafts and compare them side by side just to notice the differences. Subtle as they are, the shifts in language and emphasis are significant, and they could have profound implications for the protection and promotion of human rights globally. To avoid repeating arguments that have already been made against the GDC, I will offer five short key points on why the GDC is weak and redundant and, most importantly, on how it opens doors for authoritarian regimes to flourish in the digital age.

There is nothing particularly groundbreaking about the GDC. I have yet to find a reasonable rationale as to why Guterres would even propose such a ludicrous document just as he is on his way out. Basically, everything written in the document has been discussed within the walls of the IGF, WSIS, IETF and W3C. Its call for the creation of another Scientific Panel (parag 55), this time for AI, particularly stands out. Given the plethora of existing bodies dealing with similar issues (like UNESCO and the ITU), it is worth asking why we need another panel on emerging technologies. Is it Guterres’ way of cementing his legacy? Bad idea, if it is. But I guess we’ll never know. The creation of another panel does not happen without substantial funding, expert recruitment, and administrative support. For an organisation that repeatedly calls for funding, I am certain they could find programmes where it would be more prudent to allocate resources towards more direct interventions in technology policy and implementation.

The GDC’s objective to centralise Internet governance in New York is the very antithesis of the qualities that have made the Internet thrive. Piling additional bureaucracy and centralisation onto Internet governance is never the way to go. For this, we need only look at the development and evolution of foundational internet protocols like TCP/IP and standards like HTML and DNS—the very cornerstones of the Internet’s architecture. These were not the products of a single, centralised authority. Rather, they emerged from broad, collaborative efforts involving diverse stakeholders across various sectors and nations. Guterres’ current proposal is not only detrimental to the foundational principles of the Internet; it also has far-reaching implications for the future. How? Well, the obsession with top-down models of Internet governance seen in countries like Russia, Iran, North Korea and China has shown us one thing: government control over the Internet leads to severe limitations on access and widespread censorship. By centralising global Internet governance, we risk replicating these failures on a global scale.

Second, the GDC apparently subscribes to the notion of technological determinism—the view that technology is the primary cause of social change. A determinist view of technology downplays the role of the decision-makers behind these technologies. It makes the investors, the venture capitalists, the policymakers—the very people who make critical decisions about the design and deployment of technology—invisible to the public eye. David Nye, in his book “Technology Matters: Questions to Live With”, argued that people have historically shaped technologies to fit their needs, rather than being merely shaped by technology. The GDC must therefore focus on regulating the actors behind the technology instead of putting all its eggs in the basket of regulating the technologies they create.

The heavy reliance on corporate self-regulation (parags 25, 31(a, b, c), 35(a, c)), which we know by now is just a euphemism for a ‘do what you want or you will get a slap on the wrist’ approach, underscores the urgent need to create a framework that can cut through regulatory capture. This means the United Nations Guiding Principles on Business and Human Rights framework is already out of the conversation. If any of these “we call on digital technology companies and social media platforms” appeals had worked in the past, we would not be in the pandemonium we are in today. History is littered with examples of how, without mandatory compliance measures and sanctions for violations, reliance on corporate goodwill is a recipe for failure. Various tech companies, such as Meta, OpenAI, Google, Amazon and Microsoft, amongst many others, have been implicated in privacy breaches and unethical practices despite the existing guidelines.

The GDC mentions the private sector more than it mentions civil society in its third revision. This is one thing the GDC has excelled at: it shows us whose interests are truly being served and the potential consequences for equity and sustainability in the global digital landscape. As businesses whose main goal is to accumulate as much profit as possible, they will advocate for less regulation and oversight in areas where greater control is necessary, such as data privacy and security. It would not be an overstatement to claim that the current version of the GDC is a win for state overreach and capitalism.

Third, much like a thief who operates under the veil of darkness to avoid detection and accountability, the manner in which this draft was circulated among ‘stakeholders’—which, by the way, is just another term for member states—bypasses the broad and inclusive feedback mechanisms that had characterised earlier revisions. If anything, this approach tells us one thing we must prepare for. Just as the voices of civil society and human rights organisations were sidelined during the Cybercrime Treaty negotiations, the future of human rights activism in the United Nations is grim. We must prepare for a system where the very people who are most attuned to the on-the-ground realities of digital rights and human rights will be deliberately pushed to the margins of the discussion. This shift towards state-centric governance threatens to erode the very foundations of the UN’s commitment to human rights, much as a night-time theft would leave a community feeling vulnerable and violated. It is hypocritical of the United Nations to call for “civil society groups to endorse the Compact and take active part in its implementation and follow-up” (parag 65). Calling for civil society to endorse a document from which it was deliberately marginalised shows a disingenuous effort to appear inclusive while structuring the Compact in a way that inherently favours more powerful stakeholders. For this, there is only one thing I can say:

Fourth, a quick run-through of the language on human rights in both drafts shows a significant weakening of the integration of human rights into technology governance. In short, it dilutes the commitment to human rights as foundational principles. For a pact that bills itself as ‘global’ in nature, the lack of specificity regarding how human rights will be upheld throughout the lifecycle of digital and emerging technologies leaves much open to interpretation. And as we have seen in the recently passed Cybercrime Treaty, broad and often vague language used in defining objectives such as fostering “an inclusive, open, safe, and secure digital space” (parag 32) is very likely to be exploited by authoritarian governments to justify stringent control over digital ecosystems under the guise of national security or cultural preservation. China’s Cybersecurity Law uses the same language to ensure its “cyberspace sovereignty”. Authoritarian governments love vague terminology because they can change its meaning whenever it is convenient. Much like Viet Nam’s Cybersecurity Law, which uses ambiguous language on what constitutes a threat to “national security” to justify strict controls over online content and surveillance of digital communications. This flexibility allows such regimes to interpret these terms in ways that lead to greater censorship, surveillance, and restriction of digital freedoms.

This inconsistency in the GDC is even more apparent now that references to the “non-military domain” in the second revision (parags 13(e), 20, 21(i) and 49) have been discarded. This means the GDC has blurred the lines between civilian and military cybersecurity measures. Military-focused cybersecurity prioritises national security and often justifies extensive surveillance, data collection, and even the suppression of information under the guise of security. When applied in civilian contexts, such measures can lead to pervasive surveillance of ordinary citizens and infringe on their human rights. The indiscriminate collection of data, justified under the guise of cybersecurity, will result in the monitoring of political dissent, the targeting of activists, and the erosion of civil liberties.

And if that wasn’t enough, the GDC also watered down its previous calls for cybersecurity-related capacity building (parags 13(e), 21(i)) in the third revision. The absence of specific cybersecurity initiatives could lead to weaker defences against cyber threats that target vulnerable populations. Here, again, activists, journalists, and human rights defenders are particularly at risk. If cybersecurity measures are not prioritised, we are looking at more targeted attacks designed to silence dissent and curb freedom of speech. This leads me to my next point: the underplaying of the need to address state surveillance and data privacy in the digital age. Recent events have revealed how states’ use of surveillance technologies has become too pervasive, often without adequate judicial oversight and checks and balances. The Pegasus spyware is the perfect example here, as it has led to widespread violations of privacy and has been linked to crackdowns on dissent in countries like Mexico, Saudi Arabia, Indonesia and India. By relegating the issue to a single, brief clause in parag 30(d), the GDC not only fails to recognise the significant human rights risks posed by unchecked surveillance practices but also omits any actionable commitment to ensure robust protection of human rights in the face of technological advancement. The GDC is nothing but a paper full of word salad.

The mere mention of “international law” in 30(d) is used to sidestep real accountability. Just look at Russia’s invasion of Ukraine, Saudi Arabia’s human rights violations in Yemen, China’s surveillance in Xinjiang, and Israel’s self-serving justifications for settlements and security measures. These examples are clear as daylight. The lack of explicit commitments to specific human rights instruments will only set the stage for states to continue their egregious abuses under the flimsy guise of compliance with international law. This glaring omission leaves a critical gap in human rights protections, effectively giving states a free pass to violate privacy and other fundamental rights. And don’t get me started on the lack of acknowledgement of the ongoing debates around encryption backdoors, which some governments are aggressively pushing for. These backdoors would not only obliterate privacy rights but also penalise activism at its core. The GDC’s silence on this crucial issue is deafening.

Another alarming change from the second to the third revision is how the role of the Office of the High Commissioner for Human Rights (OHCHR) has been changed. The second draft explicitly notes the OHCHR's efforts to provide expert advice and practical guidance on human rights and technology:

“We take note of OHCHR’s ongoing efforts to provide, upon request, expert advice and practical guidance on human rights and technology issues to governments, the private sector and other stakeholders, including through the establishment of a UN Digital Human Rights Advisory Service within existing resources.” (parag 24)

This clause has been weakened to a mere acknowledgement in the third revision (parag 24):

“We acknowledge OHCHR’s ongoing efforts to provide through an advisory service on human rights in the digital space, upon request and within existing and voluntary resources, expert advice and practical guidance on human rights and technology issues to governments, the private sector and other stakeholders”

You can see here how the GDC has scaled back and put strict limitations on the OHCHR’s capacity to influence by removing the reference to the Advisory Service. To me, it sounds more like “Yes, OHCHR, you released a report. Thank you. Next.” And the phrase “upon request” fundamentally positions the OHCHR's intervention as reactive rather than proactive. Basically, “if you are not being asked, shut up.” A reactive advisory model is nothing more than a token that states will use as they please. The lack of a permanent, dedicated structure also means that human rights will take a back seat.

Lastly, the biggest elephant in the room: technological imperialism. With all its mentions of ‘South-South’ and ‘North-South’ terminology, the GDC appears to be attempting to address disparities in technology across countries. But a careful analysis of the paragraphs illustrates how the GDC may perpetuate, if not exacerbate, tech imperialism. Paragraphs 19, 21(b) and 28(a) call for an “enabling environment” that supports innovation and digital entrepreneurship. Sure, it sounds appealing, but this framework hinges on two things: partnerships and technologies that are primarily developed in high-income countries. Following this logic, developing countries will be made extremely dependent on foreign technology and expertise—which, in most cases, never aligns with their local needs or capacities. The phrase “mutually agreed terms”, mentioned five times, highlights this implicit risk. We know how “terms” often translate to terms dictated by the more powerful party, especially when there is a significant power imbalance between the countries involved. With regard to technology, this means terms created by large tech firms and highly developed countries, leaving developing countries little choice in the matter.

The lack of any enforceable mechanisms in the GDC, as shown in the previous paragraphs of this blog, will only ensure that global data governance initiatives are co-opted by powerful nations and corporations. Parag 62 specifically calls for “increased investment, particularly from the private sector and philanthropy, to scale up AI capacity building for sustainable development, especially in developing countries.” This is a textbook example of how market concentration and monopolies start. The involvement of large multinational tech corporations in AI capacity building will lead to a concentration of technological resources and expertise in the hands of a few. It will also stifle local competition, as these corporations outcompete or acquire local startups and companies that lack similar resources or scale. We have seen this in the mobile telecommunications market in Africa, where large international companies have established dominant positions, making it difficult for smaller local companies to compete effectively. In the AI domain, if companies like Microsoft or Google lead their versions of capacity-building efforts, they will most likely use their own technologies. This is a gateway for companies to dominate AI markets and infrastructure in developing countries by setting standards and controlling the ecosystem in ways that favour their business models and products. And we haven’t even gotten to the peak of the problem yet. We know that private sector-driven AI development requires extensive data collection and processing, and the data generated in developing countries is often exploited by multinational corporations. A good example of this is Facebook’s Free Basics in India, which created a walled garden of internet services while potentially accessing a wealth of user data under the guise of providing free internet access.

The Global Digital Compact, as it stands, is not just an ineffective tool for safeguarding digital rights. That much is a given. If the manner in which the GDC has been developed and revised is not enough for you to despise this document, I hope that I have provided you with at least four more arguments to strengthen your aversion. The shortcomings in the document point to a problem with how policymakers at the UN conceptualise digital governance. The intense drive towards centralising Internet governance mirrors the authoritarian tendencies that suppress open and free communication. These issues provide a compelling basis to critically evaluate and ultimately challenge the direction the GDC is taking us. The vague commitments laid down in this document make it ill-equipped to address the challenges of the modern world. Issues such as data privacy, encryption, freedoms and human rights were explicitly watered down to pave the way for increased private sector investment and the creation of additional regulatory bodies like the proposed AI panel. In light of these critical issues, it is clear that the GDC risks becoming not just ineffective but a tool that could cause more harm than good in the digital domain. Ariana Grande has put this so perfectly,

Cybersecurity Awareness Month is still two months away, but given the importance and urgency of this topic, I thought I’d write about the UN Cybercrime Treaty. And as the negotiations for the treaty draw to a close on 9 August, the stakes couldn’t be higher. This treaty was initially proposed by Russia and is now under the management of the UN Office on Drugs and Crime. It promises to strengthen “international cooperation” against “cybercrime”, which to my cybersecurity ears translates to: “We want more power. And we want a UN stamp on it.” Every detail buried within the treaty’s provisions is a step closer towards government overreach, especially in the areas of surveillance, data collection, and criminalisation. I’ve listed here three reasons why the treaty is no friend of human rights activists.

First, the treaty mandates expansive powers for data preservation and access (see Articles 25-29). Basically, this legitimises state surveillance. The mandate for “expedited” data preservation, and the ability of the state to continually renew orders to preserve electronic data, sets a precedent for perpetual surveillance. There is also no definition of what constitutes “grounds to believe”. At this point, are we just meant to ask the mirror on the wall?

I am sure the mirror will have a hard time answering—not because there is no one, but because there are too many. The lack of definition gives authorities unchecked power to justify indefinite data preservation. This blatant overreach tramples on privacy rights and creates a chilling effect on free speech. It might as well be the end of investigative journalism as we know it. Article 27(b) also allows the state to force a service provider to divulge information related to the case being investigated. And history is littered with good examples of why this is a bad idea.

In Vietnam, Facebook bent its knees to the government faster than Jon Snow bent his to the Targaryen Queen. It was noted that Facebook has “been making repeated concessions to Vietnam’s authoritarian government, routinely censoring dissent.” In Iran, authorities have used private Telegram chats, phone logs, and text messages to incriminate activists, as seen in the case of Negin, who was interrogated and threatened with execution. In Pakistan, the government issued the Citizens Protection (Against Online Harm) Rules, 2020, which forced service providers to hand over data and personal information as requested by the country’s Inter-Services Intelligence. Not to mention how the broad definitions of crimes and the powers granted to prosecute “cybercrime” could be misused to target activists, journalists, and dissidents under the guise of national security. And as China seeks to expand the definition of cybercrime to include “fake news” online, we may be entering a time when the delicate balance between enforcing public order and curbing free speech grows increasingly indistinct.

“Illegal access” (Article 7) could also be interpreted to include the activities of journalists accessing information for public interest reporting. Late last year, Delhi police carried out raids on the office of NewsClick, a news outlet highly critical of Narendra Modi. The houses of almost 50 journalists, activists and comedians in India were also raided under an ‘anti-terrorism’ law that allows charges for “anti-national activities”. In the Philippines, a similar ‘anti-terrorism’ law has been used to surveil environmental activists; at least 281 environmental defenders were killed in the Philippines between 2012 and 2022. In Jordan, the situation is particularly severe for LGBTI individuals, as the cybercrime law prohibits content that “promote, instigate, aid, or incite immorality.” The Jordanian law also bans the use of Virtual Private Networks (VPNs), proxies, and Tor, and this prohibition forces many LGBT individuals to choose between maintaining their identity’s security and freely expressing their opinions online.

Second, the treaty’s provisions for international cooperation (specifically Article 37) do not sufficiently safeguard against the extradition or transfer of individuals to countries where they might face political persecution. Paragraph 15 mentions ‘substantial grounds’, but the phrase is not clearly defined. Again, this lack of clarity will lead to individuals being extradited for politically motivated reasons. Article 3 of the CAT supports the prohibition of extradition to countries where individuals would face serious risks to their life or freedom.

The treaty is also paving the way for states to create a digital autocracy in which governments can compel service providers to preserve data and to provide such data to authorities without stringent oversight. A treaty that facilitates international cooperation on data sharing and broadens the scope for surveillance can also become a tool for governments to crack down on minorities. The ability to access and preserve electronic data without robust safeguards (Article 41 and Article 42) can be exploited to target marginalised communities, such as ethnic, religious, and LGBTQ+ groups. In countries like Russia or Uganda, where the state has a history of using legal frameworks to persecute LGBTI individuals, the ability to monitor, intercept, and collect digital communications under the pretence of preventing “cybercrime” could lead to identifying and prosecuting individuals based on their sexual orientation or gender identity. But to these countries, these people will just be collateral damage.

Paragraph 14 and Paragraph 9 of the treaty’s Article 37 present a contradiction. While paragraph 14 guarantees fair treatment and the enjoyment of rights and guarantees provided by the domestic law of the state party, paragraph 9 encourages states to simplify evidentiary requirements to expedite extradition procedures. Simplifying evidence standards compromises the accuracy and fairness of proceedings, which in turn erodes due process rights. The recently concluded case of Julian Assange exemplifies the issues within Article 37. The case of Ola Bini further illustrates these risks. Bini was detained at Quito’s Mariscal Sucre International Airport as he was preparing to travel to Japan for a vacation. The arrest occurred without clear or sufficient evidence, and Bini was held in custody without formal charges.

While the treaty mentions the words “human rights” seven times, it lacks concrete procedural safeguards against the misuse of the powers it grants to state parties, reducing the invocation of human rights to nothing more than hollow rhetoric. The provisions for search, seizure, and interception of data, as defined in Article 28, do not clearly require judicial oversight or other independent review mechanisms. This exposes the treaty as a clear conduit for unchecked governmental overreach and egregious violations of due process rights (uhm, can the UN people please refer to Article 14 of the ICCPR–a UN document?). Take Indonesia as an example. West Papuan human rights defenders often face significant challenges due to heightened surveillance and frequent seizures of their communication devices such as phones, laptops, and hard drives. This practice not only undermines due process but also poses a direct threat to the protection of civil liberties, operating in a legal grey area that facilitates potential abuses.

A specific provision within Article 28(3)(d) grants the state an alarming authority to “render inaccessible or remove” data within accessed information and communication systems. This clause is not just about access; it grants the state the power to alter or delete data. This has severe implications for information integrity and individual rights and sets a precedent for data manipulation without stringent oversight mechanisms in place. Such actions could irreversibly affect data integrity and availability and could be misused to alter evidence. Article 28 is deeply troubling. Clause (4), in particular, mandates individuals with system knowledge to assist in state investigations. If the coercion includes threats of legal penalties, including imprisonment for non-compliance, it violates the right to freedom of thought—an absolute human right. Article 28, as it currently stands, is a serious assault on fundamental human rights and an abomination to these principles. It contravenes long-standing rights protections enshrined in the ICCPR, including Articles 17, 19, 14 and 9.

Given that “cybercrime” can be politically charged, individuals could be unjustly targeted for their online activities that are critical of governments. Saudi Arabia’s sweeping Anti-Cyber Crime and Counter-Terrorism laws have been used to harshly penalise peaceful protesters, such as Nourah al-Qahtani, who was sentenced to 45 years for her social media posts. These laws, enacted in 2007 and 2014, are intentionally vague, allowing the government to arrest individuals under broadly defined charges like “tearing the social fabric” or “violating public order.”

Third, the treaty mentions that it “acknowledg[es] the right to protection against arbitrary or unlawful interference with one’s privacy, and the importance of protecting personal data”. Sure, then we don’t have a problem anymore, right?

No, Padmé. You are wrong. The key phrase here is “unlawful interference with privacy”. And with the recent anti-encryption campaigns and legislation sweeping across Europe and the Five Eyes, we know governments will find ways. In fact, Articles 27 and 28 implicitly discourage encryption practices by facilitating access to stored data. The weakening of encryption and anonymity endangers human rights defenders, journalists, and minorities. As we have seen in the past, vague laws and treaties only mean one thing: governments can do whatever they want. The only difference with this one is that they will have a treaty that protects them. They will justify intrusive surveillance measures under the pretext of national security or fighting cybercrime, at the expense of a person’s privacy and freedoms, without accountability. The treaty eerily follows the UK’s Snooper’s Charter. The Charter, passed in 2016, allows authorities to retain emails and electronic communications indiscriminately and requires private companies to store this data. The Snooper’s Charter is now having a facelift to expand the government’s access to large personal datasets, potentially allowing broader and more flexible use of personal data.
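
To make concrete why data-access mandates and encryption sit in tension, here is a minimal sketch of client-side encryption using the Python cryptography package. The scenario and the data are invented for illustration; nothing here comes from the treaty text:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The user's key is generated and kept on their device only.
user_key = Fernet.generate_key()
ciphertext = Fernet(user_key).encrypt(b"a source's identity and location")

# A provider storing only ciphertext has nothing readable to hand over
# under an Article 27-style disclosure order:
print(ciphertext[:32], b"...")

# Decrypting with any key other than the user's fails outright --
# which is precisely why data-access mandates create pressure to
# weaken or backdoor the encryption itself.
try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)
except Exception as exc:
    print("Unreadable without the user's key:", type(exc).__name__)

# Only the key holder can recover the plaintext.
print(Fernet(user_key).decrypt(ciphertext))
```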

So what exactly is the point of this blog? What is even the point of criticising this treaty when some governments all over the world are doing it? The point is simple. Just because some governments are openly spying and jailing their journalists and human rights activists doesn’t mean we need a treaty to legitimise it. The UN was established to serve as a global platform where checks and balances can be applied not just within countries but across international borders. For millions of people around the world who are victims of authoritarian regimes, the UN is the only platform where they can advocate for their rights and a platform to air their grievances. And now, they are trying to take that away. The UN Cybercrime Treaty, under the guise of promoting international cooperation against cybercrime, is fraught with potential for abuse and overreach. It infringes on privacy rights, free speech, and the freedoms of activists, journalists, and minority groups.

But more than anything else, this Russia-backed treaty aims to normalise digital autocracy by channelling these efforts through the UN to create an illusion of universality and necessity. By providing tools and justifications for digital monitoring and data collection, the treaty can aid in the establishment of a digital autocracy backed by the United Nations. The notion of the internet as a free and open space is already fading, but with this treaty, we are paving the way for a future where every digital action is monitored and controlled by the state.

I have spent the last two years becoming increasingly vocal about my conviction that our current understanding of “AI” is a construct of PR and marketing strategies employed by major tech companies to convince us that the technology we have now is both ‘artificial’ and ‘intelligent’. If you pay close attention to how the bourgeois media has framed AI, you will see how desperate they are to sell us the idea that LLMs are sentient, all-knowing entities that can either solve all of humanity’s problems or be the beginning of our extinction. Both views are highly dangerous because they are so fixated on the future that they forget about the ‘now’.

AI evangelists argue for techno-utopianism. They sell us the idea that a little bit of automation here and there will save humanity from all the troubles we are facing, from inequality to climate change. AI doomsayers, on the other hand, love the Skynet myth because it sells. Our species is obsessed with tales of the world ending as a way to confront our own mortality and the impermanence of human civilisation.

I remember I was in high school when the news that 2012 would be the end of the world, according to the Mayan calendar, exploded. I would be lying if I said that I did not believe it. As a highly religious teenager, I was gullible. The fact that even my pastor mentioned it in one of his sermons, together with some passages from the Book of Revelation, made me contemplate my life every day to the point that I could not sleep. The 2012 phenomenon came and went. It left behind a trail of relieved sighs and perhaps some embarrassed chuckles. But the lesson lingered. It showed how easily we can be swayed by narratives that resonate with our pre-existing beliefs and fears, regardless of their factual basis.

The doomsday narratives on AI play on the same psychological and emotional chords as the 2012 prophecy. They tap into our fears and anxieties, obscuring rational discourse. The doomsday narrative on AI is founded on the idea that the technology we have now is sentient and can therefore make autonomous decisions. That is far from the truth. All the narrative does is absolve the creators and company leaders of accountability and obfuscate the genuine issues at hand: the widening gap between the rich and the poor, algorithmic biases, privacy invasion, rising inequality and, most crucial yet often overlooked, LLMs’ energy expenditure. The use of ‘artificial’ to describe the tech is just a tool for evading accountability. It’s a deflection tactic: if shit hits the fan, they can just say, “Well, the AI did it.”

Shove off, Romeo and Juliet. We’ve just uncovered a love story far more intricate and enthralling than yours. In the grand theatre of late-stage capitalism, there exists a romance, not between star-crossed lovers, but between the titans of Big Tech and government powers. This relationship, steeped in data and driven by profit and surveillance, is rewriting the narrative of privacy, control, and ethical boundaries in our digital age.

Big Tech’s current business model is centred around the mantra of “collect, collect, collect.” It is as if they hold a séance every night, worshipping the greedy pursuit of material wealth, reminiscent of ‘Mammon’, the biblical personification of greed. This approach has unleashed a Pandora’s box, leaving the resulting troves of data particularly vulnerable to exploitation by governments. And recent events have shown that it is not just those with authoritarian inclinations who are tempted to misuse this wealth of information.

This model is, of course, highly profitable. Just look at how Alphabet, Amazon, Apple, Meta, and Microsoft had, just seven days and three hours into 2024, already earned enough to cover their combined fines of $3.04 billion, penalties incurred for legal violations in the US and Europe. Their business model effectively transforms users into products, with our personal information commoditised in the process. This echoes what Marx called ‘commodity fetishism’, in which our social relationships and our identities are transformed into mere economic transactions. Here, our privacy is sidelined. It is treated more as an obstacle to be circumvented than a right to be protected. Such a model is deeply rooted in exploitation.
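To make the scale concrete, here is a quick back-of-the-envelope calculation using only the two figures above, the $3.04 billion in fines and the seven days and three hours it took to earn them back. A sketch, not audited accounting:

```python
# Back out the implied earnings rate from the figures cited above:
# $3.04bn in fines, covered within 7 days and 3 hours of 2024.
fines_usd = 3.04e9
days_to_cover = 7 + 3 / 24  # seven days and three hours

per_day = fines_usd / days_to_cover
per_hour = per_day / 24
print(f"≈ ${per_day / 1e6:.0f}M per day, ≈ ${per_hour / 1e6:.1f}M per hour")
# ≈ $427M per day, ≈ $17.8M per hour
```

At that rate, a multi-billion-dollar penalty is not a deterrent; it is a rounding error priced into the cost of doing business.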

To understand what's going on, we need to look closer at the intricate relationship between the state and capital. Tech companies are obsessed with collecting as much of your data as possible, as we have established. They do this because it helps them make more money. That simple. These corporations, in pursuit of market revenues, frequently align themselves with state agendas. Why? They don’t want to lose access to their markets. Just look at how easily Facebook bowed to the Vietnamese government's censorship demands for posts containing anti-state rhetoric in 2020. Data is indeed the new oil. It has become a prized commodity, and in the process the corporations that control this resource have been elevated to influential positions in the global economy. This power, of course, is not without consequences. That is even more evident if we look at the other side of the coin: governments. They cannot resist using all this data for their own purposes, often at the expense of civil liberties and democratic principles, all hidden under the guise of national security. It's a tale as old as time, where the line between state and modern feudal fiefdom blurs, leading to potential abuses of power and the erosion of citizens’ privacy rights.

Analysing the power dynamics at play here is crucial. These tech companies have tons of money and influence. Their economic clout, derived from vast revenues and market capitalisation, allows them to shape not only market trends but also societal norms and behaviours. As Antonio Gramsci wrote in his Prison Notebooks, hegemony is achieved not through force or coercion, but by winning the consent of the governed. Governments, on the other hand, wield political power and are well aware of tech companies’ influence in society. Their choice to align with these companies, to maintain their own control over narrative and ideology within society, is another piece of the puzzle. The state’s unique authority to define legal frameworks, which includes the ability to impose restrictions or requirements on tech companies, makes the intersection of these two forms of power, economic and political, a pandemoniac arena for working-class liberation.

This issue transcends corporate greed; it has embedded itself in the very fabric of global economic and political structures, just as cancer cells proliferate and entrench themselves within the body's own systems. The pursuit of capital growth and market access frequently overrides concerns for human rights and ethical governance. And the balance of power often tips in favour of capital’s controllers, leaving individuals and citizens marginalised. Addressing this issue requires more than corporate white papers, bullshit reports on sustainable development (that do nothing) and public relations manoeuvres. It calls for a fundamental reassessment of the role and responsibility of tech companies in global society. The once-touted motto of 'don’t be evil' seems a distant memory in today's landscape of digital capitalism, doesn’t it, Google? It's time to confront the elephant in the room and acknowledge the complicity of tech companies in these dynamics. This demands a shift in our understanding of the role of technology in society: not just as a tool for economic growth, but as a medium with profound impacts on our democratic values and the very essence of our collective freedoms.

Data is the lifeblood that allows any machine learning model to perform its task. It’s not magic. It’s not the so-called “AI” being intelligent or sentient. It’s statistics. And as the tech sector races to develop specialised LLMs that it can further commoditise, it will require vast amounts of diverse data to train on, leading to what some have called data hunting.
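To see why ‘statistics’ is the right word, here is a toy sketch of the core trick behind language models: count which token tends to follow which, then sample from those counts. This is a deliberately miniature, hypothetical example; production LLMs operate at a vastly larger scale, but the underlying principle is still next-token prediction, not sentience.

```python
import random
from collections import Counter, defaultdict

# A toy bigram "language model": pure counting, no sentience required.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to observed frequencies."""
    counts = follows[word]
    return random.choices(list(counts), weights=counts.values())[0]

# Generate a short continuation, one statistical guess at a time.
word = "the"
print(word, end="")
for _ in range(8):
    word = next_word(word)
    print(" " + word, end="")
print()
```

Scale the counting up to trillions of tokens and billions of parameters and you get something eerily fluent, but it is still the same trick: pattern frequency, not understanding.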

Just as the Industrial Revolution turned human labour into a mere cog in the machine, the so-called AI revolution has converted our interactions and our identities into commodities to be bought and sold. We have unwittingly contributed to the very systems that exploit us. Our digital footprints are the raw materials for the new-age capitalists: Big Tech. Every one of us who has posted even a single piece of content online is part of this, whether we like it or not. In the recent New York Times vs OpenAI legal debacle, the latter claimed that it is impossible to create ‘AI’ tools like GPT without copyrighted materials. If they can bend the law however they want, what makes you think those images you posted on Facebook as part of your December dump were not used to train Meta’s ML models?

We are now part of a universal digital sweatshop that transcends international borders. Our labour is ignored and uncompensated based on the belief that since we shared content freely, companies have the right to monetise it whenever and however they want. Time and time again, as we have seen in recent news, companies have collected our data without explicit consent. When they do ask for ‘consent’, they give us word salad in the user agreements or just ask us to opt our way out of the inferno that they manufactured. The aggressive collection of data paves the way for a future where a few corporations will have disproportionate control over vast datasets, which they can exploit for unwarranted targeted advertising, surveillance and practices that would reinforce biases or unfairly influence individual choices and behaviours.

Consistent with their greedy branding, the exploitation, of course, does not end with the involuntary surrender of what I call our ‘quantified selves’ to Big Tech. It extends to the more tangible exploitation of human labour in the Global South. Services like Amazon Mechanical Turk sell human labour behind an API to perform tasks such as 'identifying the red apple in this image of a fruit basket'. Of course, they would not dare brand these workers as underpaid digital slaves. Instead, they prefer to label them freelancers to make it all sound ethical. Hurray! Another job for the PR industry! These freelancers forfeit any remaining vestiges of their bargaining power for as little as 2 USD a day so that your LLMs will not spew out rubbish.
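For a sense of how mundane and industrialised this pipeline is, here is a hypothetical sketch of how such a labelling task could be posted through Mechanical Turk’s API via boto3 (assuming AWS credentials are configured). The question markup is pared down, and the two-cent reward is illustrative of the going rates discussed here, not a quote from any real listing:

```python
import boto3

# Hypothetical sketch: posting a micro-labelling task (a "HIT",
# Human Intelligence Task) to Amazon Mechanical Turk.
mturk = boto3.client("mturk", region_name="us-east-1")

# Minimal HTMLQuestion payload; a real task would embed the image.
question_xml = """<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <p>Which fruit in this image is the red apple? Click it.</p>
  ]]></HTMLContent>
  <FrameHeight>400</FrameHeight>
</HTMLQuestion>"""

hit = mturk.create_hit(
    Title="Identify the red apple in a fruit basket",
    Description="Click the red apple in the image shown.",
    Reward="0.02",                    # two cents per human judgement
    MaxAssignments=3,                 # three workers label each image
    AssignmentDurationInSeconds=300,
    LifetimeInSeconds=86400,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```

A few lines of code, and a human being somewhere is summoned like a function call, paid pennies per invocation.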

This is the reality behind your glamorous “AI” models. While “AI” companies in the developed world reap huge profits, the groundwork is outsourced to workers in Bangladesh, Kenya, the Philippines and India. How disgusting is it that the very countries that were once plundered for their resources are now the ones being exploited for cheap labour? But it is fine, isn’t it? As long as we don’t see them. Out of sight, out of mind.

In another attempt to whitewash capitalism, the current world order has advanced the idea of diversifying capitalism to divert our attention from the real issue: corporations are earning record profits while workers are putting in longer hours than their pre-Industrial Revolution counterparts. All this happens as workers' wages continue to stagnate below the poverty line in many countries around the world. I wouldn't be surprised if sea levels rise before minimum wages in the USA do.

I have attended numerous conferences where I was asked to discuss DE&I (Diversity, Equity, and Inclusion). All conversations revolved around one topic: how can we bring more Black, Brown, queer, and other minority individuals into executive positions? They believe this is beneficial because 'diversity' is profitable. Indeed, research confirms that more diverse and inclusive companies are more innovative and, therefore, more profitable. It all boils down to profit and the continuous plunder of the environment for non-stop consumerism. We have been brainwashed to be addicted to 'growth', quantifying every bit of our existence as if we are nothing more than targets to be achieved. If one is not growing economically, they are considered non-contributors to the economy and, therefore, are often relegated to the margins of society. Consequently, they are branded as lazy and weak.

This is even more evident in the growing tech space, particularly among those working in the so-called AI industry. These companies hire consultants, themselves employed by the same capitalist institutions, who tell them that the way to overcome algorithmic biases is to hire a more diverse workforce. Certainly, increasing representation, especially of women and other marginalised groups, in tech is crucial. However, hiring more women and diverse identities without restructuring the underlying system is nothing but a cosmetic change. This is what Rosa Luxemburg meant by the difference between reform and revolution. Reforms do not alter the fundamental structure of the system. They just seek to make things palatable enough to prevent the working class from realising that they are chained. Capitalists have co-opted progressive movements to appear sympathetic and forward-thinking, but at the end of the day, it’s just a businessman donning homeless attire as a costume. Hell, even Amazon is now part of an annual pride march.

If tech companies really wanted to overcome algorithmic biases, they would challenge the profit-driven motives of the tech industry and advocate for a transition towards a systemic model that moves away from the concentration of technological power in the hands of a few corporations. Piecemeal reform in the tech industry is weak and futile. Including certain identities for the sake of it serves no one but the profit-based enterprise, letting the business appear progressive and inclusive without actually challenging the exploitative nature of the system itself.

No matter how many diverse identities you put there, the profit-based system can and will only commodify 'diversity' and turn it into a marketable asset or a brand value rather than a genuine push for systemic equality. Having a queer CEO for an oil company means nothing to the thousands of working-class trans people who have to resort to sex work to survive. Having a woman CEO means nothing to a single mother of three who must make ends meet before her next pay. Having an Asian executive means nothing to me, an Asian, if she is just there as a tool cherry-picked by those in power as a decoration to make the current system appetising. None of these approaches challenge the existing bourgeois hegemony. Enough with the girlboss, pink-economics, leaning in, and rainbow capitalism. They do not and will not liberate the working class. I may be alone in this stance, but if 'inclusion' meant that I would be included in a system that profits off someone's suffering, I do not want to be part of it.

Diversity and inclusion have become the opium of the masses. If they genuinely want diversity, equity, and inclusion, they should start by ensuring that workers from all backgrounds have not just representation but actual power and equity in the workplace. This shift requires a fundamental reconfiguration of the workplace into a collective where workers democratically control their environment. It’s about dismantling the structures that prioritise profit over people and replacing them with a system where workers collectively make decisions, share profits, and have equitable stakes in the outcomes of their labour.
