classstruggle.tech

My name is Jean. Come and join me on this eclectic journey where tech meets politics, art and humour.

Google Scholar bears the remnants of the old search, far removed from the fast-paced enshittification elsewhere. Whilst some of the so-called advancements have seeped in, via the AI outline that is currently only available on Chrome and AI-generated 'research', Google Scholar is still a decent academic search tool. And I say that even though I have been de-googling for a while now. De-googling has been easier since I finished my PhD. Scholar was probably the only Google product I was still active on until last year. I don't use Google Scholar anymore.

As Google Scholar celebrates 20 years, Nature asked a rather contentious question: will Scholar survive the AI revolution? I don't know. I hope it does, and I hope it does so in the way it helped me before GPT. The allure of academia is the search for truth, for knowledge, even if it takes hours or months (at least for me, whilst chasing a paper written by a scholar in China, whew). No amount of chatbot can give the same braingasm as the eureka moment of finally finding the literature review you've been looking for.

Yet, in these conversations, we forget the most important question: will academia survive without Scholar? Without Ebsco? Scopus? Web of Science? We academics are at the mercy of these companies, in the same way most Internet research collapsed after Meta and Twitter pulled their academic APIs. Knowledge that is meant for the betterment of society is locked behind paywalls that benefit neither the researcher who wrote it nor the community. We need public libraries and open access platforms, where access is not dictated by corporate whims and the addiction to profit. Public libraries must be reimagined as digital hubs. And this is more important now than ever, as right-wing vitriol aims to defund libraries and revise histories. Defunding libraries and banning books are an attack on freedom of thought. We need libraries more than ever because these institutions will remember what we humans will forget.

What a lazy and deeply disgusting way to describe what is going on back home, in the Philippines. The Philippines is on its 16th typhoon this year, six of which were just this month. The author, a mayor in the Philippines, even quoted the US National Academy of Public Administration for its mention of “resilient communities.” I mean? Did you even read what they actually meant by that? It feels like he skimmed a research paper, hit CTRL+F, typed “resiliency,” and called it a day. Lazily cherry-picking scientific studies is NOT research.

Yes, Filipinos are resilient. That’s true. And the media loves to showcase this by featuring images and videos of Filipinos smiling and singing amid floods and destruction. It’s even become a hallmark of our tourism branding. Foreigners have jumped on this bandwagon, making videos that highlight how Filipinos face calamities with cheerful determination.

But smiling through adversity does not rebuild homes, nor does it bring people back from the dead. Resiliency does not solve the systemic issues at play, particularly the lack of disaster preparedness and climate inaction. What it does is whitewash the suffering of working-class Filipinos, who are the most disadvantaged during catastrophes, and make it palatable to the ears.

The glorification of resilience creates a dangerous narrative that we, as a nation, can endure anything. It washes the hands of the government under the notion of 'oh, at least you're alive and happy.'

Climate justice isn’t about applauding survival. Climate justice is about tackling the root causes of suffering and ensuring no one has to rely on resilience just to make it through.

The 2024 Paris Olympics was meant to be a celebration of athletic might, yet it became a political battleground where the politics of identity were as fiercely contested as the sports themselves. The Olympics showcased an unsettling moment when Algerian boxer Imane Khelif faced a torrent of bigotry for allegedly not appearing 'woman enough'. The likes of JK Rowling, Logan Paul and tech libertarian Elon Musk were amongst the many who amplified the disinformation campaign against Khelif, which then spiralled into a public spectacle that not only cast doubt on Khelif's identity but also painted a vivid picture of the broader systemic issues at play. Khelif is not trans. Let us put that out there. Yet the furore at the Paris Olympics over her gender identity vis-à-vis her gender expression mirrors the disturbing ideologies that trans people have long endured at the hands of transvestigators, the very same people who intrusively scrutinise appearances in their witch hunt to 'expose' the next trans individual. Even popstar Taylor Swift did not escape scrutiny as people zoomed in on her 'bulge' under her navy blue bathers. It's called a mons pubis, people.

This is an example of how Visage Technologies classifies whether a face is feminine or masculine. Screenshot from https://visagetechnologies.com/gender-detection/

And this is precisely the judgement that Automated Gender Recognition (AGR) seeks to automate. AGR attempts to correlate physical attributes with gender identity, a premise that has already been challenged by contemporary research. An article published in Nature argues strongly that anatomy does not definitively determine someone's gender. The author further highlights the complexity of sex and gender as spectrums, which involve a variety of biological, psychological and cultural factors. Relying solely on physiological traits to define someone's gender can therefore lead to inaccuracies and harm, as gender identity encompasses more than just visible or genetic characteristics. Another paper worth mentioning is that of Daphna Joel, who finds that human brains exhibit a 'mosaic' of features, some more common in females than in males and vice versa. This undermines the notion that there are distinctly male or female brains, highlighting the complexity and variability of brain characteristics across genders.
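To make concrete what 'automating' this judgement looks like, here is a minimal, hypothetical sketch of how an AGR-style classifier is typically wired together (this is not Visage Technologies' actual system; the label set, model choice and function names are my own illustration). The point is structural: the binary assumption is baked into the output layer before the model ever sees a face, so everyone who walks past the camera is forced into one of two boxes.

```python
# A minimal sketch of an AGR-style classifier, for illustration only.
# The binary assumption lives in LABELS and in the size of the final layer.
import torch
import torch.nn as nn
from torchvision import models, transforms

LABELS = ["female", "male"]  # gender reduced, by design, to two classes

backbone = models.resnet18()                       # generic image backbone
backbone.fc = nn.Linear(backbone.fc.in_features,   # final layer forces a
                        len(LABELS))               # two-way decision
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def classify(face_image):
    """Return a label and a confidence score for a PIL face image."""
    x = preprocess(face_image).unsqueeze(0)         # batch of one
    with torch.no_grad():
        probs = torch.softmax(backbone(x), dim=1)[0]
    idx = int(probs.argmax())
    # Whatever the person's actual identity, the answer is one of two labels,
    # reported with a confidence score that lends it a false air of precision.
    return LABELS[idx], float(probs[idx])
```

No amount of training data fixes this: the architecture can only ever output the categories its designers chose.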

By embedding these flawed assumptions into its algorithms, AGR technology institutionalises the discriminatory practices that sparked such controversy at the Olympics. Not only does it systematically enforce a flawed understanding of gender, it also leverages surface-level data to make profound decisions about people's identities. This obsession has real-life repercussions: at best, it perpetuates bias and exclusion; at worst, it costs lives. At airports, AGR's potential to out transgender individuals could be especially harrowing, particularly for those living in countries where their existence carries capital punishment. When travelling in the United States, for instance, TSA agents will select your gender based on how you present. While that sounds workable in theory, it fails to accommodate individuals whose gender identity does not conform to traditional binary norms, as well as transgender individuals who have not undergone gender-confirming surgeries. If a pre-op trans woman, for instance, goes through the female gender screening, the resulting scan will show a square highlighting the groin area. This will then be followed by a pat-down by an agent of the same gender as the traveller. However, if the pat-down does not resolve the security concern to the satisfaction of the TSA agents, the traveller may be subjected to a more invasive search in a private room.

While TSA checkpoint procedures do not utilise AGR, the issues they reveal are emblematic of broader challenges that could escalate with technological advancements. The problems inherent in such subjective assessments provide a cautionary tale as we consider the future of security technologies. And with rapid technological advancements, the potential adoption of AGR is not far-fetched. This shift towards automation poses a real danger that these technologies could be appropriated by governments and people who now use transgender individuals as scapegoats to foster fear or justify discriminatory policies for political gain. We are beyond the issue of privacy here, and even past merely acknowledging risk. We are venturing into realms where the very existence of transgender people is at stake.

Outside airport security, the reach of AGR could extend into everyday spaces such as public restrooms. And this is not just speculative, as we are already seeing AGR employed in various contexts, such as the Giggles for Girls app, the dating app L'App, and even a restaurant in Oslo that targets ads based on gender, showing men pizza and women salad. In areas with deeply conservative values, like the American Bible Belt, the deployment of AGR systems in these spaces to enforce gender norms is a real possibility. The city of Odessa, Texas, has already introduced a $10,000 bounty for reporting transgender individuals who use bathrooms that correspond to their gender identity. This illustrates a troubling trend in which AGR could be used to enforce discriminatory laws and stoke fear among transgender communities.

The application of AGR in public security systems, online platforms, and even everyday consumer technologies frames trans bodies as subjects of suspicion and scrutiny. This invasive oversight is driven by the marriage of state and capital, which seeks to monitor and control societal norms, including rigid adherence to gender binaries. In this setting, trans people are perceived as deviations, bugs in the system if you will, and consequently treated as threats to social or public order, as is the case with the arrests of transgender people in many countries, including Malaysia, Indonesia, India and the Philippines.

Transgender people, by their very existence, challenge these rigid gender norms and, by extension, the division of labour that underpins many economic and social policies. Capitalism relies heavily on the gender binary to sustain the nuclear family model, which in turn supports the reproduction of existing power structures. Angela Davis wrote a strong critique of this in her book 'Women, Race and Class'. She highlights that Black women were subjected to a dual exploitation, both as labourers and as reproducers of more slaves, which was crucial for the perpetuation of the slave economy. This exploitation was not just a byproduct of slavery but a deliberate effort to uphold and benefit from racist and sexist economic structures. The implications of this history are vast and echo in the many ways modern capitalist societies continue to exploit bodies considered 'other', be it through racial, gender, or sexual discrimination. The surveillance and control of trans bodies through technologies like AGR is a continuation of this legacy, in which certain bodies are monitored and regulated more strictly to conform to existing social norms that benefit the capitalist system.

Surveillance becomes a tool to 'correct' deviations within the system. It allows those in power to determine whose lives are deemed livable and whose are not, enforcing a normative standard from which deviation must be monitored and controlled. For trans people, this can mean the difference between visibility and erasure, between recognition and death. The utilisation of surveillance technologies not only strips individuals of their agency but also places them at an increased risk of violence and discrimination. When the state and societal institutions possess the tools to 'watch' and 'correct,' they wield a significant power that transforms surveillance from a simple security measure into a tool of social control.

We are past the point of reform. Surveillance, as it exists in the very depths of the capitalist inferno, is and always has been a tool for subjugating one class by another. Any reform of surveillance that appears to arise from below, merely replacing the face of the oppressor, is still fundamentally rooted in the same power dynamics. As Paulo Freire poignantly noted, "When education is not liberating, the dream of the oppressed is to become the oppressor." No amount of diversifying the faces of those who operate the surveillance apparatus will alter its intrinsic function as an instrument of control. It is a tool designed not just to watch but to maintain and enforce the status quo, to keep existing power structures intact. Replacing the operators of capitalism with more palatable identities does not change the fundamental operation of the system itself: reducing complex identities into manageable data points that can be controlled and manipulated. The answer to surveillance is not the absence of data on marginalised minorities. The answer to surveillance is its abolition.

And as we reflect on this reality today, on Transgender Day of Remembrance, we are compelled not only to honour the memory of those who have suffered and died under the weight of such oppressive mechanisms, but also to act upon these ideas. The time for idealism is over. Let this day serve as a call to action to build a more just and equitable society, one that truly honours the diverse tapestry of human experience and fiercely protects it from the corrosive effects of unwarranted surveillance. Remember, none of us is free until the whole working class is.

I recently spent two weeks in Honolulu on a fellowship funded by a foundation established by a tobacco heiress and a historically contentious institution. I applied to this fellowship without expecting much beyond meeting people and maybe finding a few answers to questions I have gathered over the last 10 years of my life. Instead, it left me with more questions than answers, questions I probably wouldn't have encountered otherwise. During the 14-day residency, one question in particular, from a panel-like session on the night of November 8th, struck a chord and has lingered with me. As we opted for a group discussion over individual storytelling, one of my team members asked, "Jean, being part of so many steering committees, do you believe that change can occur within institutions, and can you be part of that change?" This wasn't the exact phrasing, but it captured the essence of my dilemma.

This question put me on the spot, not just because it was unexpected, but because it echoed thoughts I'd been wrestling with for some time, to which I still don't have the answers. I ended up discussing my departure from the Global Internet Forum to Counter Terrorism following disagreements over their initial Incident-Response Group report, particularly concerning the handling of the Palestine issue. I requested that GIFCT remove my name from the report as I wasn't representing any institution at the time. In trying to make sense of our current realities, I admitted openly that I didn't have the answers. And, as per usual, I retreated to my comfort zone by referencing ideas and concepts that I have read. Being well-read enables me to weave diverse theories into a cohesive narrative, though sometimes these ideas are rehearsed more in theory than in practice.

I referenced Rosa Luxemburg, one of the finest Marxist thinkers, whose pamphlet "Reform or Revolution" has left a lasting imprint on me. Her ideas challenge the efficacy of reform within capitalist systems, a concept that resonates deeply as I navigate my roles within various institutions. Reflecting on that transformative night in Honolulu, I find my skepticism about the capacity of individuals to instigate significant change within established institutions solidifying. This skepticism isn't just theoretical but is intertwined with my own experiences, such as my involvement in the fellowship. Funded by foundations with controversial histories, the fellowship inadvertently presented a paradox: by participating, was I perpetuating the very structures I aim to dismantle? This inner conflict highlights the complex interplay between personal actions and institutional affiliations. The thought that I might be complicit in sustaining a problematic system, despite my intentions to challenge and reform it, weighs heavily on me. (sigh) It raises broader questions about the nature of engagement with such institutions. Can one truly effect change from within, or does the act of participation compromise one's ability to critique and alter those systems fundamentally?

I personally don't believe that significant institutional change is achievable, especially for those of us who are low-wage workers or merely volunteers. Hell, even trade unions (with all their might in some countries) don’t make that kind of change. It is just a lie we tell ourselves to make us feel better, allowing us to feel like we are part of a transformative process, even when the structures remain largely unaltered. This, of course, does not mean that we are failing or that our efforts are misguided. Rather, it speaks to a broader societal condition where every aspect of our existence is commodified, where even our desires for change are packaged and sold back to us in manageable forms. Recognising this doesn't make us cynical. Instead, it underscores a realistic grasp of the power dynamics at play.

I am skeptical about the feasibility of revolution (as per Luxemburg's definition) within our current economic structures, especially within my own lifetime. But this does not deter my commitment to making life more bearable for others while I am alive. Each small action I take gives meaning to my work in a world where we are alienated from our very own labour. But I acknowledge the limits of this: half of the things I am doing will not help create the world I want to see. I involve myself in efforts outside of work to raise class consciousness, in the hope that one day it will foster a long-term awareness that could eventually lead to broader societal shifts. I hope that the seeds I plant will be a catalyst for a future where change might become more feasible, even if it's beyond my time. Engaging in this global movement connects me to a community and a cause that transcends individual limitations, focusing instead on collective awareness and gradual empowerment. These efforts, though they may not culminate in revolution as Luxemburg envisioned, are steps towards a more conscious and equitable world. Or so I think.

Meta has just released its human rights report for 2023, covering 1 January to 31 December 2023. The report is structured into a few subchapters, including sections on 'AI', risk management, issues, stakeholder engagement, transparency and remedy, and something they call 'looking forward'. As per usual, the report seems to serve primarily as a compliance document designed to fulfil regulatory obligations rather than effect real change. Substance-wise, the report has nothing to offer except corporate jargon and the co-optation of human rights concepts to give the impression of progressive intentions. This blog aims to dissect Meta's 2023 Human Rights Report and look behind the glossy facade of corporate speak.

On AI

Prior to the report's release, Reuters published a news report detailing how Meta Platforms will start training its 'AI' models using public content shared by adults on the two major social networking sites it owns: Facebook and Instagram. In the same month, Meta also admitted to scraping every Australian adult user's public photos and posts to train its 'AI', but unlike the way it ran this absurdity in the EU, it did not offer Aussies any opt-out option. Meta did not address any of this in its so-called human rights report. In fact, the whole section on 'AI' reads like it was written by PR practitioners whose main goal is to sell us an idea: that 'AI' models are 'powerful tool[s] for advancing human rights', a portrayal that I find both naive and disingenuous.

This short blog will not go into specific cases of how the current 'AI' boom has led to shrinking democratic spaces, but examples in India, Bangladesh, Pakistan and Indonesia abound. Not to mention how the 'AI' boom has been fuelling the rise of digital sweatshops, where workers, mostly from Global Majority countries such as Kenya and the Philippines, are paid less than 2 dollars per hour to label content. And it did not stop there: Meta went on to fire dozens of content moderators in Kenya who attempted to unionise. These workers, tasked with reviewing graphic and often deeply traumatising content on Facebook, were subsequently blacklisted from reapplying for similar positions with another contractor, Majorel, after Meta switched firms. The output data from moderators is then used to train machine learning models that enhance systems primarily aimed at Western consumers, ostensibly to make these technologies "safer".

Another often-overlooked consequence of these AI models is the immense energy consumption required by the power-hungry processors that fuel them. Figures from the University of Massachusetts Amherst put the carbon footprint of training a single large language model at roughly 272,155 kg of CO2 emissions. Big Tech's obsession with 'AI' has created a burgeoning demand for the construction of data centres and chip manufacturing facilities, especially in regions of the Global Majority. These data centres require significant computational power that generates considerable heat, and cooling it means they are literally sucking dry the water supplies that local communities depend on for survival.

Now, Meta can argue that these cases were not included because they fall outside the date coverage of the human rights report, but the evidence shows otherwise. Meta released Llama 2 in July 2023, which it described as 'open' in its report. The use of the word 'open' here, and in the succeeding press releases regarding Llama 3, would require another blog of its own. But one thing is for sure: nothing about the Llama licences makes them open source. The training data for both LLMs was never publicly released. According to Meta, Llama 2 was pretrained on publicly available online data sources, whilst Llama 3 was pretrained on over 15T tokens that were all collected from publicly available sources. Meta is not the only one that is not transparent about this. If you remember the WSJ interview with the OpenAI CTO, you will remember the grimace she made after being asked whether OpenAI used videos from YouTube. I do not expect any of these companies to release the training data they used for their large language models because that would open a can of legal problems for them. Meta's lawyers, for instance, warned the company about the legal repercussions of using copyrighted material to train its model.

This reinforces my earlier point on why the so-called human rights report is nothing but a press release that attempts to paint a rosy picture of Meta's commitments while conveniently glossing over the darker aspects of its operations that have significant human rights implications. The selective transparency and regional inconsistencies in user consent practices (such as the stark differences in how users in Australia and the EU are treated) raise a pressing question: where do we, in Southeast Asia, stand? Given that our governments will likely not champion our data rights as vigorously as those in the EU (not that the latter are without their flaws), the risk to our privacy and rights is even more acute. This regional disparity in data protection and user consent highlights a more disturbing trend. Tech giants like Meta can, and do, exploit weaker regulatory frameworks in some regions to sidestep the stringent compliance obligations they would otherwise have to meet in other jurisdictions. This tells us how companies are ethics-washing their policies: deciding on things based on what would provoke the least backlash.

Deflecting responsibility

The section on 'risk management' reeks of posturing and selectivity. The mention of the UN's Convention on the Rights of the Child (CRC) as the backbone of Meta's "Best Interests of the Child Framework" is tokenistic at best, especially when set against its commercial priorities. A platform that is fundamentally driven by user engagement and data monetisation cannot prioritise the well-being of adults, let alone children. Meta has recently introduced 'teen accounts' that "will limit who can contact teens and the content they see, and help ensure their time is well spent." It sounds good on paper, but what Meta is actually doing here is deflecting responsibility onto Apple and Google. Meta is pushing for mobile platform providers to enforce app installation approvals, which basically offloads the burden of safety measures onto other companies. While it should go without saying that Apple and Google do have a crucial role in managing the ecosystems their platforms support, it is imperative for app developers, particularly those like Meta whose apps are used by millions of children worldwide, to take primary responsibility for the safety features within their own products. About time you owned your responsibility, Meta. Stop deflecting and start owning the consequences of your business model.

Censorship and content moderation

Meta's content policy has been under fire for quite some time now, especially given its complicity and failures that directly contributed to the Rohingya genocide and to the violence against Ethiopia's Tigrayan community. More recently, following the events of 7 October in Gaza, Meta has once again proven itself incapable of adhering to its own standards and promises by further censoring Palestinian voices. Meta's pattern of policy enforcement has been overly restrictive against pro-Palestine content, whether through deletion or shadowbanning. Back in 2021, Meta commissioned Business for Social Responsibility (BSR) to conduct a rapid human rights due diligence. The BSR report found that "Meta's actions in May 2021 appear to have had an adverse human rights impact on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred." However, events in October 2023 have shown Meta's insufficient follow-through on its prior commitments. If anything, they exposed a worsening crisis that highlights the company's capacity (or lack thereof) to enact impactful changes to its content moderation strategies.

In response, Meta has introduced updates, as noted in the Meta Update: Israel and Palestine Human Rights Due Diligence report. The most notable of these was the replacement of the term 'praise' with 'glorification' in its content restriction policy. But these definitions remain overly broad and subject to varying interpretations based on cultural, social and political contexts. This is made even more problematic by the requirement for users to "clearly indicate their intent." Expecting users to preemptively clarify their positions is unrealistic and places an undue burden on those already vulnerable to suppression. Meta's updates do little to address the deeper structural flaws in a content moderation strategy that continues to operate with a heavy-handed approach.

Another example is Meta's decision to ban the term 'zionist'. This decision is anchored in the conflation of legitimate criticism of a political ideology with hate speech. That very conflation serves to immunise the Israeli government from legitimate scrutiny under the guise of preventing hate speech. By labelling all critical uses of "zionist" as hate speech without sufficient contextual differentiation (read Ilan Pappe's Ten Myths About Israel), there is a significant risk of branding scholars, human rights activists and critics of Israeli policies as antisemitic. Not only does this approach stifle necessary dialogue, it can also be exploited to deflect from genuine human rights discussions. If criticisms of a political nature are too readily classified as attacks on Jewish identity, then all discussions of human rights abuses, international law and the humanitarian impacts of the Israeli occupation could be unjustly censored.

Final thoughts

I feel like I should end this short blog by declaring that the arguments made here are not exhaustive. A counter-report is needed if we want to scrutinise every point made in Meta's report. Yet, if there's one critical takeaway for you, the reader, it's this: Meta's 2023 human rights report is an exemplary case of corporate doublespeak, artfully crafted to masquerade compliance as commitment. But this should not surprise us, especially from a company that profits immensely from the very practices that pose risks to human rights. And this assessment doesn't just come from us. The International Trade Union Confederation named Meta as one of the main culprits in facilitating the spread of harmful ideologies worldwide, particularly in weaponising its platforms for the dissemination of far-right propaganda. Meta's aggressive lobbying, on which it has squandered 8 million euros in the EU alone, circumvents accountability to ensure its profit machine steamrolls over any democratic control or oversight. This is a deliberate assault on democracy. And at the core of these issues is Meta's relentless drive for profit.

The business model incentivises invasive data practices and the commodification of personal information. The real issue at hand is not just the individual failures in AI application, content moderation, or crisis management, as grave as these are. The problem is the overarching business model that drives these failures. Until Meta confronts this root cause, every human rights report it issues will be nothing more than a smokescreen hiding the unpalatable truth of its operations. Meta operates with the arrogance of a quasi-state, thriving on an architecture of surveillance capitalism that exploits users with impunity. It is imperative now, more than ever, that robust and enforceable regulations are implemented to curb the pervasive influence of all Big Tech companies. These measures are crucial to dismantle their overreach and ensure they are held accountable for their impact on society and individual freedoms.

In an era dominated by digital interactions and transformations, the discourse around digital rights has increasingly become steeped in ideologies anchored in the tenets of individualism. We have been told over and over again that these frameworks were designed to champion personal freedoms. Yet recent events like this, this, this and this suggest quite the opposite. If anything, the frameworks we have today mask a profit-driven agenda that aims to commercialise every facet of our digital existence and perpetuate colonial legacies of domination and exploitation under the guise of "modernisation" and "progress". This deeply ingrained individualistic focus in the realm of digital rights not only sidelines our collective needs; it actively participates in the neoliberal assault on communal structures and, in the longer term, on environmental sustainability. This blog seeks to scrutinise the limitations of our current understanding of digital rights and explore alternative approaches that prioritise collective welfare and environmental health. As our planet faces record-high temperatures and escalating environmental crises, it is imperative that we shift our focus from the 'me' to the 'we'.

Back in 1995, Nicholas Negroponte predicted that the Internet would "flatten organisations, globalise society, decentralise control, and help harmonise people". It is quite a utopian picture of digital connectivity, starkly different from our reality in 2024. Instead of being a democratising power, the digital landscape has exploited user data for profit, with a few dominant players wielding significant influence over how data is used and monetised. Much of the Internet we see today can be compared to emerald mining, a process that often involves the extraction of valuable resources under conditions that are far from equitable. The digital realm thrives on the same logic of data extraction, often without users' explicit consent or fair compensation, mirroring the exploitation of the colonial era, when resources were taken from local communities for the economic advantage of more powerful entities, leaving those communities impoverished and their environments depleted. And as we observe Indigenous Peoples Day this month, it is fitting to reflect on the parallels between historical colonisation and the digital exploitation unfolding right before our eyes. Today, a similar scenario plays out as vast amounts of data are harvested from people worldwide. From these datasets emerge lesser-known, labour-intensive digital sweatshops fuelled by the likes of Amazon Mechanical Turk. In countries like the Philippines, India and Kenya, workers are employed under harsh conditions to process and label these enormous data pools, tasks that are essential for training AI systems. Such labour is often tedious and poorly paid, yet it is the backbone of the sophisticated algorithms we see on search engines and other digital platforms. Echoing the sentiment of a popular 90s song, it is profoundly ironic that those who toil to advance cutting-edge technologies often do so in circumstances that starkly oppose the futuristic applications they help create.

This prevailing "me, me, me" agenda in the digital realm further exacerbates the issues highlighted above, where an individualistic ethos not only promotes but necessitates a self-focused view of technology. But this cultural shift is not just a byproduct of the natural course of tech evolution. Rather, it is active engineering, concocted by public policy and commercial interests, as Greenstein notes in his book "How the Internet Became Commercial". Corporations created environments that encourage constant connectivity and self-promotion, directly influencing how we conceive of concepts like digital rights and turning them into matters of personal concern rather than communal responsibility.

Take privacy as an example. Privacy is not merely a personal choice, despite how often it is framed that way. Privacy is a social predicament, which means one person's decisions regarding their data can have far-reaching consequences for others. I am once again citing Kasper (2007), who argued that "[p]rivacy is a socially created need, and without society, there would be no need for privacy". One person's choice of an app can jeopardise the privacy of others. When enough people use a non-secure service, it becomes a norm, making it harder for others to choose more secure options without sacrificing social or professional connections.

Privacy is also highly influenced by one's position in the social hierarchy. The richer you are, the easier it is to obtain a higher level of privacy than those lower down the social ladder. This very commodification of privacy creates a false dichotomy between those who can afford privacy and those who cannot. A tale as old as time. When we treat privacy as a purchasable good, we marginalise those who lack the resources to buy into these protections. And by framing privacy as an individual choice, society implicitly blames those who cannot afford privacy-enhancing tools for their lack of privacy. This perpetuates the false idea that privacy is a matter of personal responsibility and capability rather than a systemic issue rooted in economic inequality. It reinforces the idea that people attain privacy simply because they choose to, and it ignores the financial and social barriers that prevent many from securing their personal data.

Many people are not equipped with the knowledge to make informed decisions about their privacy. Some do not understand the trade-offs they are making by sharing their personal information in exchange for free services. This gap in knowledge, and the individualistic push, highlight a significant divergence from the long-term thinking and sustainability prioritised by indigenous traditions, which often consider the impact of actions on future generations. To rectify this, we can draw on the collective-focused principles of many indigenous cultures, such as the Māori's whanaungatanga, the Igorot's og-ogbo and the Minangkabau's gotong royong. These groups prioritise collective well-being over individual success, a stark contrast to the self-centred, narcissistic approach we see every day as we browse Instagram's Explore tab or TikTok's Discover page. For these cultures, decisions about community resources are made collectively. This reflects a deep commitment to the entire community's welfare, which could inform a better approach to digital privacy, and to digital rights more generally.

In the face of the current digital rights framework dominated by commodification, consumerism and individualism, there is an urgent need to pivot towards a decolonial and degrowth approach to digital rights. The Euro-American-centric paradigms that have long dominated and distorted our approach to digital interaction have failed the global majority. These frameworks perpetuate a colonial legacy and dictate terms and conditions from a viewpoint that aligns with Western interests at the expense of local and indigenous practices. In various communities across Asia and Africa, for example, data and digital resources are traditionally seen as collective assets, integral to the welfare and advancement of the entire community rather than just the individual. This communal approach to digital resources is evident in practices such as community-managed cooperative mobile networks in South Africa and Mexico, where the technology is maintained and used by the community to ensure that all members have access. Such models stand in stark contrast to the individualistic, privatised approach, where data is often siloed and monetised on an individual basis, leaving control largely in the hands of corporate entities.

Another critical framework that can guide a rethinking of digital rights and advocate a shift away from unsustainable consumption is degrowth. Degrowth challenges the relentless drive for technological advancement and data accumulation. Instead, it proposes that we prioritise ecological sustainability and human well-being over corporate profits. At the heart of the degrowth argument is the call to curb unnecessary data collection, which is critical in an era where the over-collection and exploitation of data are rampant. This would mean that data collection is limited strictly to what is necessary for the functionality of services, rather than for surplus-value extraction through surveillance capitalism. We need to reorient our relationship with technology so that it aligns with the principles of human rights and environmental sustainability. As long as these problematic business models persist, we are trapped in a destructive cycle where companies often play the dual roles of arsonist and firefighter.
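To make the data-minimisation point less abstract, here is a minimal, hypothetical sketch of what "collect only what the service needs" could look like in code. None of this comes from any real service; the field names, retention period and function are my own illustration of the principle.

```python
# A hypothetical sketch of data minimisation: the service declares up front
# which fields it needs to function, drops everything else at the door, and
# attaches an expiry date to every record instead of keeping it forever.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REQUIRED_FIELDS = {"username", "email"}   # the minimum needed to run the service
RETENTION = timedelta(days=90)            # delete the record after this period

@dataclass
class Account:
    username: str
    email: str
    expires_at: datetime

def register(submitted: dict) -> Account:
    """Keep only the declared fields; silently discard anything extra."""
    data = {k: v for k, v in submitted.items() if k in REQUIRED_FIELDS}
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return Account(
        username=data["username"],
        email=data["email"],
        expires_at=datetime.now(timezone.utc) + RETENTION,
    )

# Birthday, gender and location are never stored "just in case" for later
# profiling; they simply do not survive past this function call.
account = register({
    "username": "jean",
    "email": "jean@example.org",
    "birthday": "1990-01-01",
    "gender": "x",
    "location": "Manila",
})
```

The design choice worth noticing is that minimisation happens by default and in structure, not as an opt-out the user has to discover.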

As we move away from individualistic data ownership models to collective data governance, we need to ensure that digital resources are managed in ways that benefit entire communities rather than individual corporations. This could involve community-controlled data trusts that prioritise transparency and equitable access. We need a radical reevaluation of how digital technologies are developed, deployed, and discarded, emphasising moderation, regulation, and the minimisation of digital footprints. Integrating indigenous perspectives into digital rights discourse can provide valuable insights into how digital technologies might be harmonised with cultural practices and communal values, offering a more holistic approach to privacy and data protection.

Digital rights policies should align with broader sustainable development goals, ensuring that digital growth does not come at the expense of environmental degradation or social inequality. This could mean imposing stricter regulations on energy consumption of data centres or designing technologies that are both energy-efficient and accessible to economically disadvantaged communities. To combat the monopolistic control of tech giants, supporting decentralised and open-source technologies can empower smaller businesses and communities. This would help reduce the concentration of power and promote a more democratic digital landscape. 

By addressing these aspects, the discourse on digital rights can shift towards a model that is not only anti-capitalist but also decolonial and aligned with degrowth principles. This would foster a digital environment where technologies serve the collective good, ensuring fair access and sustainable practices that respect both human and environmental rights. The challenge for us activists lies not just in resisting the commodification of digital spaces but in reimagining them.

Below is the speech I delivered on behalf of Manushya Foundation during the United Nations’ so-called multistakeholder information session on the GDC:

We appreciate the opportunity to speak as we critically evaluate the third and latest revision of the Global Digital Compact. This document, released under the silence procedure, has provoked considerable discourse, leading to member states breaking their silence due to contentious points within the draft.

We are compelled to speak out because the subtle yet profound changes from the previous drafts signal a dangerous shift towards centralisation and bureaucracy that contradicts the decentralised nature that has made the Internet a bastion of freedom and innovation. This push towards centralisation mimics heavy-handed governance models, which have only ever stifled free expression and restricted access to information. Furthermore, the introduction of additional bureaucratic structures, despite the existence of competent bodies already addressing these issues, is redundant.

We are particularly alarmed by the reliance on corporate self-regulation. This method has consistently failed us, serving corporate interests at the expense of privacy and ethical conduct. Moreover, the diluted language concerning human rights and the marginalisation of civil society’s role in this draft is unconscionable. Civil society is the guardian of public interest, yet this draft relegates it to the periphery, favoring instead a top-down approach that centralises power and silences dissenting voices. The weakening of the role of the Office of the High Commissioner for Human Rights represents a profound failure of responsibility. It reduces a vital watchdog to a token participant, undermining its ability to challenge and address human rights abuses in the digital realm. This is not just a step back; it is a leap into dangerous territory where human rights are not the guiding principles but mere afterthoughts.

This compact, as it stands, is a recipe for increased surveillance, censorship, and repression under the guise of digital cooperation. It uses vague language that some countries can and will exploit to justify their crackdowns on digital freedoms.

We urge the co-facilitators and member states to amend this draft and to engage in a truly inclusive, transparent process that places human rights, transparency, and the global public interest at the core of digital governance. We owe it to the global citizenry to ensure that their rights are not traded away on the altar of expediency or political convenience.

Thank you for your attention. We look forward to engaging in a process that respects the voices of all stakeholders and not just states and protects the digital rights of every citizen, not just the interests of the powerful few.

And here's me, in a screenshot taken by WSIS :D

The United Nations has finally made public the third and latest revision of the Global Digital Compact, which it originally published under the silence procedure. This procedure gives member states 72 hours to raise concerns and break their silence; should they fail to, the text is adopted as is. As of 17 July, more than 10 member states had broken silence over controversial paragraphs. Co-facilitators Sweden and Zambia have since scheduled informal consultations for 17 August 2024, roughly six days away at the time of writing this blog. The Compact is meant to be adopted as an annex to the "Pact for the Future" during the "Summit of the Future" in New York on 21 September.

I got a copy of the third revision just this weekend. The transition from the second to the third draft of the GDC is subtle, so subtle that I needed to print out the two drafts and compare them side by side just to notice the differences. Subtle as they are, the shifts in language and emphasis are significant, and they could have profound implications for the protection and promotion of human rights globally. To avoid repeating arguments already made against the GDC, I will offer five short key points on why the GDC is weak and redundant and, most importantly, how it opens the door for authoritarian regimes to flourish in the digital age.

There is nothing particularly groundbreaking about the GDC. I have yet to find a reasonable rationale for why Guterres would even propose such a ludicrous document just as he is on his way out. Basically, everything written in the document has already been discussed within the walls of the IGF, WSIS, IETF and W3C. Its call for the creation of another Scientific Panel (parag 55), this time for AI, particularly stands out. Given the plethora of existing bodies dealing with similar issues (like UNESCO and the ITU), it is worth asking why we need another panel on emerging technologies. Is it Guterres' way of cementing his legacy? Bad idea, if it is. But I guess we'll never know. The creation of another panel does not happen without substantial funding, expert recruitment and administrative support. For an organisation that repeatedly calls for funding, I am certain there are programmes where it would be more prudent to allocate resources towards more direct interventions in technology policy and implementation.

The GDC's objective of centralising Internet governance in New York is the very antithesis of the qualities that have made the Internet thrive. Adding bureaucracy and centralisation to Internet governance is never the way to go. For this, we need only look at the development and evolution of foundational Internet protocols like TCP/IP and standards like HTML and DNS, the very cornerstones of the Internet's architecture. These were not the products of a single, centralised authority. Rather, they emerged from broad, collaborative efforts involving diverse stakeholders across various sectors and nations. Guterres' current proposal is not only detrimental to the foundational principles of the Internet, it also has far-reaching implications for the future. How? Well, the obsession with top-down models of Internet governance seen in countries like Russia, Iran, North Korea and China has shown us one thing: government control over the Internet leads to severe limitations on access and widespread censorship. By centralising global Internet governance, we risk replicating these failures on a global scale.
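For a small, concrete illustration of that decentralised architecture, here is a hedged sketch (it assumes the third-party dnspython package is installed and that you have network access) that asks which name servers are authoritative at each level of a domain. No single operator answers for the whole tree: the root only knows who runs org., and org. only knows who runs example.org.

```python
# A minimal sketch of DNS delegation using dnspython (pip install dnspython).
# Each zone only points downward to the next set of servers; no central
# authority holds, or needs to hold, the entire namespace.
import dns.resolver

resolver = dns.resolver.Resolver()

for zone in [".", "org.", "example.org."]:
    answer = resolver.resolve(zone, "NS")             # who is authoritative here?
    servers = sorted(rr.target.to_text() for rr in answer)
    print(f"{zone:15} -> {', '.join(servers)}")
```

Governance grew up in the same shape: the IETF, W3C, registries and operators each steward their own piece, which is exactly the property a New York-centred bureaucracy would undo.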

Second, the GDC apparently subscribes to technological determinism, the notion that technology is the primary cause of social change. A determinist view of technology downplays the role of the decision-makers behind these technologies. It renders the investors, the venture capitalists, the policymakers, the very people who make critical decisions about the design and deployment of technology, invisible to the public eye. David Nye, in his book "Technology Matters: Questions to Live With", argues that people have historically shaped technologies to fit their needs rather than being merely shaped by technology. The GDC should therefore focus on regulating the actors behind the technology instead of putting all its eggs in the basket of regulating the technologies they create.

The heavy reliance on corporate self-regulation (parags 25, 31(a, b, c), 35(a, c)), which we now know is just a euphemism for a 'do what you want or you will get a slap on the wrist' approach, underscores the urgent need for a framework that can cut through regulatory capture. This means the United Nations Guiding Principles on Business and Human Rights framework is already out of the conversation. If any of these "we call on digital technology companies and social media platforms" appeals had worked in the past, we would not be in the pandemonium we are in today. History is littered with examples of how, without mandatory compliance measures and sanctions for violations, reliance on corporate goodwill is a recipe for failure. Various tech companies such as Meta, OpenAI, Google, Amazon and Microsoft, amongst many others, have been implicated in privacy breaches and unethical practices despite the existing guidelines.

The GDC mentions the private sector more than it mentions civil society in its third revision. This is one thing the GDC has excelled at: showing us whose interests are truly being served and the potential consequences for equity and sustainability in the global digital landscape. As businesses whose main goal is to accumulate as much profit as possible, these companies will advocate for less regulation and oversight in areas where greater control is necessary, such as data privacy and security. It would not be an overstatement to claim that the current version of the GDC is a win for state overreach and capitalism.

Third, much like a thief who operates under the veil of darkness to avoid detection and accountability, the manner in which this draft was circulated among 'stakeholders' (which, by the way, is just another term for member states) bypasses the broad and inclusive feedback mechanisms that had characterised earlier revisions. If anything, this approach tells us one thing we must foresee. Just as the voices of civil society and human rights organisations were sidelined during the Cybercrime Treaty negotiations, the future of human rights activism in the United Nations is grim. We must prepare for a system where the very people who are most attuned to the on-the-ground realities of digital rights and human rights are deliberately pushed to the margins of the discussion. This shift towards state-centric governance threatens to erode the very foundations of the UN's commitment to human rights, much as a night-time theft would leave a community feeling vulnerable and violated. It is hypocritical of the United Nations to call for "civil society groups to endorse the Compact and take active part in its implementation and follow-up" (parag 65). Calling on civil society to endorse a document from which it was deliberately marginalised is a disingenuous effort to appear inclusive while structuring the Compact in a way that inherently favours more powerful stakeholders. For this, there is only one thing I can say:

Fourth, a quick run-through of the language on human rights in both drafts shows a significant weakening of the integration of human rights into technology governance. In short, it dilutes the commitment to human rights as foundational principles. For a pact that boasts of being 'global' in nature, the lack of specificity regarding how human rights will be upheld throughout the lifecycle of digital and emerging technologies leaves much open to interpretation. And as we have seen in the recently passed Cybercrime Treaty, the broad and often vague language used in defining objectives such as fostering "an inclusive, open, safe, and secure digital space" (parag 32) is very likely to be exploited by authoritarian governments to justify stringent control over digital ecosystems under the guise of national security or cultural preservation. China's Cybersecurity Law uses the same language to ensure its "cyberspace sovereignty". Authoritarian governments love vague terminology because they can change its meaning whenever it is convenient. Viet Nam's Cybersecurity Law does much the same, using ambiguous language about what constitutes a threat to "national security" to justify strict controls over online content and surveillance of digital communications. This flexibility allows such regimes to interpret these terms in ways that lead to greater censorship, surveillance and restriction of digital freedoms.

This inconsistency in the GDC is even more apparent in the discarding of the references to the "non-military domain" that appeared in the second revision (parags 13(e), 20, 21(i) and 49). This means the GDC has blurred the lines between civilian and military cybersecurity measures. Military-focused cybersecurity prioritises national security and often justifies extensive surveillance, data collection, and even the suppression of information in the name of security. When applied in civilian contexts, such measures can lead to pervasive surveillance of ordinary citizens and infringe on their human rights. The indiscriminate collection of data, justified under the guise of cybersecurity, will result in the monitoring of political dissent, the targeting of activists, and the erosion of civil liberties.

And if that wasn't enough, the GDC also watered down its previous calls for cybersecurity-related capacity building (parags 13(e), 21(i)) in the third revision. The absence of specific cybersecurity initiatives could lead to weaker defences against cyber threats that target vulnerable populations. Here, again, activists, journalists and human rights defenders are particularly at risk. If cybersecurity measures are not prioritised, we are looking at more targeted attacks designed to silence dissent and curb freedom of speech. This leads me to my next point: the underplaying of the need to address state surveillance and data privacy in the digital age. Recent events have revealed how pervasive states' use of surveillance technologies has become, often without adequate judicial oversight and checks and balances. The Pegasus spyware is the perfect example here, as it has led to widespread violations of privacy and has been linked to crackdowns on dissent in countries like Mexico, Saudi Arabia, Indonesia and India. By relegating the issue to a single, brief clause in parag 30(d), the GDC not only fails to recognise the significant human rights risks posed by unchecked surveillance practices but also omits any actionable commitments that would ensure robust protection of human rights in the face of technological advancements. The GDC is nothing but a paper full of word salad.

The mere mention of "international law" in 30(d) is used to sidestep real accountability. Just look at Russia's invasion of Ukraine, Saudi Arabia's human rights violations in Yemen, China's surveillance in Xinjiang, and Israel's self-serving justifications for settlements and security measures. These examples are as clear as daylight. The lack of explicit commitments to specific human rights instruments will only set the stage for states to continue their egregious abuses under the flimsy guise of compliance with international law. This glaring omission leaves a critical gap in human rights protections, effectively giving states a free pass to violate privacy and other fundamental rights. And don't get me started on the lack of acknowledgement of the ongoing debates around encryption backdoors, which some governments are aggressively pushing for. These backdoors would not only obliterate privacy rights but also penalise activism at its core. The GDC's silence on this crucial issue is deafening.

Another alarming change from the second to the third revision is how the role of the Office of the High Commissioner for Human Rights (OHCHR) has been recast. The second draft explicitly notes the OHCHR's efforts to provide expert advice and practical guidance on human rights and technology:

“We take note of OHCHR’s ongoing efforts to provide, upon request, expert advice and practical guidance on human rights and technology issues to governments, the private sector and other stakeholders, including through the establishment of a UN Digital Human Rights Advisory Service within existing resources.” (parag 24)

This clause has been weakened to a mere acknowledgement in the third revision (parag 24):

“We acknowledge OHCHR’s ongoing efforts to provide through an advisory service on human rights in the digital space, upon request and within existing and voluntary resources, expert advice and practical guidance on human rights and technology issues to governments, the private sector and other stakeholders”

You can see here how the GDC has scaled back and put strict limitations on OHCHR’s capacity to influence by removing the reference to the establishment of a dedicated UN Digital Human Rights Advisory Service. To me, it sounds more like “Yes, OHCHR, you released a report. Thank you. Next.” But it didn’t stop there. The phrase “upon request” fundamentally positions the OHCHR's intervention as reactive rather than proactive. Basically, “if you are not being asked, shut up.” A reactive advisory model is nothing more than a token that states will use as they please. The lack of a permanent, dedicated structure also means that human rights will take the backseat.

Lastly, the biggest elephant in the room: technological imperialism. With all its mentions of ‘South-South’ and ‘North-South’ terminology, the GDC appears to be attempting to address the disparities in technology across countries. But a careful analysis of the paragraphs illustrates how the GDC may perpetuate, if not exacerbate, tech imperialism. Paragraphs 19, 21(b) and 28(a) call for an “enabling environment” that supports innovation and digital entrepreneurship. Sure, it sounds appealing, but this framework hinges on two things: partnerships and technologies that are primarily developed in high-income countries. Following this logic, developing countries will be pushed into extreme dependence on foreign technology and expertise that, in most cases, never aligns with their local needs or capacities. The phrase “mutually agreed terms”, which is mentioned five times, highlights this implicit risk. We know how “terms” often translate to terms dictated by the more powerful party, especially when there is a significant power imbalance between the countries involved. With regards to technology, this means terms created by large tech firms and highly developed countries, leaving developing countries with little choice in the matter.

The lack of any enforceable mechanisms in the GDC, which we have shown in the previous paragraphs of this blog, will only ensure that global data governance initiatives are co-opted by powerful nations and corporations. Parag 62 specifically calls for “increased investment, particularly from the private sector and philanthropy, to scale up AI capacity building for sustainable development, especially in developing countries.” This is a textbook example of how market concentration and monopolies start. The involvement of large multinational tech corporations in AI capacity building will lead to a concentration of technological resources and expertise in the hands of a few. It will also stifle local competition by outcompeting or acquiring local startups and companies that lack similar resources or scale. We have seen this in the mobile telecommunications market in Africa, where large international companies have established dominant positions, making it difficult for smaller local companies to compete effectively. In the AI domain, if companies like Microsoft or Google lead their versions of capacity-building efforts, they will most likely use their own technologies. This is a gateway for companies to dominate AI markets and infrastructure in developing countries by setting standards and controlling the ecosystem in ways that favour their business models and products. And we haven’t even gotten to the peak of the problem yet. We know that private sector-driven AI development requires extensive data collection and processing, and data generated in developing countries is often exploited by multinational corporations. A good example of this is Facebook’s Free Basics in India, which created a walled garden of internet services while potentially accessing a wealth of user data under the guise of providing free internet access.

The Global Digital Compact, as it stands, is not just an ineffective tool for safeguarding digital rights. That’s a given. If the manner in which the GDC has been developed and revised is not enough for you to despise this document, I hope that I have provided you with at least four more arguments to strengthen your aversion. The shortcomings in the document point to a deeper problem with how policymakers at the UN conceptualise digital governance. This intense drive towards centralising Internet governance mirrors the authoritarian tendencies that suppress open and free communication. These issues provide a compelling basis to critically evaluate and ultimately challenge the direction the GDC is taking us. The vague commitments laid down in this document make it ill-equipped to address challenges in the modern world. Issues such as data privacy, encryption, freedoms and human rights were explicitly watered down to pave the way for increased private sector investment and the creation of additional regulatory bodies like the proposed AI panel. In light of these critical issues, it is clear that the GDC risks becoming not just ineffective but a tool that could potentially cause more harm than good in the digital domain. Ariana Grande put it perfectly: thank u, next.

Cybersecurity Awareness Month is still two months away, but given the importance and urgency of this topic, I thought I’d write about the UN Cybercrime Treaty. As the negotiations for the treaty draw to a close on August 9th, the stakes couldn’t be higher. The treaty was initially proposed by Russia and is now under the management of the UN Office on Drugs and Crime. It promises to strengthen “international cooperation” against “cybercrime”, which to my cybersecurity ears translates to: “We want more power. And we want a UN stamp on it.” Every detail buried within the treaty’s provisions is a step closer towards government overreach, especially in the areas of surveillance, data collection, and criminalisation. I’ve listed three reasons why the treaty is no friend of human rights activists.

First, the treaty mandates expansive powers for data preservation and access (see Articles 25-29). Basically, this legitimises state surveillance. The mandate for “expedited” data preservation, and the provision allowing the state to continually renew orders to preserve electronic data, sets a precedent for perpetual surveillance. There is also no definition of what constitutes “grounds to believe”. At this point, are we just meant to ask the mirror on the wall?

I am sure the mirror will have a hard time finding who, not because there is none, but because there are too many. The lack of definition gives authorities unchecked power to justify indefinite data preservation. This blatant overreach tramples on privacy rights and creates a chilling effect on free speech. Might as well be the end of investigative journalism as we know it. Article 27(b) also allows the state to force a service provider to divulge information related to the case being investigated. And history is littered with examples of why this is a bad idea.

In Vietnam, Facebook bent the knee to the government faster than Jon Snow bent his to the Targaryen Queen. It was noted that Facebook has “been making repeated concessions to Vietnam’s authoritarian government, routinely censoring dissent.” In Iran, authorities have used private Telegram chats, phone logs, and text messages to incriminate activists, as seen in the case of Negin, who was interrogated and threatened with execution. In Pakistan, the government released an order titled “citizens protection against online harm 2020” which forced service providers to hand over data and personal information, as requested by the country’s Inter-Services Intelligence. Not to mention how the broad definitions of crimes and the powers granted to prosecute “cybercrime” could be misused to target activists, journalists, and dissidents under the guise of national security. And as China seeks to expand the definition of cybercrime to include “fake news” online, the line between enforcing public order and curbing free speech could grow increasingly blurred.

“Illegal access” (Article 7) could also be interpreted to include the activities of journalists accessing information for public interest reporting. Late last year, Delhi police carried out raids on the office of NewsClick, a news outlet that is highly critical of Narendra Modi. The houses of almost 50 journalists, activists and comedians in India were also raided under the ‘anti-terrorism’ law that allows charges for “anti-national activities”. In the Philippines, a similar ‘anti-terrorism’ law has been used to surveil environmental activists. At least 281 environmental defenders were killed in the Philippines between 2012 and 2022. In Jordan, the situation is particularly severe for LGBTI individuals, where the cybercrime law prohibits content that “promote, instigate, aid, or incite immorality.” The Jordanian law also bans the use of Virtual Private Networks (VPNs), proxies, and Tor. This prohibition forces many LGBT individuals to choose between maintaining the security of their identity and freely expressing their opinions online.

Second, the treaty’s provisions for international cooperation (specifically Article 37) do not sufficiently safeguard against the extradition or transfer of individuals to countries where they might face political persecution. Paragraph 15 mentions ‘substantial grounds’, but the term is never clearly defined. Again, this lack of clarity will lead to individuals being extradited for politically motivated reasons, despite Article 3 of the Convention against Torture (CAT) prohibiting extradition to countries where individuals would face serious risks to their life or freedom.

The treaty also paves the way for states to create a digital autocracy in which governments can compel service providers to preserve data and hand it over to authorities without stringent oversight. A treaty that facilitates international cooperation on data sharing and broadens the scope for surveillance can also become a tool for governments to crack down on minorities. The ability to access and preserve electronic data without robust safeguards (Article 41 and Article 42) can be exploited to target marginalised communities, such as ethnic, religious, and LGBTQ+ groups. In countries like Russia or Uganda, where the state has a history of using legal frameworks to persecute LGBTI individuals, the ability to monitor, intercept, and collect digital communications under the pretense of preventing “cybercrime” could lead to people being identified and prosecuted based on their sexual orientation or gender identity. But to these countries, these people will just be collateral damage.

Paragraphs 14 and 9 of the treaty’s Article 37 present a contradiction. While paragraph 14 guarantees fair treatment and the enjoyment of rights and guarantees provided by the domestic law of the state party, paragraph 9 encourages states to simplify evidentiary requirements to expedite extradition procedures. Simplifying evidence standards compromises the accuracy and fairness of the proceedings, which in turn erodes due process rights. The recently concluded case of Julian Assange exemplifies the issues within Article 37, and the case of Ola Bini illustrates the same risks. Bini was detained at Quito’s Mariscal Sucre International Airport as he was preparing to travel to Japan for a vacation. The arrest occurred without clear or sufficient evidence, and Bini was held in custody without formal charges. While the treaty mentions respecting human rights and fundamental freedoms, it offers no concrete procedural safeguards against the misuse of the powers it grants (uhm, can the UN people please refer to Article 14 of the ICCPR, a UN document?).

While the treaty mentions the words “human rights” seven times, it lacks concrete procedural safeguards against the misuse of the powers it grants to state parties, reducing the invocation of human rights to nothing more than hollow rhetoric. The provisions for search, seizure, and interception of data, as defined in Article 28, do not clearly require judicial oversight. This exposes the treaty as a clear conduit for unchecked governmental overreach and egregious violations of due process rights. Take Indonesia as an example. West Papuan human rights defenders often face heightened surveillance and frequent seizures of their communication devices such as phones, laptops, and hard drives. This practice not only undermines due process but also poses a direct threat to civil liberties, operating in a legal gray area that facilitates abuse.

A specific provision within Article 28(3)(d) grants the state the alarming authority to “render inaccessible or remove” data within accessed information and communication systems. This clause is not just about access; it grants the state the power to alter or delete data. This has severe implications for information integrity and individual rights and sets a precedent for data manipulation without stringent oversight mechanisms in place. Such actions could irreversibly affect the integrity and availability of data and could be misused to alter evidence. Article 28 is deeply troubling. Clause (4), in particular, compels individuals with knowledge of the system to assist in state investigations. If that coercion includes threats of legal penalties, including imprisonment for non-compliance, it violates the right to freedom of thought, which is an absolute human right. Article 28, as it currently stands, is a serious assault on fundamental human rights and an abomination to these principles. It contravenes long-standing protections enshrined in the ICCPR, including Articles 17, 19, 14 and 9.

Given that “cybercrime” can be politically charged, individuals could be unjustly targeted for online activity that is critical of governments. Saudi Arabia’s sweeping Anti-Cyber Crime and Counter-Terrorism laws have been used to harshly penalise peaceful protesters, such as Nourah al-Qahtani, who was sentenced to 45 years for her social media posts. These laws, enacted in 2007 and 2014, are intentionally vague, allowing the government to arrest individuals under broadly defined charges like “tearing the social fabric” or “violating public order.”

Third, the treaty mentions that it “acknowledg[es] the right to protection against arbitrary or unlawful interference with one’s privacy, and the importance of protecting personal data”. Sure, then we don’t have a problem anymore, right?

No, Padme. You are wrong. The key phrase here is “unlawful interference” with privacy. With the recent anti-encryption campaigns and legislation sweeping across Europe and the Five Eyes, we know the government will find ways to make such interference lawful. In fact, Articles 27 and 28 implicitly discourage encryption practices by facilitating access to stored data. The weakening of encryption and anonymity endangers human rights defenders, journalists, and minorities. As we have seen in the past, vague laws and treaties only mean one thing: governments can do whatever they want. The only difference with this one is that they will have a treaty that protects them. They will justify intrusive surveillance measures under the pretext of national security or fighting cybercrime, at the expense of a person’s privacy and freedoms, without accountability. The treaty eerily follows the UK’s Snooper’s Charter. The Charter, passed in 2016, allows authorities to retain emails and electronic communications indiscriminately and requires private companies to store this data. The Snooper’s Charter is now getting a facelift to expand the government’s access to large personal datasets, potentially allowing broader and more flexible use of personal data.

So what exactly is the point of this blog? What is even the point of criticising this treaty when governments all over the world are already doing these things? The point is simple. Just because some governments are openly spying on and jailing their journalists and human rights activists doesn’t mean we need a treaty to legitimise it. The UN was established to serve as a global platform where checks and balances can be applied not just within countries but across international borders. For millions of people around the world who are victims of authoritarian regimes, the UN is the only platform where they can advocate for their rights and air their grievances. And now, they are trying to take that away. The UN Cybercrime Treaty, under the guise of promoting international cooperation against cybercrime, is fraught with potential for abuse and overreach. It infringes on privacy rights, free speech, and the freedoms of activists, journalists, and minority groups.

But more than anything else, this Russia-backed treaty aims to normalise digital autocracy by channeling these efforts through the UN to create an illusion of universality and necessity. By providing tools and justifications for digital monitoring and data collection, the treaty can aid in the establishment of a digital autocracy, backed by the United Nations. The notion of the internet as a free and open space is already fading, but with this treaty, we are paving the way for a future where every digital action is monitored and controlled by the state.

I spent the last two years of my life becoming increasingly vocal about my conviction that our current understanding of “AI” is a construct of PR and marketing strategies employed by major tech companies to convince us that the technology we have now is both ‘artificial’ and ‘intelligent’. If you pay close attention to how the bourgeois media has framed AI, you would see how desperate they are to sell us the idea that LLMs are sentient, all-knowing entities which can either solve all of humanity’s problems or be the beginning of our extinction. Both views are highly dangerous because they are so fixated on the future that they forget about the ‘now’.

AI evangelists argue for techno-utopianism. They sell us the idea that a little bit of automation here and there would save humanity from all the troubles that we are facing, from inequality to climate change. On the other hand, AI doomsayers love the SkyNet myth because it sells. Our species is obsessed with tales of the world ending as a way to confront our own mortality and the impermanence of human civilisation.

I remember I was in high school when the news broke that 2012 would be the end of the world according to the Mayan calendar. I would be lying if I said that I did not believe it. As a highly religious teenager, I was gullible. The fact that even my pastor mentioned it in one of his sermons, together with some passages from the book of Revelation, made me contemplate my life every day to the point that I could not sleep. The 2012 phenomenon came and went. It left behind a trail of relieved sighs and perhaps some embarrassed chuckles. But the lesson lingered. It showed how easily we can be swayed by narratives that resonate with our pre-existing beliefs and fears, regardless of their factual basis.

The doomsday narratives on AI often play on the same psychological and emotional chords as the 2012 prophecy. They tap into our fears and anxieties, obscuring rational discourse. The doomsday narrative is founded on the idea that the technology we have now is sentient and can therefore make autonomous decisions. But that is far from the truth. All it does is absolve the creators and company leaders of accountability and obfuscate the genuine issues at hand: the widening gap between the rich and the poor, algorithmic biases, privacy invasion, rising inequality and, most crucial yet often overlooked, LLMs’ energy expenditure. The use of ‘artificial’ to describe the tech is just a tool for evading accountability. It’s a deflection tactic: if shit hits the fan, they can just say, “Well, the AI did it.”
