Our addiction to end-of-world stories
I have spent the last two years becoming increasingly vocal about my conviction that our current understanding of “AI” is a construct of the PR and marketing strategies of major tech companies, designed to convince us that the technology we have now is both ‘artificial’ and ‘intelligent’. If you pay close attention to how the bourgeois media has framed AI, you will see how desperate it is to sell us the idea that LLMs are sentient, all-knowing entities that will either solve all of humanity’s problems or mark the beginning of our extinction. Both views are dangerous because they are so preoccupied with the future that they forget about the ‘now’.
AI evangelists argue for techno-utopianism. They sell us the idea that a little automation here and there will save humanity from every trouble we face, from inequality to climate change. AI doomsayers, on the other hand, love the SkyNet myth because it sells. Our species is obsessed with tales of the world ending as a way of confronting our own mortality and the impermanence of human civilisation.
I remember being in high school when the news spread that 2012 would be the end of the world according to the Mayan calendar. I would be lying if I said I did not believe it. As a highly religious teenager, I was gullible. The fact that even my pastor mentioned it in one of his sermons, together with passages from the book of Revelation, made me contemplate my life every day to the point that I could not sleep. The 2012 phenomenon came and went. It left behind a trail of relieved sighs and perhaps some embarrassed chuckles. But the lesson lingered: it showed how easily we can be swayed by narratives that resonate with our pre-existing beliefs and fears, regardless of their factual basis.
The doomsday narratives around AI play on the same psychological and emotional chords as the 2012 prophecy. They tap into our fears and anxieties, obscuring rational discourse. The AI doomsday narrative rests on the idea that the technology we have now is sentient and can therefore make autonomous decisions. That is far from the truth. All the narrative does is absolve the creators and company leaders of accountability and obfuscate the genuine issues at hand: the widening inequality between rich and poor, algorithmic bias, privacy invasion, and, the most crucial yet most often overlooked, the energy expenditure of LLMs. Calling the technology 'artificial' is itself a tool for evading accountability. It's a deflection tactic: if shit hits the fan, they can just say, "Well, the AI did it."