The Technological Dystopia

A few weeks back, a self-driving car killed a pedestrian in Arizona, and this sparked a heated debate about ethics in technology, more specifically in AI. I hate the term "artificial intelligence"; it is contrived and clearly a misnomer. There is nothing intelligent about pre-programmed software executing the commands a human being issues. It is a technological servant, if anything. The idea that computers and machines will one day surpass us in cunning and intelligence boggles my mind. It is redolent of the adage "God is dead and we killed him", as it points to us humans being rendered obsolete by the robots we bestowed upon the world.

It is a highly nuanced debate, because for one thing, we need AI. At the rate things are going, we will have to manage our time more efficiently and outsource whatever we can automate just to keep up with the growth trend. If the Anthropocene epoch has taught us anything, it is that economic growth is considered the apotheosis of our species. And to maintain that trend, we will have to devote our time to producing more and more material things, keeping up with this patina of growth.

But with pre-programmed cars and the like, machinery that has the potential to kill or injure human beings, comes a great responsibility. How would we find the perfect algorithm to prevent any such tragedy from occurring? How will scientists grapple with this philosophical quandary, and what should we expect when automation becomes the norm in our daily lives? This is an ethical cul-de-sac that will rely mostly on game-theory projections, without an iota of introspection, because automation is also synonymous with dehumanization.

We cannot dovetail AI with philosophy as yet, because we failed to address the shortcomings of humans when the technological evolution was nearly upon us. There was no attempt to reconcile human nature with the daunting tools that will define our future and replace our productivity. There was no semblance of an effort to adopt moral guidelines that would strengthen our species on the basis of our humanity, because once we have designed tools that can follow the trail of our thinking, we have essentially ceded half of our existence to them.

We are finally beginning to see this technological dystopia shaping the world around us. Massive technology corporations routinely interfere in the elections of the most democratic countries in the world, with little to no sanction imposed on their activities, for one thing because they are bereft of any moral responsibility, and for another because equity is not a prerequisite for growth. Social media companies sell our private information to the highest bidder and watch their stocks skyrocket as they devise newer ways to exploit it. Internet media companies callously shove aside any competition and exert a monopolistic influence on the market, and no one bats an eye. After all, everything is fair game in the race to economic growth.

It saddens me to see people so obviously hindered by their own primal instincts and yet doing nothing about it, as the world is about to enter an era of massive technological upheaval. If we do not address our own lapses as a species, we will only make things worse, and that would surely lead us into a fully fledged technological dystopia.

The only way the term "artificial intelligence" could make any sense would be if we persevered in replacing our primal, barbaric instincts with an artificial, peaceable way of being.

Lukshana Gopaul

I'm a writer and content creator for PLAG. I'm well-versed in politics, culture, music and fashion.
