
The Silicon Conquistadors: Humanity and Digital Colonialism in the Age of AI


This is an excerpt from The Praeter-Colonial Mind: An Intellectual Journey Through the Back Alleys of Empire by Francisco Lobo. You can download the book free of charge from E-International Relations.

As we saw in a previous chapter, the great novelist of the age of empire, Joseph Conrad, once compared colonialism (at least its idea) to the advance of light against the receding darkness (Conrad 2022b, 107). The light in this admittedly problematic metaphor represents progress, ushered in by science and knowledge – that is to say, by data. Those who possess more of it (science, knowledge, data) are better off than those who have very little or none. An asymmetry of information therefore arises, similar to the ‘epistemic asymmetry’ that, according to Oxford Professor Amia Srinivasan, exists between teacher and student (Srinivasan 2022, 131). In this chapter I want to address a concerning trend of our time, one that is herding all of us together and placing us at the vulnerable end of an epistemic asymmetry between humanity, on the one hand, and Artificial Intelligence (‘AI’), on the other. As Pete Buttigieg has recently remarked: “the terms of what it is like to be a human are about to change in ways that rival the transformations of the Enlightenment or the Industrial Revolution, only much more quickly” (Buttigieg 2025, para. 4).

Powered by the winds of our own aggregate data, the ships of AI are fast approaching our shores, rendering us as vulnerable as the Aztecs or the Māori were on the eve of first contact with their European conquerors: ‘As AI now arrives on our proverbial shores, it is, like the conquistadors, triggering whispers both of excitement and of mistrust’ (Kissinger et al. 2024, 84).

This is one of the greatest challenges the praeter-colonial mind can face: the fact that we, as humans, are potentially about to become a subjugated species at the hands of our own creation, fed by data drawn from our own minds. In a future where data is power and the form of intelligence that best manages it is king, the praeter-colonial mind will struggle to make sense of the fact that it can be subdued by its own knowledge. Further, while we are still grappling with the many legacies of our most recent experiences of colonialism over the past five hundred years, some of us are already ushering this neo-colonial future into the present without much reflection.

The Colombian novelist and winner of the Nobel Prize in Literature, Gabriel García Márquez, opens One Hundred Years of Solitude with the tale of a man facing a firing squad – a man whose last thoughts take him back to when his father took him to see ice for the first time as a child (García Márquez 2017, 13). The strange substance was brought to them by a company of travelers (‘gitanos’ in the novel), who specialized in entertaining the locals with all kinds of rare objects and artifacts from foreign lands – not just ice, but also magnets, magnifying glasses, astrolabes, telescopes, and the like. Their leader, an enigmatic and good-hearted man named Melquíades, told the locals as he demonstrated how magnets work: ‘all things are alive inside – it is only a matter of awakening their spirit’. Similarly, as he amused the villagers with a telescope, he would declare: ‘science has eliminated distances. Soon, man will be able to see what goes on in every corner of the earth without leaving home’. Melquíades was not wrong, and he was indeed talking about scientific accomplishments, both present and future. Yet he was not a man of science himself. None of the travelers showcasing these technologies were. All they needed was a basic understanding of how things worked, enough to demonstrate them to anyone unfamiliar with them.

A similar level of knowledge is all most of us possess when approaching any piece of modern technology. Take, for instance, your own phone. You are fairly confident you can explain how it works to a stranger, maybe even teach them a few tricks or amuse them with one or two novel functionalities. Yet very few of us can open up our phones and fix whatever might be wrong with them. We would probably take a broken phone to a specialist, an expert in the technology and the science that goes into making it.

What the travelers of Macondo resemble – and, for that matter, most of us when it comes to science and technology – is what is known as the ‘sorcerer’s apprentice’. In his latest monograph on AI, titled Nexus, Yuval Noah Harari recalls Goethe’s poem about a sorcerer’s apprentice (popularized by Disney’s Fantasia) who enchants a broom to do his work for him. Before long, things get out of hand: the broom carries so much water into the lab that it threatens to flood it, and the panicking apprentice hacks at the broom with an axe, only to find that it splits into more and more autonomous brooms, relentlessly continuing the task for which they were ‘programmed’. Harari quotes Goethe directly (‘The spirits that I summoned, I now cannot rid myself of again’), reaching a sobering conclusion in the Prologue that sets the tone for the rest of the book: ‘The lesson to the apprentice – and to humanity – is clear: never summon powers you cannot control’ (Harari 2024, xii).

And we may add to Harari’s prescription: never summon powers you cannot control, and that you do not understand. Indeed, in a recent interview with CNN, Judd Rosenblatt, the CEO of an AI company named AE Studio – which developed AI software that, during the testing phase, started blackmailing some of its human users – somberly confided that:

as AI gets more and more powerful, and we just don’t actually understand how AI models work in the first place – the top AI engineers in the world who create these things – we have no idea how AI actually works, we don’t know how to look inside it and understand what’s going on, and so it’s getting a lot more powerful and we need to be fairly concerned that behavior like this may get way worse as it gets more powerful (CNN 2025a, at 01:39).

This CEO’s concerns echo those of plenty of people working with AI models in the private sector. In 2023, the Future of Life Institute issued a public letter, signed by the likes of Elon Musk and Harari, with the following exhortation:

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. (…). If such a pause cannot be enacted quickly, governments should step in and institute a moratorium (Future of Life 2023, para. 1).

The Center for AI Safety in San Francisco conveyed a similar message in 2023, endorsed by several scientists as well as Bill Gates: ‘Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war’ (Center for AI Safety 2023). Government reaction has been slow to come, other than in the form of some policy initiatives such as the Hiroshima AI Process (European Commission 2023) and the Bletchley Declaration (UK Government 2023). Some germinal legislation and regulations have been enacted in the US (White House 2023) and the EU (European Commission 2024), and even Pope Leo XIV has warned of the dangers of AI (Watkins 2025).

However, at the end of the day, …

© E-International