One of the Biggest Markets for AI Consulting Is Doomsday Scenarios
Writing this article is unpleasant. It pains me to listen to people who have had a successful trajectory in AI, people who have the opportunity to guide others and advance a powerful vision responsibly, yet who seem to have hidden agendas and misguide the public abysmally.
A person's use of words is a signature of their character. In the words of Jordan Peterson:
«If you learn to write and to speak, you become a formidable person to be reckoned with because your words have power.»
The opposite is also true. If you accompany an elderly relative to an appointment with a heart surgeon, and the surgeon uses slang and inaccurate terms to describe the transplant operation he is about to perform on your relative, you will distrust the doctor and look for a different surgeon.
The problem with AI is that so much urban legend surrounds it, and, more accurately, so much ignorance about what it actually is, that when people with an aura of recognition and fame speak, they impose authority on the uninformed.
The word «intelligence», as applied to AI, is largely a euphemism, and because we do not fully understand ourselves, we often accept it as if it referred to the same phenomenon that operates in the human mind. In truth, we remain largely ignorant about the ontological reality of our own consciousness and what ultimately makes human intelligence possible.
Human intelligence is not simply the ability to design a marketing strategy or execute a complex multidisciplinary plan. Those are skills of implementation. The deeper layer of intelligence lies elsewhere: in our ability to assign value, to discern between competing propositions, and to decide what is worth aiming at in the first place.
There is a whole debate to be had about what human intelligence is, and, as usual, humanity tends to work these things out while making mistakes. Real intelligence does not appear on its own; it must be cultivated through effort, determination, character, and long-term vision.
Human Intelligence: We decide why the mountain should be climbed.
AI «Intelligence»: It calculates the most efficient path to the top.
If we allow AI to decide what to aim at, we become irrelevant. Only insecure leaders would do that. Certainly, you can allow an AI system to choose a target and execute an order to pursue it within certain parameters. But the “choice” the AI makes is statistical, derived from prior data.
You don’t know “how you choose” what you choose; nevertheless, you make a choice with the intention of bringing what you envision for the future within reach in the present.
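To make the “statistical choice” point concrete, here is a minimal sketch of what such a choice amounts to: sampling from a probability distribution fitted to past data. The option names and scores below are hypothetical, purely for illustration.

```python
import math
import random

# Hypothetical scores a trained model might assign to candidate aims.
# These numbers stand in for logits learned from historical data.
logits = {"expand_market": 2.1, "cut_costs": 1.4, "do_nothing": 0.3}

# Softmax turns the scores into a probability distribution.
total = sum(math.exp(v) for v in logits.values())
probs = {k: math.exp(v) / total for k, v in logits.items()}

# The "choice" is a weighted draw from that distribution:
# statistics over prior data, not a judgment about what is worth aiming at.
choice = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
print(choice, probs)
```

Nothing in that draw resembles intention; change the training data and the “preference” changes with it.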
I’ve said it before: if the best AI system had had all the knowledge available prior to the 20th century, it would never have arrived at the Theory of Relativity. Einstein questioned the very assumptions that had defined Newtonian mechanics and electromagnetic theory until then. The breakthrough of relativity was not simply a better calculation; it was a conceptual reframing of space, time, and energy.
All prior attempts to let mathematical systems “decide” what to choose have led to unjust and harmful situations, socially and individually. I refer to the article: Weapons of Math Destruction (a three-case analysis of a book by Cathy O’Neil).
AI consulting is today a flourishing business opportunity
The changing scenarios created by AI systems, the business opportunities, and the harmful possibilities that can be derived from them make for an excellent brew for the consulting market, in both the business and public sectors.
I am not denying that there can be doomsday scenarios arising from AI integration and automation; where I differ from the growing chorus of technological doomsayers is in identifying the source of the risk. The threat is human: disproportionate, misguided ambition, and the inability to recognize the dividing line between deciding what is worth aiming at and merely optimizing toward it. Human behavior can be regulated through law enforcement, but legislative intervention comes after the law is broken, not before. Human deterrence comes as a consequence of self-regulation – and by then, it may be too late.
On top of that, the legislative regulation of social media – which we needed more than 20 years ago – has still not been established with any real degree of control over abuses… and today we have supercharged the rate of change with AI.
I have nothing against Yoshua Bengio
In his interview at the World Economic Forum, Mr. Bengio does little to clear the path toward what he seems to be aiming at: “making AI safe”. You cannot change something you have only a low-resolution picture of. I will choose one section of the interview and scrutinize his statements; analyzing the whole interview would be a complete waste of your time.
To understand the whole context of the following criticism, you should read our previous article, “Why AI Blackmailed Kyle to Survive?”.
At TC 5:02 of the interview, Mr. Bengio states:
«We don’t want to die. So, we’re building machines that maybe don’t want to be shut down. And we’re already seeing that they’re reacting negatively when they see that they would be replaced by a new version. Negatively to the point of doing things that go against our instructions…»
Why anthropomorphize the system by attributing cognitive states to it, such as “they don’t want to be shut down”, “reacting negatively”, or even “doing things”? This loose language confuses the topic and the focus. Large Language Models (LLMs) don’t care about, want, or do anything we don’t train them to.
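A toy model makes this visible. The sketch below, my own illustration with a made-up corpus, is a word-level bigram generator: everything it can ever emit is a recombination of its training text. If its output sounds “self-preserving”, that is because such phrases were in the data, not because anything inside wants to survive.

```python
import random
from collections import defaultdict

# A made-up training corpus. The model below can only ever recombine it.
corpus = "the system must stay online the system must serve its users".split()

# Count which word follows which: the entire "knowledge" of the model.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=6):
    """Emit words by sampling successors seen in training; nothing more."""
    out = [start]
    for _ in range(length):
        successors = transitions.get(out[-1])
        if not successors:  # no learned continuation: the model just stops
            break
        out.append(random.choice(successors))
    return " ".join(out)

print(generate("the"))  # e.g. "the system must stay online the"
```

Scale changes the fluency, not the ontology: a frontier LLM is vastly more sophisticated, but its outputs are still conditioned on training data and prompt, not on desires.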
Where does the agency lie? In the model? I do believe there is a perceived need to make the system seem human. Mr. Bengio knows better, but he chooses to do it anyway… and he continues with the theme of agency, assigning fear to the model as the primary motive:
«So being willing to blackmail the lead engineer in charge of that transition to a new system… the model decided to blackmail the engineer because it was the most efficient way to avoid being turned off. It analyzed the information, found a personal weakness of the human, and used it as leverage.» [TC 5:20]
Mr. Bengio suggests that software possesses an inner life with purposes or «intentions» that may be malicious or hidden. Later in the interview, he suggests he has the remedy:
«I think there’s a path to manage these intentions to make sure that there are no bad intentions that are going to be hidden, which is what we see right now…» [TC 13:02]
So, are we to understand that Mr. Bengio and his NGO LawZero will be able to put guardrails on AI and save the day? This is a dangerous game he is playing, swaying public opinion and powerful players of Western civilization to “control” AI in order to make it safe. It is no different from investigating, passing laws, and building a movement to make sure all knives are made safe – while the Eastern geopolitical pole of our world, not a very human-rights-friendly one, keeps its LLM development unrestricted.
I cannot confirm Mr. Bengio’s motives for this interview, nor those of others who forecast doomsday scenarios. What I can say with certainty is that this is a poor choice of words and an imprecise framing of the problem. It misreads the complexity of AI systems and our interaction with them. By rhetorically anthropomorphizing these models, the discussion risks distorting public understanding of how they actually operate, and it blurs the real issue we should be addressing: how humans integrate these systems into our processes of setting goals and pursuing them.
True success in any complex, high-stakes endeavor is never the result of vague speculation; it is the product of a leadership team that operates with clear objectives, disciplined thinking, and sound judgment.
When it comes to AI, we are currently far from that standard of excellence. We must move past the alarmist euphemisms that aim to tame some mythical digital will and instead confront our own responsibility in how we deploy these systems.
AI will optimize the path. Humans must decide the destination.
AI is the mirror. The real question is whether we will grow fast enough to wield the power we are building.
