Without humans, artificial intelligence is dead


Since artificial intelligence (AI) became popular ten years ago, everyone has been debating its place relative to that of humans. The questions range from the simple automation of certain tasks to catastrophic scenarios of the alienation of the human race. Since the arrival of ChatGPT and generative AI in December 2022, this question has become more pressing, and also more confusing. It is hard to navigate between the claims of tech personalities, who are often preoccupied with purely technological issues, and the words of self-proclaimed experts who state the obvious without any in-depth knowledge.

To understand this phenomenon, we must start from a basic premise: all artificial intelligence needs data in order to exist and to learn. This is why, although AI has existed since the 1950s, it only really took off in the 2000s, when computing and storage power became sufficient to turn our mobile phones into microcomputers, allowing us to generate digital information and thus feed this AI.

Since then, we have heard a lot about ‘self-learning machines’, and it is this ambiguity that the doomsayers play on. They claim that AI has become so intelligent that it can create and live without us. This is not true, far from it: yes, AI creates, but in a closed universe. It evolves in a world limited by the information we give it. Take, for example, an AI that draws but is given only black and white to learn from. It will certainly create incredibly accurate greyscale images that no human could have produced. But it will never invent red, blue or yellow, because it has no idea they exist: it will build a monochrome universe, because that is where its world ends.
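To make this “closed universe” concrete, here is a minimal sketch of my own (a toy illustration, not the code of any real drawing AI): a model that learns a palette from its training images and can only ever recombine the shades it has seen. Trained on greyscale pixels, it will never produce a colour.

```python
import random

def train_palette(training_images):
    """'Train' the toy model: collect every pixel value ever seen.
    The model's world is exactly this set -- nothing more."""
    palette = set()
    for image in training_images:
        for pixel in image:
            palette.add(pixel)
    return sorted(palette)

def generate(palette, width, height):
    """Sample a new image by drawing pixels from the learned palette.
    It can combine shades in new ways, but never invent a new one."""
    return [random.choice(palette) for _ in range(width * height)]

# Greyscale training set: pixels are (r, g, b) triples with r == g == b.
greyscale_images = [[(v, v, v) for v in range(0, 256, 16)] for _ in range(10)]

palette = train_palette(greyscale_images)
new_image = generate(palette, 4, 4)

# Every generated pixel is still grey: red, blue and yellow simply
# do not exist in this model's universe.
assert all(r == g == b for (r, g, b) in new_image)
```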

“I’m afraid I’ll be turned off”

The LaMDA chatbot

For ChatGPT and other generative AIs, it is exactly the same, except that their “universe of knowledge” is much larger, which creates the impression that the AI has become almost autonomous. You may have heard the absurd story last summer about a Google engineer who believed he had detected consciousness in the company's LaMDA AI by conversing with it. What he took for consciousness was above all a reflection of the staggering amount of data ingested by this engine.

To illustrate my point, let us imagine a fictional world in which humans have delegated all creativity and writing to AI, eliminating the jobs of journalists, authors, designers and composers. In such a scenario, after a while, the AI would grow impoverished and die, because we would have killed the source of its learning. Why? Because AI has no grasp of semantics: everything we call “text” or “image” is, for the engine, just a mathematical vector representation, and its goal is to optimise these vectors. Without new human information and inspiration, AI would end up simplifying our world to the extreme. And since it would also be its own source, it would cannibalise its own learning on less and less diverse content, producing a totally bland standardisation of information. Ultimately, a task such as writing a detective novel would always converge towards the same ending, with the same structure and the same number of words.
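To give a hedged illustration of this self-cannibalisation (a toy simulation written for this article, not a measurement of any real system), imagine each “text” reduced to a single number, and a model retrained, generation after generation, only on what the previous model produced. The assumed shrink factor of 0.9 stands in for a generator favouring its most typical outputs; the diversity of the corpus then collapses.

```python
import random
import statistics

def fit(corpus):
    """'Train' a toy model: estimate the mean and spread of the corpus."""
    return statistics.mean(corpus), statistics.stdev(corpus)

def generate(mean, stdev, n):
    """Sample n new 'texts' from the fitted model.
    The 0.9 factor is an assumption standing in for a generator that
    slightly favours its most typical outputs."""
    return [random.gauss(mean, stdev * 0.9) for _ in range(n)]

random.seed(0)

# Generation 0: diverse, human-written 'texts' (represented as numbers).
corpus = [random.gauss(0, 10) for _ in range(1000)]

for generation in range(1, 11):
    mean, stdev = fit(corpus)
    corpus = generate(mean, stdev, 1000)   # retrained only on its own output
    print(f"generation {generation:2d}: spread = {stdev:.2f}")

# The printed spread shrinks steadily: with no fresh human input,
# every generation's output is blander and more uniform than the last.
```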

This may already be the case without our realising it! I used ChatGPT to re-read some texts I had written, to see how it could help me. It may be a coincidence, but the concluding paragraph of all ten texts began with the same words: “In sum, (…)”.

An artificial intelligence engine continually needs new human-generated information.

The phantom threat

So why be so afraid of it? Admittedly, this technology outstrips us and impresses us, pushing humans to take refuge in what they already know. But once we understand its limits, it is above all a fantastic opportunity to stimulate our own evolution.

Consider the time wasted on low-value, even boring tasks: turning that time and energy into exploration is the real challenge of AI. I am convinced that it will be an asset, not a replacement: it will evolve with us. Besides, how could we imagine, on one side, an AI developing beyond human beings and, on the other, a human race stagnating in its current state for eternity? On the contrary, AI will elevate our capacities and make us augmented humans.

I am convinced that humans have much to gain from AI and that AI cannot exist without us. The risks lie elsewhere, above all in the manipulation of AI. Technically, it would be quite possible for large corporations and/or governments to deliberately feed an AI incomplete and biased data in order to steer us towards a single, totally false vision of our world. Incidentally, many experts recently called for a six-month pause in the development of giant AI systems, until their human and societal impact is better understood. In the long run, however, the proliferation of AIs built on different algorithmic models and data should be encouraged: the more diversity we have, the closer we can get to a comprehensive representation of the world and the more balanced it will be. So let us encourage the creation and competitiveness of sovereign AIs that do not depend solely on the American GAFAM.