This is not science fiction, but a pressing reality. The greater the presence and responsibility of AIs, as more paths of our evolution come to be determined by their cooperation (banking, autonomous cars, disease control), the more likely it becomes that an AI could destroy us. To date, however, our technological advances have given us opportunities to optimize time and resources. We have machines as advanced as the Predator Orion 9000, a gaming system with an Intel Core i9 Extreme Edition, GTX 1080 Ti graphics in SLI and liquid cooling. In other words, until now the only predictions of catastrophe and destruction we have contemplated are the ones in our favorite video games. But what does the future suggest?
A little history
AI is always spoken of in abstract terms. In fact, it is common to find Ramón Llull cited as a pioneer: he devoted a large part of his life to designing, building and writing about his Ars Magna Generalis (1308), a logical machine that produced different results by combining letters and geometric shapes to which he assigned different theological meanings.
Of course, there was no AI in it, no machine thinking, understanding or reasoning, just a man turning cranks like the rudder of a ship. But it was the substrate on which Gottfried Leibniz (1666) and other mathematicians built some of their postulates. Then came Jonathan Swift, author of 'Gulliver's Travels', who satirized both.
The starting point for defining an AI in theoretical terms, delimiting its capabilities and virtues, did not arrive until 1949. That year we find Giant Brains, a masterpiece on mimesis and computer thinking, and the article The Organization of Behavior, which proposes a theory of learning that emulates neural networks and that remains valid today.
That same year, the engineer Claude Shannon created a chess simulator, the first video game in strict terms. A few months later, Alan Turing finally published a work he had been ruminating on for quite some time, Computing Machinery and Intelligence. In it he proposes the first test to check whether, through imitation, machines can pass for a human being. Like the bots on your smartphone!
Turing drew many detractors, but the germinal idea took hold in the collective imagination. It held until a few years ago, in 2014, when a bot named Eugene passed the Turing Test thanks to a flaw in the evaluation process. Nowadays machines are simply too good at imitating behavior. If a machine is capable of defeating every human being at Go, you can set another AI the task of learning to crush that machine.
The old test has been relegated in favor of other models, such as solving Winograd schemas. And the new artificial intelligences no longer bother with such games: they can sort cucumbers by class or help diagnose diseases such as cancer or schizophrenia.
It all depends on the data
The new AIs have been propelled by data mining and harvesting. We have billions of data points everywhere. Calling this century the information age is no euphemism: every year we accumulate more information than we collected throughout the rest of human history.
These data are the fuel with which we feed the AIs. So-called deep learning is simply a form of AI based on learning deeply from experience: we program some basic rules, set out some ethical guidelines, and the rest is endless combination. Afterwards we cull the results, take samples, and put that database management to some useful purpose.
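The learn-from-experience loop described above can be sketched in a few lines. This is a deliberately minimal illustration, not any real deep-learning system: the toy task (fitting a line to example pairs), the learning rate and the variable names are all illustrative assumptions.

```python
# Minimal sketch of "learning from experience": adjust a single rule
# (the weight w in y = w * x) by gradient descent over example data.
# Toy task and parameters are illustrative only.

examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, expected output)

w = 0.0              # the "rule" the model starts without
learning_rate = 0.05

for epoch in range(200):                 # repeated exposure to the data
    for x, y in examples:
        error = w * x - y                # how wrong the current rule is
        w -= learning_rate * error * x   # nudge the rule to reduce error

print(round(w, 2))  # converges toward 2.0, the rule hidden in the data
```

Nothing here was hand-coded about the relationship between input and output; the program extracts the rule from the examples alone, which is the essential point of the paragraph.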
This data ingestion has yielded very suggestive results: if the world's main cities invested in smarter transport systems, we would save around 500 billion dollars a year.
But this is a gradual process. We experienced it on the stock exchange with high-frequency trading: from one day to the next, machines became the most competitive system compared with the traditional trading floors. They beat arbitrage and displaced human intermediaries. We are also feeling it, like a roar, in traditional logistics models and in automated production chains.
But where some see debates about robots stealing jobs from ordinary citizens, Elon Musk sees chaos and destruction if a very advanced AI gets out of our hands.
The argument does not rest on invoking a Skynet, of course. There will come a time when the absolute efficiency of autonomous cars will nullify our ability to drive: we will be a danger at the wheel. Joining a road without an automated system, one with real-time readings of everything happening on it, will be like trying to cross a highway blindfolded.
There is a solution for (almost) everything
Of course, that control will not destroy the human race. For that there are action protocols for critical states, and simulations for almost any scenario. They are all containment protocols, alternative routes and pure analogy (like the solution to a broken elevator: take the stairs), but they are the only safe solutions.
And are advanced AIs "a risk to the existence of our civilization," as Musk warned? Of course, like any form of progress, like the fire that could burn down the cave of its creator.
His fatalistic sentence, "until people see robots killing people on the street, they will not understand the dangers of artificial intelligence," is not surprising: accidents of this caliber already happen among us every day. The core of the debate lies in containment plans, the correct "labeling" of each situation, and the so-called "reset".
Luckily, these intelligences are, to date, serving our purposes rather than halting our technological advance: while the WHO draws up graphs to estimate the possible impact of a Disease X, an epidemic of extreme contagiousness, Google applies huge databases and machine learning to simulate possible scenarios in a matter of hours and find the right medication, the one with the fewest side effects and the highest efficiency ratio.
As in security, the protocols will have to be updated as needs change. Security is reactive. So is knowledge.