Scientific and legal data
The latest report[1] on “Artificial Intelligence” (AI) from the Commission nationale de l’informatique et des libertés (CNIL), published in December 2017, underlines that AI is the “great myth of our time” and that its definition remains very imprecise. When we talk about AI, we think of targeted advertising on the Internet, connected objects and the Internet of Things, the processing of massive and heterogeneous data gathered from surveys, digital machines and humanoid robots capable of learning and evolving, autonomous vehicles, brain-machine interfaces and algorithms.
Algorithms are, in a way, the “skeletons of our computer tools”. They are systems of instructions that allow a digital machine to produce results from the data it is given. They are at work when we use an Internet search engine, when a medical diagnosis is proposed on the basis of statistical data, but also when a car route is chosen or when information on social networks is selected according to the tastes of our networks of friends. Algorithms belong to their designers and very often remain unknown to users. They are becoming capable of ever more complex tasks thanks to exponentially increasing computing power and to machine learning techniques (the automatic adjustment of an algorithm’s parameters so that it produces the expected results from the data provided). In this way, “deep learning” has met with many successes. It already makes it possible to recognize images and objects, to identify a face, to pilot an intelligent robot…
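The notion of machine learning mentioned above — automatically adjusting an algorithm’s parameters until it produces the expected results from the data provided — can be illustrated with a minimal sketch. This toy example (entirely hypothetical, not drawn from the CNIL report) learns a single parameter `w` so that the machine’s outputs match the expected outputs:

```python
# Minimal sketch of "automatic learning": repeatedly nudge a parameter
# so the algorithm's results match the expected results in the data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, expected output) pairs

w = 0.0              # the parameter the machine must learn
learning_rate = 0.05

for _ in range(200):                     # repeated adjustment rounds
    for x, expected in data:
        predicted = w * x                # the algorithm's current result
        error = predicted - expected     # how far off it is
        w -= learning_rate * error * x   # adjust w to reduce the error

print(round(w, 2))  # → 2.0: the machine has "learned" that output = 2 × input
```

Deep learning applies this same principle of iterative parameter adjustment, but to millions of parameters at once, which is what makes the resulting behaviour so hard for users to inspect.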
The links between neuroscience and AI are the basis of the European Human Brain Project, one of whose objectives is to simulate the behaviour of the human brain. AI can also be used to better understand neurological conditions such as compulsive disorders or depression. It is therefore a question of building so-called intelligent machines both to drive evolving systems and to contribute to our understanding of the human brain.
The Data Protection Act of 6 January 1978, amended regularly since, states that “data processing must be at the service of every citizen… must not infringe human identity, human rights, privacy or individual or public freedoms”. In particular, it defines the principles to be respected for the collection, processing and storage of personal data, and it specifies the powers and sanctioning capacities of the CNIL. The new European regulation on the protection of personal data (the GDPR, or RGPD in French), adopted on 27 April 2016, takes effect on 25 May 2018 in the Member States of the European Union to strengthen legal regulation. That is why a bill (No. 490) was introduced in the National Assembly on 13 December 2017.
For many, AI is a tremendous opportunity for the knowledge economy. Its contributions in the fields of medicine, robotics, learning and science in particular are already considerable. But how can AI be tamed so that it is truly at the service of all?
Questions this raises
Among the fears and risks most often expressed is the problem of jobs being cut in favour of robots. There is also distrust, even a sense of “loss of humanity”, in the face of the “black box” represented by the algorithms “that govern us” on the Internet, on social networks, in e‑commerce and even in our private lives. But these algorithms could also govern the doctor or the employer who would rely on them to make decisions. “Who controls what?” is the question often raised. One wonders, for instance, about the “biases” by which judgments are made when recruiting an employee through AI, with the attendant suspicion of discrimination.
In addition to the protection of personal data, the main questions of the CNIL report are:
- Faced with the power of machines, how can we apprehend the forms of dilution of responsibility in decisions?
- How far can we accept the “autonomy of the machines” that can decide for us?
- How to deal with the lack of transparency of algorithms as to the biases they use to process data and “decide the results”?
- How to apprehend this new class of objects that are humanoid robots likely to drive…?
- What status should so-called intelligent robots be given, and what are the legal consequences in terms of liability in the event of a problem?
Faced with the risks of a possible form of more or less invisible “dictatorship of digital technology”, the CNIL report argues for two founding principles for the ethics of AI:
- collective loyalty (for transparency and democratic use of algorithms, for example);
- vigilance/reflexivity with regard to the autonomy of machines and the biases they propagate or generate, so that man does not “lose control” over AI.
Anthropological and Ethical Aims
To raise the question of the status of robots is to point to a “disorder” introduced by AI into the relationship that man maintains with his “learning machines”. A European Parliament resolution encourages research on granting “electronic person” status to certain robots[2].
This legal expression would relativize the notion of person, which is rooted in the dignity of the human being[3]. A term such as “cognitive robot” would be preferable. Still, how close can the capabilities of machines come to those of humans, and then exceed them? Some extreme transhumanists await the moment when AI will surpass human intelligence, a sort of “singularity” from which a man-machine fusion will constitute a “cyborg” that will take over from Homo sapiens!
Without entering into such fantasies, celebrities like Stephen Hawking, Bill Gates and Elon Musk have repeatedly expressed concerns about AI[4]. They fear that learning machines will come to control us, because they will have statistical and combinatorial skills far superior to ours, as well as access to gigantic databases that man cannot process directly. It is essentially through this computing aspect that the power of digital machines is apprehended today. Yet only this form of intelligence is at stake, whereas man has many forms of intelligence (rational, emotional, artistic, relational, etc.). Of course, we understand that powerful computation allows the machine to find the combinations needed to beat the champions of the game of Go. For now, however, AI remains in the field of simulation, and there is a threshold between “simulating” an emotion and experiencing it. Emotion, with its communicative dimension, leads the man who experiences it to attribute value to things, on the basis of which he makes choices in daily life. Emotion expresses the wealth of vulnerable man. The learning machine is not there yet! Does AI humanize? It is indeed a “power” that must be subjected to discernment in the face of fragility and vulnerability as sources of humanization. Similarly, it is impossible to compare human consciousness (existential, psychological and moral) with a possible machine consciousness[5].
When it comes to AI, the idea that “thinking is calculating” is often prevalent. This leads to a lot of confusion. Man, endowed with an intelligence made for truth and wisdom, has another register of thought, much more varied, vast and subtle[6]. Our consciousness is situated in a body shaped by millions of years of evolution, with beautiful capacities of reason, creation, psychic life and spiritual depth, which go far beyond the most sophisticated combinatorics.
Some point out that, rather than denouncing the risks of the computational and combinatorial intelligence of machines, it is more urgent to make public the values that algorithm designers introduce into their software. The transparency of algorithms is a real issue of substance: does their design always aim to improve care and the service of human dignity?
BIBLIOGRAPHICAL REFERENCES TO CONTINUE THE WORK
AI, Promises and Perils, Cahier du Monde no. 22696, 31 December 2017 – 2 January 2018.
Frédéric Alexandre and Serge Tisseron, “Where are the real dangers of AI?”, in Les robots en quête d’humanité, Pour la Science, no. 87, April–June 2015, pp. 102–107.
Milad Doueihi and Frédéric Louzeau, Du matérialisme numérique, Hermann, 2017.
Serge Abiteboul and Gilles Dowek, Le temps des algorithmes, Le Pommier, 2017.
February 2, 2018
———————————————————-
[1] Comment permettre à l’homme de garder la main ?, CNIL report, published 15 December 2017.
[2] http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+TA+P8-TA-2017-0051+0+DOC+PDF+V0//FR
[3] The notion of “legal person” (personne morale) leaves no room for ambiguity. Moreover, it is not recognized by all legal traditions.
[4] See Alexandre Picard, “L’intelligence artificielle, star inquiétante du web summit à Lisbonne”, Le Monde économie, 10 November 2017.
[5] See for example Mehdi Khamassi and Raja Chatila, “La conscience d’une machine”, in Les robots en quête d’humanité, Pour la Science, no. 87, April–June 2015. Cf. Vatican II, constitution Gaudium et spes, n. 16; Declaration Dignitatis humanae, n. 1–3; John Paul II, encyclical Veritatis splendor (The Splendour of Truth), 6 August 1993, n. 31–34.
[6] Cf. Vatican II, constitution Gaudium et spes, n. 15; John Paul II, encyclical Fides et ratio (Faith and Reason), 14 September 1998, n. 16–33.