
05.- Intelligence artificielle / Artificial intelligence — CHURCH OF FRANCE / États généraux de la bioéthique — Which world do we want for tomorrow? The brave new world…


Scientific and legal data

The latest report[1] on "Artificial Intelligence" (AI) by the Commission Nationale de l'Informatique et des Libertés (CNIL), published in December 2017, underlines that AI is the "great myth of our time" and that its definition remains very imprecise. When we talk about AI, we think of targeted advertisements on the Internet, connected objects and the Internet of Things, the processing of massive and heterogeneous survey data, digital machines and humanoid robots capable of learning and evolving, autonomous vehicles, brain-machine interfaces and algorithms.

Algorithms are in a way the "skeletons of our computer tools". They are systems of instructions that allow a digital machine to produce results from the data provided. They are at work when we use an Internet search engine, when a medical diagnosis is proposed on the basis of statistical data, but also when a car route is chosen or when information on social networks is selected according to the tastes of our friends' networks. Algorithms belong to their designers and very often remain unknown to users. They become capable of more and more complex tasks thanks to exponentially increasing computing power and to machine-learning techniques (the automatic adjustment of an algorithm's parameters so that it produces the expected results from the data provided). In this way, "deep learning" has met with many successes: it already makes it possible to recognize images and objects, to identify a face, to pilot an intelligent robot…
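The "automatic adjustment of parameters" described above can be illustrated with a deliberately tiny sketch (an illustrative fragment, not drawn from the CNIL report): a model with two parameters is repeatedly corrected until its outputs match the expected results in the data provided.

```python
# A minimal illustration of machine learning as "automatic adjustment
# of parameters": fit a model y = w*x + b to data sampled from y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0   # initial parameters, chosen arbitrarily
lr = 0.02         # learning rate: the size of each small correction

for _ in range(2000):          # many passes of small corrections
    for x, y in data:
        error = (w * x + b) - y    # gap between output and expected result
        w -= lr * error * x        # nudge each parameter so as to
        b -= lr * error            # reduce that gap next time

print(round(w, 2), round(b, 2))    # the parameters approach 2.0 and 1.0
```

Deep learning applies the same principle, but to models with millions of parameters rather than two, which is why the resulting behaviour is so hard to inspect.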

The links between neuroscience and AI are the basis of the European Human Brain Project, one of whose objectives is to simulate the behaviour of the human brain. AI can also be used to better understand neuronal diseases such as compulsive disorders or depression. It is therefore a question of building so-called intelligent machines both to drive evolving systems and to contribute to the understanding of the human brain.

The Data Protection Act of 6 January 1978, amended regularly since, states that "data processing must be at the service of every citizen… must not infringe human identity, human rights, privacy or individual or public freedoms". In particular, it defines the principles to be respected for the collection, processing and storage of personal data, and specifies the powers and sanctioning capacities of the CNIL. The new European General Data Protection Regulation (GDPR), adopted on 27 April 2016, takes effect on 25 May 2018 in the Member States of the European Union to strengthen legal regulation. That is why a bill (No. 490) was introduced in the National Assembly on 13 December 2017.

For many, AI is a tremendous opportunity in terms of the knowledge economy. Its contributions in the fields of medicine, robotics, learning and science in particular are already considerable. But how can AI be tamed so that it is truly at the service of all?

Questions this raises

Among the fears and risks most often expressed is the problem of job losses to robots. There is also distrust, even a sense of "loss of humanity", in the face of the "black box" of the algorithms "that govern us" on the Internet, on social networks, in e-commerce and even in our private lives. These algorithms could also govern the doctor or the employer who relies on them to make decisions. "Who controls what?" is the question often raised. One wonders, for instance, what "biases" shape the judgments made when an employee is recruited through AI, with the attendant suspicion of discrimination.

In addition to the protection of personal data, the main questions raised by the CNIL report are:

  • Faced with the power of machines, how can we apprehend the forms of dilution of responsibility in decisions?
  • How far can we accept the "autonomy of machines" that can decide for us?
  • How to deal with the lack of transparency of algorithms as to the biases they use to process data and "decide the results"?
  • How to apprehend this new class of objects, the humanoid robots?
  • What status should so-called intelligent robots be given, and what are the legal consequences in terms of liability in the event of a problem?

Faced with the risks of a possible, more or less invisible "dictatorship of digital technology", the CNIL report argues for two founding principles for the ethics of AI:

  • collective loyalty (for example, transparency and the democratic use of algorithms);
  • vigilance/reflexivity with regard to the autonomy of machines and the biases they propagate or generate, so that man does not "lose control" over AI.

Anthropological and Ethical Aims

To ask the question of the status of robots is to signal a "disorder" introduced by AI into the relationship that man maintains with his "learning machines". A European Parliament resolution encourages research on granting "electronic person" status to certain robots[2].

This legal expression would relativize the notion of person, which is rooted in the dignity of the human being[3]. The term "cognitive robot", for example, would be preferable. How close, then, can the capabilities of machines come to those of humans, and might they then exceed them? Some extreme transhumanists await the moment when AI will surpass human intelligence, a sort of "singularity" from which a man-machine fusion will constitute a "cyborg" that will take over from Homo sapiens!

Without entering into such fantasies, celebrities like Stephen Hawking, Bill Gates and Elon Musk have repeatedly expressed concerns about AI[4]. They fear that learning machines will control us, because they will have statistical and combinatorial skills far superior to ours, as well as access to gigantic databases that man cannot process directly. It is essentially on the computing side that the power of digital machines is apprehended today. Only this form of intelligence is at stake, whereas man has many forms of intelligence (rational, emotional, artistic, relational, etc.). Of course, we understand that powerful calculators allow the machine to find the combinations needed to beat the champions of the game of Go. But AI remains, for now, in the field of simulation, and there is a threshold between "simulating" an emotion and experiencing it. Emotion, with its communicative dimension, leads the man who experiences it to attribute a value to things, from which he makes his choices in daily life. Emotion expresses the wealth of vulnerable man. The learning machine is not there yet! Does AI humanize? It is indeed a "power" that must be subjected to discernment in the face of fragility and vulnerability as sources of humanization. Similarly, it is impossible to compare human consciousness (existential, psychological and moral) with a possible machine consciousness[5].

When it comes to AI, the idea that "thinking is calculating" is often prevalent. This leads to a lot of confusion. Man, endowed with an intelligence made for truth and wisdom, has another register of thought, far more varied, vast and subtle[6]. Our consciousness is situated in a body shaped by millions of years of evolution, with beautiful capacities of reason, creation, psychic life and spiritual depth, which go far beyond the most sophisticated combinatorics.

Some point out that, rather than stressing the risks of the computational and combinatorial intelligence of machines, it is more urgent to make public the values that algorithm designers introduce into their software. The transparency of algorithms is a real issue of substance. Does their design always aim to improve care and the service of human dignity?

BIBLIOGRAPHICAL REFERENCES TO CONTINUE THE WORK

"AI, Promises and Peril", Cahier du Monde no. 22696, 31 December 2017 – 2 January 2018.

Frédéric Alexandre and Serge Tisseron, "Where are the real dangers of AI?", in Les robots en quête d'humanité, Pour la Science, no. 87, April–June 2015, pp. 102–107.

Milad Doueihi and Frédéric Louzeau, Du matérialisme numérique, Hermann, 2017.

Serge Abiteboul and Gilles Dowek, Le temps des algorithmes, Le Pommier, 2017.

Feb­ru­ary 2, 2018

———————————————————-

[1] Comment permettre à l'homme de garder la main ? ("How can humans keep the upper hand?"), CNIL report, published on 15 December 2017.

[2] http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+TA+P8-TA-2017–0051+0+DOC+PDF+V0//FR

[3] The notion of "personne morale" (legal person) leaves no ambiguity. Moreover, it is not recognized by all legal traditions.

[4] See Alexandre Picard, "L'intelligence artificielle, star inquiétante du web summit à Lisbonne", Le Monde économie, 10 November 2017.

[5] See for example Mehdi Khamassi and Raja Chatila, "La conscience d'une machine", in Les robots en quête d'humanité, Pour la Science, no. 87, April–June 2015. Cf. Vatican II, constitution Gaudium et spes, n. 16; Declaration Dignitatis humanae, n. 1–3; John Paul II, encyclical Veritatis splendor (The Splendor of Truth), 6 August 1993, n. 31–34.

[6] Cf. Vatican II, constitution Gaudium et spes, n. 15; John Paul II, encyclical Fides et ratio (Faith and Reason), 14 September 1998, n. 16–33.
