Ethics and artificial intelligence: the principles of Asilomar and Human-centered AI

The Dartmouth conference (1956) went down in history as the meeting that marked the birth of artificial intelligence as a field of research. It was convened to open a new research space «on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it» (Dartmouth Proposal, p. 1).

Since then, many steps forward have been taken and, at the same time, the need for ethical guidance has grown stronger. A route was opened in 2017 with the 23 Asilomar AI Principles, a text divided into three areas: the first on “research issues”, the second on “ethics and values”, the third and last on “longer-term issues”. These principles were developed precisely to guide research towards a “beneficial and safe” development of AI: they concern, in fact, topics such as “research transparency”, “responsibility” and “human control over AI systems”. Principle 10 holds that autonomous AI systems «should be designed so that their goals and behaviors can be assured to align with human values throughout their operation», and Principle 23 that «superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization».

This means that the “AI problem” must be addressed – as I said – in a multidisciplinary way, the only approach capable of guaranteeing a solid basis for safe development or – as Fei-Fei Li says – for human-centered AI:

«At Stanford HAI», says Fei-Fei Li, co-director of the Institute, «our vision is led by our commitment to studying, guiding and developing human-centered AI technologies and applications».

It is a real challenge, one that calls into question other important issues, such as the objectivity of values and the nature of man, which are fundamental prerequisites for research.

Giovanni Covino
