The encounter between AI and the human being.

On the way to an ethical use of AI

Author
Andrea Allegra
Visual Curator
Ana Calderón
Translator
Viviana Grasso

Humankind has been using the tools of artificial intelligence for more than sixty years now: from the first trained machines to today, progress has been enormous and sometimes unexpected. By now, AI is an integral part of everyone’s life, both helping and shaping our existence. For some time, there has been talk of a potential loss of control in which technology prevails over the human element. An ethical use of AI therefore becomes essential to ensure that this does not happen. But how can an ethical intelligence be created?

How can an existential threat be avoided?

There are several aspects that experts are considering. According to the Chinese-born scientist Fei-Fei Li, one of the most prominent figures in the field of artificial intelligence and academically trained in the United States, avoiding future problems requires human-centered research, in which technology is used to enhance human capabilities and improve quality of life rather than replace or diminish the role of people.

Besides being one of the creators of ImageNet (a huge image database), she is also a co-founder of AI4ALL, a non-profit organisation devoted to diversity and inclusion that mainly recruits women and people of colour. At Stanford University, she co-founded the Institute for Human-Centered AI, which promotes research and development of artificial intelligence that respects human dignity and rights, with the aim of preventing AI from creating social inequalities.

Also known as the godmother of AI, Fei-Fei Li promotes the ethical use of AI with her innovative projects. Photo: David Paul Morris/Bloomberg

Sam Altman and OpenAI

Sam Altman is the CEO of OpenAI, the company that became world-famous with the release, in November 2022, of the world’s most used chatbot: ChatGPT. For his part, Altman believes that humankind is capable of developing AI in a way that suits its needs while avoiding being overpowered by the technology. His company therefore also aims to avoid the dominance of the machine over the human being: a fundamental ethical principle. Some of the highlights of OpenAI’s governance: “OpenAI’s mission is to ensure that artificial general intelligence (AGI), by which we mean highly autonomous systems that outperform humans at most economically valuable work, benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”

Sam Altman, co-founder and CEO of OpenAI, during a conference. Photo: Chona Kasinger/Bloomberg via Getty Images/Fox Business

Anthropic and the ethical use of AI

Anthropic itself was born as an offshoot of OpenAI. In 2021, the Italian-American siblings Dario and Daniela Amodei left the company to create their own Public Benefit Corporation, Anthropic. Their intention is to connect US technology entrepreneurship with ethical scientific research. This was discussed by Pope Francis and Dario Amodei in an audience at the Vatican in March 2023. Moreover, it is interesting to read under Governance on the company website: “Anthropic is a Public Benefit Corporation, whose purpose is the responsible development and maintenance of advanced AI for the long-term benefit of humanity.”

Balance measuring ethical use of AI and human reasoning. Photo: Tabata Tech

Helpful, honest and harmless

Helpful, honest and harmless: this is how AI should be, balancing technological innovation with respect for human rights. Helpful, honest and harmless thus become the pillars that developers should follow to promote an ethical use of AI.

This process was explained in more detail by Daniela Amodei during a Stanford eCorner podcast: “One of our company’s highest goals is to build artificial intelligence systems that are generally safe, reliable and, above all, ethical. What we have always wanted to do is build generative artificial intelligence systems and products that people can feel comfortable using. We therefore believed that, in order to put reliable tools on the market, we needed to incorporate these technical safety workflows into model training right from the start. At the same time, we had to check that there were no harmful, negative or inherently biased elements.”

Anthropic president Daniela Amodei during the Cerebral Valley AI conference. Photo: Kitrum

An additional key aspect of the ethical use of AI is constant interaction with human beings: feedback is very important. It must also be considered that we cannot create a charter of rights that is the same all over the world: certain ethical and moral values that are considered fair in Canada or the United States, for example, might not be considered fair in other parts of the world. The same goes for cultural and religious values, and laws and their application may also vary from country to country.

In fact, companies dealing with AI should not be their own arbiters, but should constantly engage with different realities. Non-profit associations, for example, are very useful, as they can provide additional information that companies may have missed; this is precisely the way to understand the global situation in its various respects.

Ethical use of AI – crucial aspects

Consequently, it is crucial to recognise that AI and its use will profoundly disrupt the existence of individuals and societies in their most essential aspects: from privacy to work, from fairness to medicine. The key questions of safety will therefore have to be framed in ethical terms, so that the fundamental needs of humankind as a whole are respected. This impact must not harm the world of work; instead, it should make work ever more equitable, eliminating inequalities and, above all, making it more accessible and usable.

To conclude with the words of the scientist Fei-Fei Li herself: she reminds us that these are the most salient and important moments. We should not dwell on worrying about what will not happen; rather, we should focus on what must happen: an ethical use of AI. We have time, but we must act now.

You may also be interested in Artificial Intelligence for a sustainable world