How is the Italian Legal System Dealing with Artificial Intelligence? Let’s Explore

By Elisabetta Ceriello, Visiting Research Student, Laboratoire de cyberjustice

The evolution and development of artificial intelligence pose challenges in various fields: scientific, sociological, commercial, economic, and, last but not least, legal. Artificial intelligence has always attracted the interest of jurists, who are accustomed to describing the phenomenon and examining the logical-legal implications of new technological applications.


As is now well known, the new forms of artificial intelligence can give rise to a plurality of harmful situations completely different from those traditionally analyzed by jurisprudence. From a legal perspective, the (now cross-border) need to regulate the effects of the actions of artificial intelligence becomes urgent, particularly regarding the identification of the imputability of damages – patrimonial and non-patrimonial – for errors and wrongful acts. In the face of such a situation, the question remains open: who, and in what capacity, will be liable for the autonomous conduct put in place by artificial intelligence devices? If a self-driving car hits a pedestrian, who will be liable? The car manufacturer, the designer of the algorithm, or the seller?


In European attempts to regulate artificial intelligence, two qualifying aspects stand out:


  • The algorithm – which jurists approach with apprehension – as a potentially uncontrollable and inherently dangerous tool;
  • The need for a regulatory framework that fits into an artificial cognitive path, establishing principles and standards.


The issues addressed by the Italian judges aim at the “judicial controllability” of algorithmic decisions whenever they are capable of effectively affecting the rights and legal situations of individuals. Recent Italian jurisprudence therefore does not focus on the abstract concept of the algorithm, but rather on its concrete use, on its deployment within a public or private organization, and on the possibility of activating ordinary jurisdictional guarantees alongside the specific guarantees already provided for by law.[1]


It is helpful to mention here the path begun by the Holy See with the workshop “Rome Call for AI Ethics”, held at the Pontifical Academy for Life on February 28, 2020, with the participation of Brad Smith, president of Microsoft; John Kelly III, executive vice president of IBM; David Sassoli, president of the European Parliament; and Qu Dongyu, director-general of the FAO.[2]


Signing of the Rome Call for AI Ethics by the Abrahamic religions – Vatican City, January 10, 2023. Credit: Vatican Media.[3]


On that occasion, a “Call for Ethics” was signed to engage companies in a process of examining the effects of artificial intelligence-related technologies, the risks they entail, possible regulations, and educational implications. It is a path aimed at giving concrete application to the concept of “algorethics”, i.e., “giving ethics to algorithms”. More specifically, Italian jurists highlight the need for an intervention that goes beyond “mere education in the correct use of new technologies”, requiring “a broader educational action”.

As announced by Italian Prime Minister Giorgia Meloni on April 26, 2024, the Holy See’s participation is being actively pursued, and Pope Francis will take part in the G7 session dedicated to artificial intelligence.[4] The Pope’s participation will undoubtedly make a significant contribution, because regulating artificial intelligence requires not only legal but also social and ethical input.

Continuing our study of the new developments by the Italian legislature, it is pertinent to mention that the Italian Government is the first in Europe to work on a bill concerning artificial intelligence, as emphasized by Mr. Butti, Undersecretary of State to the Presidency of the Council of Ministers. The government has drafted a bill that implements the principles outlined in the European Regulation, the AI Act, approved by the European Parliament on March 13, 2024. The bill includes rules suited to a broad (rather than analogical) evolutionary interpretation.

This ambitious law aims to direct and shape the future of AI in Europe, emphasizing a proactive and precautionary vision of its development and use. It operates in five areas: national strategy, national authorities, promotional actions, copyright protection, and criminal sanctions. The Government will also promote the training and information of citizens on artificial intelligence, from school through university.

On the criminal law front, Italian Justice Minister Carlo Nordio asserts that the dissemination, without consent, of videos or images created or altered with artificial intelligence will be punishable by imprisonment of one to five years, as modern artificial intelligence is capable of producing devastating effects, “creating a reality that is no longer virtual but real”, thus necessitating the intervention of criminal law.[5]

As concerns the doctrinal aspect (which is much more dynamic and less constrained than the political-legislative one), there are many proposals drawn up by the Group of Experts on artificial intelligence of the Ministry of Economic Development (MISE), contributions by eminent jurists, and the results of conferences, seminars, research projects, and meetings, all of which allow jurists to deepen and refine the discussion of the relationship between AI and liability. In light of what has been observed so far, a few remarks seem appropriate:


  • First, it should be noted that the theory of full or partial legal subjectivity, devised to resolve the liability issues arising from the acts of AI “through a prudent analogy of the liability rules”,[6] has been definitively abandoned. The theory of partial legal subjectivity had already been rejected on the basis of a correct interpretation of Article 1228 of the Italian Civil Code, on contractual liability, and Article 2049 of the Italian Civil Code, on delictual liability.
  • Another reflection seems necessary: at the moment, there are few judgments of legitimacy and merit – and of limited interest – that have dealt with conflicts arising from the application of artificial intelligence, while somewhat greater interest is generated by judicial cases concerning computerized or mathematical instruments based on algorithms. More specifically, the cases dealt with so far concern the verification of the correct setting or structure of the mathematical, computerized, and probabilistic data, but not the function of the correct use of the algorithm, i.e., that of guaranteeing the attainment of the user’s interest (for example, the damage produced by an automated taxi is today discussed with an analysis limited to what is provided for in the setting of the algorithm, not also the unforeseen events the taxi may encounter along the way: strikes, potholes, etc.). The same applies to administrative jurisprudence.

In conclusion, a note on criminal liability: Article 42 of the Italian Penal Code states that “no one can be punished for an action or omission provided for by law as a crime, if he has not committed it with conscience and will”.[7]


In other words, for a crime to be configured, the material fact – the objective element – is not enough; the concurrence of will is also necessary.


While we await the entry into operation of the Italian bill and the AI Act, the questions under investigation by experts remain particularly complex. The main question is whether, in the face of emerging scenarios, traditional regulations and existing special laws can resolve the issues under consideration. In other words, do such current and complex problems necessarily call for regulatory innovation, or can they instead be mediated, even by way of interpretation, through the existing rules? In particular, whether, and to what extent, liability arising from “intelligent entities” is extraneous to the Italian legal system, and whether the latter is capable of regulating the phenomenon by resorting to a literal, analogical, or even evolutionary interpretation of the existing rules.


[1] CORASANITI G., Tecnologie intelligenti, rischi e regole, Mondadori Università, January 2023, p. 12.

[2] https://www.romecall.org/wp-content/uploads/2024/02/RomeCall_report-web.pdf

[3] https://www.vatican.va/content/francesco/it/speeches/2020/february/documents/papa-francesco_20200228_accademia-perlavita.html

[4] https://www.quotidiano.net/cronaca/g7-papa-francesco-lavori-intelligenza-artificiale-n7zmipci

[5] https://www.ildubbio.news/giustizia/intelligenza-artificiale-nordio-reclusione-da-1-a-5-anni-per-chi-crea-danni-con-lia-w8en9q9b

[6] PROCIDA MIRABELLI DI LAURO A., Intelligenze Artificiali e Responsabilità Civile, in AA.VV., Diritto delle Obbligazioni, Napoli, 2020, p. 514 ss.; XV Convegno Nazionale degli Studiosi del Diritto Civile, “Rapporti civilistici e intelligenze artificiali: attività e responsabilità”.

[7] Art. 42, Italian Penal Code.

This content was updated on May 27, 2024 at 6:00 p.m.