Recent times have seen a disruptive advance in the use of automated decision-making processes operating on the basis of algorithms, better known as artificial intelligence (AI). The positive effects of such technology, however, risk being neutralized where the opacity of these processes does not allow for the easy identification of the party to whom the risk of harm or damage should be allocated. The problem arises in particular with regard to so-called strong AI operating by means of cognitive-predictive algorithms, especially in machine-to-machine relations, where human intervention is totally excluded. The prospect of artificial intelligence as a vehicle for achieving early human evolution is gradually being replaced by the fear of losing control of the technology itself, the anguish of moving too fast and laying the foundations for a dystopian future in which there is no room for human values. Hence the proliferation of precautionary and preventive rules, especially of European origin.
Responsabilità civile e Intelligenza Artificiale / Mollicone, MARTA MARIOLINA. - (2024 Mar 25).
Responsabilità civile e Intelligenza Artificiale
MOLLICONE, MARTA MARIOLINA
2024-03-25
Abstract
| File | Type | License | Size | Format |
|---|---|---|---|---|
| Tesi_Mollicone.pdf (embargo until 30/09/2025) | Doctoral thesis | All rights reserved | 1.87 MB | Adobe PDF |
Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.