Author: Giovanna Spano, University of Florence

The significant benefits of applying technology to health, the economy, the military, employment and other areas of life have made artificial intelligence an integral part of everyday life. For example, ChatGPT can generate verbal and visual content from written prompts, and chatbots allow us to maintain digital communication through conversation. AI is becoming a central tool for facilitating many social interactions. However, easier technological conduct does not necessarily herald fairer practice.

Countries use AI technologies for security purposes, for example by developing weapons systems, detection tools, and cyber warfare mechanisms. Civil society has also transferred much of its activity to cyberspace, understanding that it can engage with activists and volunteers more efficiently and quickly than ever before. To give one example, identifying acts of extremism on social networks is part of the effort to detect radicalisation processes in real time. But this rising reliance on artificial intelligence technologies also raises questions about the preservation of our social rights as human beings. These questions centre on the tension between the limitations that technology may impose and the freedoms and individual choice of each user.

In a world where AI solutions are becoming a central part of our private and public lives, one contemporary debate concerns the call for regulatory action on the tension between countering radicalisation through digital preventive solutions and the desire to preserve liberties and human rights. Human rights and security are two areas that have frequently clashed since the dawn of modern democracy, and they may continue to collide as governments and international institutions attempt to strike a balance between the two.

Violations of human rights can take the form of security threats being wrongly attributed to individuals during the routine use of technology, for example when screening job candidates before interview selection. We have recently seen an even greater need to examine the harmful consequences inherent in developing advanced technologies. Discrimination and bias may lead to greater inequalities and social exclusion: frequent use of artificial intelligence over the past decade has revealed that the same algorithms that help us filter, categorise and create may be biased in favour of centralist worldviews, or even discriminate on an automated basis. How?

The critical points at which discrimination or built-in bias in artificial intelligence can be detected are the planning stages, which are carried out by humans. System planning sometimes copies social norms into technological development, whether consciously or not, even when those norms are not necessarily fair.
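
To make the mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data and hypothetical features (a postcode standing in for group membership): a candidate-screening rule derived from historically biased decisions ends up automating the same discrimination, without ever being told about the groups involved.

```python
# A deliberately simplified, synthetic illustration (all feature names
# are hypothetical): a screening rule "learned" from historically biased
# hiring decisions reproduces that bias, even though the rule never
# looks at group membership directly.
import random

random.seed(0)

def historical_decision(qualified: bool, group: str) -> bool:
    """Past human decisions: qualified candidates from group 'B'
    were rejected 60% of the time, group 'A' only 5%."""
    if not qualified:
        return False
    return random.random() > (0.60 if group == "B" else 0.05)

# Synthetic history in which postcode is a proxy for group membership.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    postcode = "north" if group == "A" else "south"  # correlated proxy
    qualified = random.random() < 0.5
    history.append((postcode, qualified, historical_decision(qualified, group)))

# "Planning stage": the designers decide to screen in any
# (postcode, qualified) bucket whose historical hire rate exceeds 50%,
# silently copying the old bias into the new system.
buckets = {}
for postcode, qualified, hired in history:
    count, hires = buckets.get((postcode, qualified), (0, 0))
    buckets[(postcode, qualified)] = (count + 1, hires + hired)

for key, (count, hires) in sorted(buckets.items()):
    decision = "screen in" if hires / count > 0.5 else "screen out"
    print(key, f"hire rate {hires / count:.2f} ->", decision)
# Qualified 'south' candidates are screened out automatically: not
# because of their qualifications, but because of past decisions.
```

The point is not the code itself but the design choice it sketches: by treating past decisions as the standard to learn from, the planners copy an unfair social norm into an apparently objective automated system.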

Thus, when the assisting algorithm carries a built-in bias, an attempt to prevent radicalisation on social networks can sometimes produce the opposite result by misidentifying the perpetrators of extremist actions. Incitement, fake news, and impersonation of extremist behaviour are three ‘hot trends’ that characterise radicalisation on the internet. Their reliance on the ability to spread information widely, anonymously and unmediated challenges state and non-state actors, who seek to preserve the liberties of all users on the one hand, yet give insufficient attention to the legal infrastructure required for this purpose on the other. It should therefore be recognised that deploying advanced technology requires the parallel development of universal standards for the fair planning and design of artificial intelligence systems towards all their users.

Photo by Gertrūda Valasevičiūtė on Unsplash