PREVENTION OF CYBERCRIME IN THE AGE OF ARTIFICIAL INTELLIGENCE (AI) WITHIN THE EUROPEAN UNION
Abstract
The significant technological development experienced between the late 20th and early 21st centuries has raised a series of fundamental questions regarding the protection of civil, political, and social rights. The exploitation of new technologies by governments and international organizations, as well as by private companies and individuals, opens up enormous possibilities for development on the one hand, while posing serious risks to the aforementioned rights on the other. In recent decades, cybercrime has represented a crucial challenge for global actors and continues to evolve owing to the availability of increasingly advanced technologies. What mechanisms do countries around the world use to protect themselves and their citizens? Is full legal protection even possible? These questions arise constantly and are gaining ever greater significance.
The European Union, especially over the past ten years, has taken legal measures to create a safe space for its institutions and member states. Among the most notable of these technologies is artificial intelligence (AI), which has captured global attention in recent years, especially after the public release of generative AI systems. Since AI technologies rely mainly on machine learning systems that process large amounts of data, European regulation of these tools, pending the full implementation of the AI Act, depends primarily on data protection law. In this regard, the adoption of the General Data Protection Regulation (GDPR) in 2016 represented a milestone for data protection and the right to privacy within the European Union; on this subject, reference is also made to "Convention 108+" of 2018.
The second part of this contribution focuses on the definition and taxonomy of cybercrimes. These crimes, already facilitated by the development of new technologies, are expected to accelerate significantly with the spread of AI. For this reason, among others, the approval of the risk-based Artificial Intelligence Act has been one of the EU's top priorities: it aims to promote trust in AI technologies while ensuring that they do not jeopardize the fundamental rights guaranteed by EU treaties and acts.