Opinion

“AI Act: a front to counter the mass surveillance inspired by the Chinese model”

In recent years, the use of artificial intelligence has raised increasing concerns, particularly regarding potential abuses related to social surveillance systems, such as those already implemented in countries like China. In these contexts, technology is often used to monitor and control the daily lives of citizens, undermining their freedom and privacy. To counter these risks, Europe has embarked on a legislative journey with the AI Act, a piece of legislation that aims not only to regulate the use of artificial intelligence but also to counter invasive forms of surveillance and to protect citizens’ fundamental rights. The AI Act aims to ensure the ethical use of artificial intelligence, promoting a model of ‘trustworthy AI’ that respects the principles of transparency, fairness and non-discrimination.

The path towards the approval of this law began in 2018, when the European Commission set up an expert group on artificial intelligence; this group drafted ethical guidelines for AI in Europe, identifying the concept of ‘trustworthy AI’ as the only acceptable model in member countries.
Subsequently, the proposed regulation was presented by the European Commission on 21 April 2021 with the intention of creating a harmonised and proportionate regulatory framework for artificial intelligence within the European Union.

The AI Act is founded on the principle that artificial intelligence should be developed and deployed in a way that ensures safety, ethical standards, and respect for fundamental rights and European values. To achieve this, the regulation establishes a classification of AI systems based on the level of risk they pose to individuals’ safety and rights, and it outlines a corresponding set of requirements and obligations for both providers and users of these systems. The classification distinguishes four levels of risk: unacceptable, high, limited, and minimal or no risk.

– Unacceptable risk: This includes AI systems that violate the fundamental values of the European Union, such as respect for human dignity, democracy, and the rule of law. These systems are generally prohibited, or, in specific cases, such as real-time biometric surveillance for security purposes, they are subject to strict restrictions. Examples of prohibited systems include technologies that manipulate human behaviour to the point of undermining users’ autonomy, or systems that enable social scoring by public authorities, as occurs in China.
– High risk: This includes AI systems that can have a significant or systemic impact on individuals’ fundamental rights or safety. As a result, these systems are subject to strict requirements and must meet rigorous obligations before being placed on the market or used. Examples include technologies used in recruitment and hiring, admission to education, delivery of essential social services such as healthcare, remote biometric surveillance, and applications in the judicial or law enforcement sectors. Systems used to ensure the security of critical infrastructure are also included in this category.
– Limited risk: This includes AI systems that can influence users’ rights or choices, but to a lesser extent than high-risk systems. To ensure informed use, these systems are subject to transparency requirements that allow users to know when they are interacting with an AI system and to understand its operation, features, and potential limitations. Examples include technologies used to generate or manipulate audiovisual content, such as deepfakes, or to provide personalised recommendations, for example through chatbots.
– Minimal or no risk: This includes AI systems that do not directly affect individuals’ fundamental rights or safety, leaving users full freedom of choice and control. To encourage innovation and technological exploration, these systems are not subject to any regulatory obligations. Common examples include applications for entertainment purposes, such as video games, or those with aesthetic goals, such as photo filters, which have no significant implications for society or individual rights.

The AI Act aims to ensure safety and ethics in the use of artificial intelligence, protecting the rights of individuals and organizations. The main measures include:
– Requirements for high-risk AI systems to protect fundamental rights such as privacy, dignity, and non-discrimination.
– Human oversight to monitor and correct AI systems, preventing harm to individuals or the environment.
– Bans on AI systems that violate EU values, such as those that manipulate behaviour or exploit vulnerabilities.
– Establishment of a governance framework involving all stakeholders, with measures for cooperation, monitoring, and sanctions.
– Promotion of a culture of responsible AI, encouraging transparency, accountability, and education to strengthen public trust.

The AI Act therefore aims to regulate areas where risks arise, focusing on the uses of artificial intelligence rather than on the technology itself. In defining regulations that govern the impact of technology on people’s lives, it is crucial to pursue at least four objectives, balancing them carefully: encouraging technological innovation, ensuring the protection of citizens’ rights, ensuring the feasibility of the imposed requirements, and making the law sustainable over time. This last aspect, known as “future-proofing,” involves the need to create regulations that remain valid and applicable even in a continuously evolving technological context.

SOURCES:
– https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
– https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
