
<p><strong>AI Act: a front to counter mass surveillance inspired by the Chinese model</strong></p>

<p>In recent years, the use of artificial intelligence has raised growing concerns, particularly about potential abuses linked to social surveillance systems such as those already deployed in countries like China. In these contexts, technology is often used to monitor and control the daily lives of citizens, undermining their freedom and privacy. To counter these risks, Europe has embarked on a legislative journey with the AI Act, a piece of legislation that aims not only to regulate the use of artificial intelligence but also to curb invasive forms of surveillance and protect citizens' fundamental rights. The AI Act seeks to ensure the ethical use of artificial intelligence, promoting a model of "trusted AI" that respects the principles of transparency, fairness and non-discrimination.</p>

<p>The path toward the approval of this law began in 2018, when the European Commission set up an expert group on artificial intelligence; this group drafted ethical guidelines for AI in Europe, identifying the concept of "trusted AI" as the only acceptable model in the member states. The draft regulation was then presented by the European Commission on 21 April 2021, with the intention of creating a harmonised and proportionate regulatory framework for artificial intelligence within the European Union.</p>

<p>The AI Act is founded on the principle that artificial intelligence should be developed and deployed in a way that ensures safety, ethical standards, and respect for fundamental rights and European values. To achieve this, the regulation establishes a classification of AI systems based on the risks they pose to individuals' safety and rights, and sets out requirements and obligations for both providers and users of these systems. The classification distinguishes four levels of risk: unacceptable, high, limited, and minimal or no risk.</p>

<p>– <strong>Unacceptable risk</strong>: AI systems that violate the fundamental values of the European Union, such as respect for human dignity, democracy, and the rule of law. These systems are generally prohibited or, in specific cases such as real-time biometric surveillance for security purposes, subject to strict restrictions. Examples of prohibited systems include technologies that manipulate human behaviour to the point of undermining users' autonomy, and systems that enable social scoring by public authorities, as occurs in China.</p>

<p>– <strong>High risk</strong>: AI systems that can have a significant or systemic impact on individuals' fundamental rights or safety. These systems are subject to strict requirements and must meet rigorous obligations before being placed on the market or used. Examples include technologies used in recruitment and hiring, admission to education, the delivery of essential services such as healthcare, remote biometric surveillance, and applications in the judicial or law-enforcement sectors. Systems used to secure critical infrastructure also fall into this category.</p>

<p>– <strong>Limited risk</strong>: AI systems that can influence users' rights or choices, but to a lesser extent than high-risk systems. To ensure informed use, they are subject to transparency requirements that let users know when they are interacting with an AI system and understand its operation, features, and potential limitations. Examples include technologies used to generate or manipulate audiovisual content, such as deepfakes, and systems that provide personalised recommendations, for example chatbots.</p>

<p>– <strong>Minimal or no risk</strong>: AI systems that do not directly affect individuals' fundamental rights or safety, leaving users full freedom of choice and control. To encourage innovation and technological exploration, these systems are not subject to any regulatory obligations. Common examples include entertainment applications such as video games, or aesthetic tools such as photo filters, which have no significant implications for society or individual rights.</p>

<p>The AI Act aims to guarantee safety and ethics in the use of artificial intelligence while protecting the rights of individuals and organisations. Its main measures include:<br>– Requirements for high-risk AI systems to protect fundamental rights such as privacy, dignity, and non-discrimination.<br>– Human oversight to monitor and correct AI systems, preventing harm to individuals or the environment.<br>– Bans on AI systems that violate EU values, such as those that manipulate behaviour or exploit vulnerabilities.<br>– A governance framework involving all stakeholders, with measures for cooperation, monitoring, and sanctions.<br>– Promotion of a culture of responsible AI, encouraging transparency, accountability, and education to strengthen public trust.</p>

<p>The AI Act therefore regulates the areas where risks arise, focusing on the uses of artificial intelligence rather than on the technology itself. In defining rules that govern the impact of technology on people's lives, it is crucial to pursue, and carefully balance, at least four objectives: encouraging technological innovation, protecting citizens' rights, ensuring the feasibility of the imposed requirements, and making the law sustainable over time.</p>
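The four-tier classification described above can be pictured as a simple lookup from a use case to its risk tier and the regulatory consequence attached to that tier. The sketch below is purely illustrative: the example use cases, the `EXAMPLES` table, and the `obligations` helper are hypothetical constructions for this article, not part of the regulation, and real classification under the AI Act turns on detailed legal criteria.

```python
from enum import Enum

class RiskLevel(Enum):
    """The AI Act's four risk tiers, as summarised in this article."""
    UNACCEPTABLE = "prohibited (violates fundamental EU values)"
    HIGH = "strict obligations before market placement or use"
    LIMITED = "transparency requirements toward users"
    MINIMAL = "no regulatory obligations"

# Hypothetical mapping of example use cases (drawn from the article)
# to risk tiers; actual classification requires legal analysis.
EXAMPLES = {
    "social scoring by public authorities": RiskLevel.UNACCEPTABLE,
    "CV-screening for recruitment": RiskLevel.HIGH,
    "remote biometric surveillance": RiskLevel.HIGH,
    "customer-service chatbot": RiskLevel.LIMITED,
    "photo filter app": RiskLevel.MINIMAL,
}

def obligations(use_case: str) -> str:
    """Return the tier and its regulatory consequence for a known example."""
    tier = EXAMPLES[use_case]
    return f"{tier.name}: {tier.value}"

print(obligations("customer-service chatbot"))
# LIMITED: transparency requirements toward users
```

The point the sketch makes is the one the regulation itself makes: obligations attach to the use of the system, not to the underlying technology.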
<p>The last of these objectives, sustainability over time, is known as "future-proofness": the need to craft rules that remain valid and applicable in a continuously evolving technological context.</p>

<p>Sources:<br>– <a href="https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence">https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence</a><br>– <a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai">https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai</a></p>