
EU Launches a First-of-Its-Kind AI Law

The Artificial Intelligence Act aims to promote safe innovation and protect people's fundamental rights. It does so through numerous restrictions on AI, ranging from rules for general-purpose models to limits on use by law enforcement. However, executives at several companies that use AI have voiced their disapproval of the law, arguing that it will hinder innovation.


June 2nd, 2024

By Mariana Prieto

Editor: Aydin Levy

NEW YORK–– The European Parliament has passed the Artificial Intelligence Act. This landmark law aims to prevent AI from violating citizens' rights: it bans the untargeted scraping of facial images used to train AI, as well as emotion recognition in schools and workplaces. The law also bans predictive policing, algorithm-based social scoring, AI that exploits people's vulnerabilities to manipulate them, and biometric categorization systems, which sort people into categories (eye color, hair, age, etc.). Law enforcement is likewise restricted from using biometric identification systems, save for “exhaustively listed and narrowly defined situations.” Even then, such systems may only be used with specific safeguards, for purposes such as preventing a terrorist attack or searching for a missing person.

AI systems will have to follow specific transparency requirements. General-purpose AI systems must adhere to EU copyright law, while the most powerful models must comply with further requirements: reporting incidents, undergoing model evaluations, and assessing and mitigating systemic risks. Any AI-generated videos, images, or audio will have to be clearly labeled so consumers can identify them.

People will also be able to submit complaints about AI and receive explanations of decisions made by high-risk AI systems that affect their rights. The European Parliament says, “Examples of high-risk AI uses include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration, and border management, justice and democratic processes (e.g. influencing elections).”

The law has been met with criticism from executives at Amazon and Meta, the latter of which owns Instagram, Facebook, WhatsApp, and Threads. They claim that “some of the fears about artificial intelligence are overblown and that the European Union’s sweeping new AI rules risk holding back innovation.” Yann LeCun, Meta’s AI chief, told CNN that “there are clauses in the EU AI act and various other places that do regulate research and development. I don’t think it’s a good idea.”

Similar concerns were raised by Werner Vogels, Amazon's chief technology officer: “When thinking about risks, regulators should consider the application of the new technology to, for instance, health care and financial services differently from its use to summarize meetings.”

While these criticisms may hold some truth, the law stems from concerns that AI could harm democracy, the rule of law, health, safety, the environment, and education and vocational training. There are also rising fears that AI's intelligence will one day exceed humanity’s. Because such systems are believed capable of causing significant harm, these prohibitions are being put in place.

Aydin Levy, Deputy Director of Media Relations at Teens for Press Freedom, says, “It’s important that technology is used appropriately and effectively. Misuse of technology is a driving factor in spreading misinformation and privacy violations; the implications can harm society. While the government should strategically place limitations so societal advancement isn't hindered, safety must be the number one priority.”


