European Parliament
On March 13, the European Parliament, the EU's law-making body, overwhelmingly approved the Artificial Intelligence Act, making international headlines. The regulation bans certain AI applications that threaten citizens’ rights. These include “biometric categorization systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases” and “emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behavior or exploits people’s vulnerabilities.”
The use of remote biometric identification (RBI) systems by law enforcement will be prohibited “in principle,” except in narrowly defined situations with strict safeguards.
In addition, general-purpose AI (GPAI) systems must meet certain transparency requirements, including compliance with EU copyright law and the publication of detailed summaries of the content used for training.
The regulation is expected to be finally adopted before the end of the legislative term, once all rules of procedure have been followed and it has been formally endorsed by the Council.
TechTarget nicely summarizes the EU AI Act (directly quoted):
- Bans certain AI uses, such as emotion recognition in workplaces and social scoring.
- Implements fines from $8.1 million or 1.5% of global turnover to $37.9 million or 7% of turnover.
- Gives consumers rights to launch complaints.
- Establishes transparency requirements for general-purpose AI systems.
- Creates four risk categories for AI models: unacceptable risk, high risk, limited risk, and minimal risk.
According to professionals quoted in Legal Dive, U.S.-based companies should not ignore the AI Act because “one is hard pressed to find a publicly traded U.S. company that is not impacted by Europe,” and “companies that are following good AI governance practices need not panic.”
G7
On March 15, 2024, industry ministers from the Group of Seven (G7) major democracies agreed to align rules on the development of artificial intelligence, enabling joint investment and ensuring the involvement of small and medium-sized enterprises. In the G7 Ministerial Declaration, they affirmed the importance of joining forces and endorsed the proposed development of a report by the end of the year. The report will focus on: 1) analyzing the driving factors and challenges of AI adoption and development among companies, 2) providing policy options to promote safe, secure, and trustworthy adoption, and 3) ensuring respect for all applicable legal rights, including intellectual property rights and the protection of trade secrets.
In addition, a toolkit will be developed by year's end to help the public sector and other stakeholders translate principles for safe, secure, and trustworthy AI into actionable policies.
Photo by Frederic Köberl on Unsplash