The European Parliament has passed a draft law known as the AI Act, which aims to regulate artificial intelligence (AI) and its riskiest uses.
The draft law would impose new restrictions on facial recognition software and require AI system developers to disclose more about the data used to build their programs. The European Union has been debating AI regulation for over two years, and the issue has gained urgency as concerns have grown about AI's potential effects on employment and society. The United States and other Western governments are also working on AI regulation; the White House has proposed rules for testing AI systems and protecting privacy rights.
China has unveiled draft rules that would require chatbot developers to adhere to censorship rules and give the government more control over AI data usage.
How effective AI regulation will be remains uncertain: the technology is evolving rapidly, and lawmakers are struggling to keep pace.
The latest version of the EU’s bill includes transparency requirements for generative AI systems and safeguards against the generation of illegal content. The EU’s approach to AI regulation is “risk-based”: it focuses on applications with the potential to cause human harm and requires risk assessments before deployment. Tech industry groups warn that overly broad regulations could stifle innovation, while acknowledging the need to address well-defined risks. Debate continues over the use of facial recognition, including whether exemptions should be allowed for national security and law enforcement purposes.