Hi there,
On April 21st, the European Union unveiled its draft proposal for the regulation of AI. The policy aims to prohibit or regulate high-risk AI systems that are detrimental to individuals. Among the practices deemed high-risk are biometric surveillance, job or loan applicant filtering, and social scoring. Just as with the GDPR, failure to comply would lead to heavy fines and sanctions (the draft mentions fines of up to 6% of annual global turnover). Once adopted by the Parliament, it would likely become law around 2024.
Many industries will be affected: healthcare, banking, insurance, education, and more. The broad scope of AI raises questions about the real-world effects of such a policy for AI professionals. The upcoming dialogue between policymakers, industry experts, AI practitioners, and researchers will be crucial. Regulators will need to thoroughly outline the outcomes and uses they seek to regulate if they want to avoid the "unhelpful vagueness" that some have already pointed out in the draft.
As they did with personal data protection, the European authorities have decided to set the bar for the regulation of AI. No doubt it will inspire other jurisdictions: the Federal Trade Commission in the US has already announced it would go after discriminatory AI applications. And just as with personal data protection, proactivity and "ethical-by-design" AI may be the way for enterprises to future-proof their AI development.
Below, we've shared our favorite reads from the privacy and data protection worlds over the past month.
Happy reading!
The Statice team