The EU has introduced the Artificial Intelligence Act, which is likely to come into force in stages from late 2024. What looks to be well-meaning legislation around privacy and security will likely trigger adverse reactions in the technology sector and have implications for businesses excited about using AI in innovative ways.
First off, this does look like an interesting opportunity for the UK: being outside the EU, it can carve its own AI path, which looks likely to be more in tune with the American approach than the EU's. What will really happen, especially with an election looming, remains to be seen.
The act classifies AI into risk categories, and the most interesting of these are 'unacceptable' and 'high' risk. Unacceptable covers AI-based products that perform real-time biometric identification, tools that scrape images to build facial recognition databases, and tools that infer emotions from facial expressions in the workplace. This is quite a specific category, so many companies can rest easy knowing it won't affect them.
High risk is more loosely defined and, as a result, could be tricky. Education, employment, healthcare and banking all come under this category, so it includes AI that sorts job applications or approves bank loans.
There are other categories too, and it's worth reading up a little if you're interested. Military and national security uses are exempt from the legislation, so state-controlled facial recognition or emotion recognition is considered OK, which has attracted some criticism, as you would imagine.
Any AI-generated text, images and so on should be labelled as such. Because deepfake creators and misinformation distributors care about the law, right?
Short-term effects will include a mass relocation of AI startups out of the EU, especially those working in the unacceptable or high-risk categories, simply to avoid the hassle and the potential risk of fines and bans. While the world moves fast in the US, China, Israel and other tech hot spots, expect a much slower and more risk-averse EU stance.