The use of facial recognition for surveillance, and of algorithms that manipulate human behaviour, would be prohibited under proposed EU artificial intelligence regulations.
The extensive plans, leaked ahead of their formal release, promise strict new rules on what the EU considers high-risk AI.
This covers algorithms used in law enforcement and recruitment. Experts say the regulations are ambiguous and contain loopholes. AI used by the military is excluded, as are systems used by the authorities to safeguard public security.
The proposals recommend prohibiting the following AI systems:
- those designed or used in a way that manipulates human behaviour, opinions, or decisions…causing a person to act, form an opinion, or take a decision to their detriment
- AI systems used for indiscriminate surveillance applied in a generalised manner
- AI systems used for social scoring
- those that exploit information or predictions to target the vulnerabilities of an individual or group of people
For AI deemed high-risk, member states would have to exercise far greater oversight, including appointing assessment bodies to test, certify, and inspect these systems.
Companies that deploy prohibited services, or that refuse to provide accurate information about them, would face fines of up to 4% of their global turnover, comparable to fines under GDPR.
Examples of high-risk AI include:
- systems that establish priority in the dispatch of emergency services
- systems determining access to, or assignment of people to, educational institutions
- recruitment algorithms
- those that evaluate creditworthiness
- those used for making individual risk assessments
- crime-prediction algorithms
Mr Leufer went on to say that the proposals should “be expanded to include all public sector AI systems, regardless of their assigned risk level”.
“This is because people typically do not have a choice about whether or not to interact with an AI system in the public sector.”
In addition to requiring human oversight of new AI systems, the EC proposes that high-risk AI systems include a so-called kill switch, which could be a stop button or some other procedure to immediately switch the system off if needed.
“AI vendors will be extremely focussed on these proposals, as it will require a fundamental shift in how AI is designed,” said Herbert Swaniker, a Clifford Chance lawyer.
Sloppy and Risky
The EC has had to walk a delicate tightrope with this regulation, ensuring AI is seen as “a tool… with the ultimate aim of increasing human wellbeing” while not preventing EU countries from competing with the US and China on technological innovation.
It also acknowledged that AI already affects many aspects of our lives.
The details may still change before the regulations are formally unveiled next week.