
A look at the EU’s plan to reboot product liability rules for AI • ZebethMedia

A recently presented European Union plan to update long-standing product liability rules for the digital age, including to address the rising use of artificial intelligence (AI) and automation, took instant flak from the European consumer organization BEUC, which framed the update as something of a downgrade, arguing EU consumers will be left less well protected from harms caused by AI services than from other types of products.

For a flavor of the sorts of AI-driven harms and risks that may be fuelling demands for robust liability protections: only last month the UK's data protection watchdog issued a blanket warning over pseudoscientific AI systems that claim to perform 'emotional analysis', urging that such tech should not be used for anything other than pure entertainment. On the public sector side, back in 2020, a Dutch court found that an algorithmic welfare risk assessment for social security claimants breached human rights law. In recent years, the UN has also warned over the human rights risks of automating public service delivery. And US courts' use of blackbox AI systems to make sentencing decisions, opaquely baking in bias and discrimination, has drawn criticism for years.

BEUC, an umbrella consumer group which represents 46 independent consumer organisations from 32 countries, had been calling for years for an update to EU liability laws to take account of growing applications of AI and to ensure consumer protection laws are not outpaced. But its view of the EU's proposed policy package — which consists of tweaks to the existing Product Liability Directive (PLD) so that it covers software and AI systems (among other changes), and a new AI Liability Directive (AILD) which aims to address a broader swathe of potential harms stemming from automation — is that it falls short of the more comprehensive reform package it was advocating for.
"The new rules provide progress in some areas, do not go far enough in others, and are too weak for AI-driven services," it warned in a first response to the Commission proposal back in September. "Contrary to traditional product liability rules, if a consumer gets harmed by an AI service operator, they will need to prove the fault lies with the operator. Considering how opaque and complex AI systems are, these conditions will make it de facto impossible for consumers to use their right to compensation for damages."

"It is essential that liability rules catch up with the fact we are increasingly surrounded by digital and AI-driven products and services like home assistants or insurance policies based on personalised pricing. However, consumers are going to be less well protected when it comes to AI services, because they will have to prove the operator was at fault or negligent in order to claim compensation for damages," added BEUC deputy director general Ursula Pachl in an accompanying statement responding to the Commission proposal. "Asking consumers to do this is a real let down. In a world of highly complex and obscure 'black box' AI systems, it will be practically impossible for the consumer to use the new rules. As a result, consumers will be better protected if a lawnmower shreds their shoes in the garden than if they are unfairly discriminated against through a credit scoring system."

Given the continued, fast-paced spread of AI — via features such as 'personalized pricing' or even the recent explosion of AI-generated imagery — there could come a time when some form of automation is the rule, not the exception, for products and services; with the risk, if BEUC's fears are well founded, of a mass downgrading of product liability protections for the bloc's ~447 million citizens.
Discussing its objections to the proposals, Frederico Oliveira Da Silva, a senior legal officer at BEUC, raised a further wrinkle: the AILD makes explicit reference to an earlier Commission proposal for a risk-based framework to regulate applications of artificial intelligence — aka the AI Act — implying that consumers would, essentially, need to prove a breach of that regulation in order to bring a case under the AILD.

Despite this connection, the two pieces of draft legislation were not presented simultaneously by the Commission — there's around 1.5 years between their introduction — creating, BEUC worries, disjointed legislative tracks that could bake in inconsistencies and dial up complexity. For example, it points out that the AI Act is geared towards regulators, not consumers, which could limit the utility of the proposed new information disclosure powers in the AI Liability Directive, given that the EU rules determining how AI makers must document their systems for regulatory compliance are contained in the AI Act. In other words, consumers may struggle to understand the technical documents they can obtain under the AILD's disclosure powers, since the information was written for submission to regulators, not for an average user.

When presenting the liability package, the EU's justice commissioner also made direct reference to "high risk" AI systems — using a specific classification contained in the AI Act — which appeared to imply that only a subset of AI systems would attract liability. However, when queried whether liability under the AILD would be limited to the 'high risk' AI systems defined in the AI Act (which represent a small subset of potential applications for AI), Didier Reynders said that was not the Commission's intention. So, well, confusing much?

BEUC argues a disjointed policy package has the potential to, at the least, introduce inconsistencies between rules that are supposed to slot together and function as one.
It could also undermine the application of, and access to, redress for liability by creating a more complicated track for consumers to exercise their rights. The differing legislative timings also suggest one piece of a linked package for regulating AI will be adopted in advance of the other — potentially opening up a gap in consumers' ability to obtain redress for AI-driven harms in the meantime.
