Artificial intelligence: tackling the risks for consumers


Artificial intelligence and automated decision making processes can pose certain threats to consumers. Find out how the European Parliament wants to protect them.

What is artificial intelligence and why can it be dangerous?

As learning algorithms can process data sets with precision and speed beyond human capacity, artificial intelligence (AI) applications have become increasingly common in finance, healthcare, education, the legal system and beyond. However, reliance on AI also carries risks, especially where decisions are made without human oversight. Machine learning relies on pattern-recognition within datasets. Problems arise when the available data reflects societal bias.
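To see how pattern recognition can perpetuate bias, here is a minimal sketch with entirely hypothetical data: a naive frequency-based "model" trained on skewed historical hiring records simply reproduces the disparity present in its training set.

```python
# Hypothetical, illustrative data only: historical hiring records in which
# group "B" was hired far less often than group "A".
# Each record is (group, hired).
history = (
    [("A", True)] * 80 + [("A", False)] * 20
    + [("B", True)] * 20 + [("B", False)] * 80
)

def hire_rate(records, group):
    """Fraction of applicants from `group` who were hired historically."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

def predict_score(group):
    """A naive 'model' that scores applicants by their group's past
    hire rate -- it learns and repeats the historical bias."""
    return hire_rate(history, group)

print(predict_score("A"))  # 0.8
print(predict_score("B"))  # 0.2
```

Real systems use far more complex models, but the failure mode is the same: if the training data encodes a societal bias, a model optimised to match that data will carry the bias forward.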

Artificial Intelligence in decision-making processes

AI is increasingly involved in algorithmic decision systems. In many situations, the impact of the decision on people can be significant, such as access to credit, employment, medical treatment, or judicial sentences. Automated decision-making can therefore perpetuate social divides. For example, some hiring algorithms have been found to be biased against women.

How to protect consumers in the era of AI

The development of AI and automated decision-making processes also presents challenges for consumer trust and welfare. When consumers interact with such a system, they should be properly informed about how it functions.

The position of the Parliament

In a resolution adopted on 23 January, the internal market and consumer protection committee urges the European Commission to examine whether additional measures are necessary in order to guarantee a strong set of rights to protect consumers in the context of AI and automated decision-making.

The Commission needs to clarify how it plans to:
  • ensure that consumers are protected from unfair and discriminatory commercial practices as well as from risks entailed by AI-driven professional services 
  • guarantee greater transparency in these processes 
  • ensure that only high-quality and non-biased datasets are used in algorithmic decision systems

“We have to make sure that consumer protection and trust is ensured, that the EU’s rules on safety and liability for products and services are fit for purpose in the digital age,” said German Greens/EFA member Petra De Sutter, chair of the internal market and consumer protection committee.

Next steps

MEPs will vote on the resolution in mid-February. After that, it will be transmitted to the Council and the Commission. The Commission is expected to present its plans for a European approach to AI on 19 February.

europarl.europa.eu

