What are the legal regulations for AI?

One challenge in the use of AI-based systems and applications is that inadequate use of AI gives rise to social, political, or technical risks, which must be regulated so that AI can be used under safe framework conditions. Legislators therefore also need to clarify the legal issues that may arise from the use of AI.

For example, an AI system could discriminate against certain social groups, which is undesirable from a societal perspective and is therefore likely to become subject to new legal regulations.

To this end, the EU submitted a draft regulation establishing harmonized rules for artificial intelligence ("EU AI Regulation") on April 21, 2021.

The EU AI Regulation follows a risk-based approach that differentiates between the following risk categories:

  • Unacceptable risk, e.g. subconscious manipulation of behavior, social scoring, biometric recognition
  • High risk, e.g. critical infrastructure, student assessment
  • Low risk, e.g. chatbots, deep fakes
  • Minimal risk, e.g. video games, spam filters

For more information, see e.g. https://www.digital-recht.at/die-kuenstliche-intelligenz-verordnung-der-eu-uebersicht.
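
Purely as an illustration of this risk-based approach, the following Python sketch maps an AI use case to one of the four risk tiers listed above. The tier names and example use cases are taken only from the list in this article; the mapping and the default behavior for unknown cases are assumptions for illustration, not statements about the legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"  # e.g. social scoring, biometric recognition
    HIGH = "high risk"                  # e.g. critical infrastructure, student assessment
    LOW = "low risk"                    # e.g. chatbots, deep fakes
    MINIMAL = "minimal risk"            # e.g. video games, spam filters

# Hypothetical mapping of the example use cases from the list above to risk tiers.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "biometric recognition": RiskTier.UNACCEPTABLE,
    "critical infrastructure": RiskTier.HIGH,
    "student assessment": RiskTier.HIGH,
    "chatbot": RiskTier.LOW,
    "deep fake": RiskTier.LOW,
    "video game": RiskTier.MINIMAL,
    "spam filter": RiskTier.MINIMAL,
}

def classify_use_case(use_case: str) -> RiskTier:
    """Look up the risk tier of a use case; unknown cases default to HIGH
    so that they are reviewed rather than silently treated as harmless."""
    return EXAMPLE_USE_CASES.get(use_case.lower(), RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("chatbot", "social scoring", "autonomous drone"):
        print(f"{case}: {classify_use_case(case).value}")
```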

In this context, testing whether the requirements of such a legal standard are fulfilled is a "classic" testing topic, comparable to testing whether a web application or a software application meets the legal requirements on accessibility.
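
As a sketch of how such a conformity check could be automated, the following hypothetical pytest-style test asserts that a chatbot greeting discloses that the user is interacting with an AI. The function names, the stand-in system under test, and the disclosure phrases are assumptions made for illustration, not wording taken from the regulation.

```python
# Hypothetical compliance check, written in the style of an automated test.
# get_greeting() is an assumed stand-in for the system under test; the
# disclosure phrases are illustrative, not legal wording.

DISCLOSURE_PHRASES = ("virtual assistant", "chatbot", "artificial intelligence", "AI")

def get_greeting() -> str:
    # Stand-in for the real chatbot under test.
    return "Hello! I am a virtual assistant. How can I help you today?"

def test_chatbot_discloses_ai_use():
    """A low-risk system such as a chatbot is expected to tell users
    that they are interacting with an AI (transparency requirement)."""
    greeting = get_greeting()
    assert any(phrase.lower() in greeting.lower() for phrase in DISCLOSURE_PHRASES)

if __name__ == "__main__":
    test_chatbot_discloses_ai_use()
    print("Transparency disclosure check passed.")
```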
