Artificial intelligence (AI) is finding its way into more and more areas of application, whether in autonomous driving, in supporting marketing and sales, or in detecting faulty parts; even works of art are now being created by AI.
The use of artificial intelligence offers great opportunities, but it also poses risks. In any case, before deployment, AI-based applications and systems must be tested against compliance and regulatory requirements just as rigorously as any conventional (in the sense of non-AI) system.
This is because AI-based applications and systems, like any other system, must meet user expectations, fulfill contractual obligations, and comply with regulatory requirements and standards. If, in addition, lives are at stake, safety must be ensured, as in any other safety-critical system. All of this must therefore be verified and tested accordingly.
This is an issue the industry will face in the foreseeable future. There have already been fatal accidents involving autonomously driving vehicles: the Süddeutsche Zeitung reported on one such case on 19 March 2018 (https://www.sueddeutsche.de/wirtschaft/kuenstliche-intelligenz-frau-stirbt-bei-unfall-mit-autonomen-auto-von-uber-1.3913385).
The same article points to another case from 2016: electric car maker Tesla has been involved in several accidents that occurred while its cars were driving in so-called Autopilot mode.
Even if the manufacturers point out that the system is only meant to assist the driver, who remains responsible and must be ready to intervene manually at any time, fully autonomous AI-based systems will be deployed in the foreseeable future for various reasons. By then at the latest, such incidents must no longer occur. Instead, autonomous vehicles must have been tested as safely and as intensively as conventional systems.
Are you looking for a suitable partner to support you on this topic? Whether you want to develop an AI system or test with the help of AI, we can help and advise you on your project. We also help you implement the right quality measures and build confidence in your product.
Different testing methods are needed for different AI systems. For example, an expert system in which physicians' experience has been encoded as a set of rules must be tested differently than a system that learns to detect fraud in lending from a very large number of data sets.
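The contrast can be sketched in code. The following is a minimal, purely illustrative comparison; all rules, thresholds, and data are invented assumptions, not taken from any real medical or lending system. In the rule-based case the encoded expertise is explicit, so each rule can be tested directly against the specification; in the learned case the behavior is derived from training data, so testing shifts toward data quality and statistical accuracy on held-out cases.

```python
# 1) Rule-based expert system: the logic is explicit and inspectable,
#    so every rule can be exercised by a targeted test case.
def diagnose(temperature_c: float, heart_rate: int) -> str:
    """Toy diagnostic rules (invented for illustration)."""
    if temperature_c >= 38.0 and heart_rate > 100:
        return "fever with tachycardia"
    if temperature_c >= 38.0:
        return "fever"
    return "no finding"

# 2) Learned system: a toy 1-nearest-neighbor fraud check whose behavior
#    emerges entirely from the training data below.
def nearest_label(sample, training_data):
    """Return the label of the training example closest to `sample`.
    training_data: list of ((loan_amount, prior_defaults), label)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_data, key=lambda item: dist(item[0], sample))[1]

# Invented training data: (loan amount, prior defaults) -> label.
training = [((1_000, 0), "ok"), ((200_000, 3), "fraud"),
            ((5_000, 0), "ok"), ((150_000, 2), "fraud")]

print(diagnose(39.0, 110))                  # rule fires deterministically
print(nearest_label((180_000, 2), training))  # outcome depends on the data
```

Testing `diagnose` means covering each branch; testing `nearest_label` means curating representative training and evaluation data, since changing a single training example can change its verdicts.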
Therefore, we first need to define what exactly we mean when we talk about quality assurance and testing of AI.
There are different types of AI (see the following graphic), but when people talk about AI, they usually mean machine learning.