Testing and validating AI-based software systems

The use of artificial intelligence already influences our everyday lives:

Banks assess our creditworthiness using AI-based software, insurance companies use AI to calculate premiums based on individual risks, and medical AI software supports diagnosis, e.g. through AI-based analysis of X-ray images.

The use of such AI-based software for decision support offers enormous advantages: even with very high case numbers and large amounts of data, the AI software can analyze each individual data record (e.g. a customer's insurance application or a patient's X-ray image) and classify it fully automatically. Even after thousands of records, the AI system never gets tired or inattentive and always applies the same criteria.

Such automated decisions can have serious consequences for the customer or patient concerned: for example, if an insurance application is rejected because the AI system predicts a high claims risk, or if a medical diagnosis is wrong because the AI system misclassifies the X-ray image.

AI-based decision systems vs. autonomous systems

Manufacturers as well as users of AI-based decision systems must ensure that their AI software is trustworthy!

Faulty AI decisions can have various causes:

  • the AI processes the data incorrectly because, as with "normal" software, there is a programming error in the code,

  • the AI classifies incorrectly because the classification model being applied is inadequate, or because the AI has "learned" it incorrectly or insufficiently,

  • the AI classifies incorrectly because the underlying data or the data preprocessing is incorrect, inaccurate or incomplete (a test sketch follows this list).
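Classic software QA already addresses the first and third of these causes. As an illustration, here is a minimal sketch of a unit test for a data preprocessing step; the function normalize_age and its value range are hypothetical, invented for this example:

```python
import pytest

# Hypothetical preprocessing step: scale an age in years to [0, 1] so it
# can be used as a model feature. In a real project this function would be
# imported from the production code base, not defined in the test file.
def normalize_age(age_years: float, max_age: float = 120.0) -> float:
    if not 0.0 <= age_years <= max_age:
        raise ValueError(f"age out of range: {age_years}")
    return age_years / max_age

# Plain unit tests: a programming error or a missing input check in the
# preprocessing would surface here, before the model ever sees the data.
def test_normalize_age_maps_bounds_correctly():
    assert normalize_age(0.0) == 0.0
    assert normalize_age(120.0) == 1.0

def test_normalize_age_rejects_invalid_input():
    with pytest.raises(ValueError):
        normalize_age(-5.0)
```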

All these aspects must be checked not only once, in the course of testing and validating the AI-based decision system, but over the entire life cycle of the system:

  • The AI algorithms used must be fundamentally suitable and powerful enough for the intended application.
  • The training data used must be selected and used in a representative, non-discriminatory way (a minimal check is sketched after this list).
  • The system should be able to make its decisions comprehensible and explainable ("explainable AI"). The better the system can "explain" why a decision turned out one way or another, the more confidence the user can place in that decision.
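How the second point can be checked in practice depends on the data; as a minimal sketch, the following checks a hypothetical training data set for subgroup representation and for large gaps in the label rate between subgroups. The file name, column names and thresholds are assumptions of this example:

```python
import pandas as pd

# Hypothetical training data with a protected attribute ("sex") and a
# binary target label ("approved"); names are assumptions of this sketch.
train = pd.read_csv("training_data.csv")

# 1. Is every subgroup represented with a reasonable share of the data?
group_share = train["sex"].value_counts(normalize=True)
assert group_share.min() > 0.30, f"subgroup underrepresented:\n{group_share}"

# 2. Does the label rate differ drastically between subgroups? A large gap
# in the raw data is a signal to investigate before training; it is not
# proof of discrimination by itself.
label_rate = train.groupby("sex")["approved"].mean()
assert label_rate.max() - label_rate.min() < 0.20, (
    f"label rates diverge between subgroups:\n{label_rate}"
)
```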

For further information, please refer to the article "Why AI needs intelligent quality assurance" by Nils Röttger, Gerhard Runze and Verena Dietrich, OBJEKTspektrum, issue 02/2020, and German Testing Magazin, issue 01/2020, pp. 20-24.

Trust in AI-based systems through quality assurance

For AI systems, or software containing AI components, to function correctly, reliably, and in a trustworthy and ethical manner, it is essential that the development and operation of such systems are accompanied by a professional quality assurance process.

Since mid-2019, our imbus AI specialists and other experts from business and society have been working on a roadmap for norms and standards in the field of AI, in a joint project with the Federal Ministry of Economics and Energy (BMWi) under the leadership of the German Institute for Standardization (DIN) and the German Commission for Electrical, Electronic & Information Technologies (DKE).

Artificial Intelligence (AI) for your business

Based on its project experience, imbus has developed a process model for the quality assurance of AI systems. Part of this model is a guideline, already used successfully in projects, that links the individual project phases with concrete QA measures. In this way, we can provide fast, targeted assistance if, for example, the behavior of the AI system develops quality problems because the training data no longer match the production data due to drift or bias (one simple drift check is sketched below).
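How such drift can be made visible depends on the project; as one common, simple approach, a two-sample Kolmogorov-Smirnov test can compare the distribution of a numeric feature in the training data with a window of recent production data. The feature values below are synthetic, and the alarm threshold is an assumption of this sketch:

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins for one numeric feature, e.g. a claim amount, as
# seen in the training set and in recent production traffic (with drift).
rng = np.random.default_rng(seed=0)
train_feature = rng.normal(loc=100.0, scale=15.0, size=5_000)
prod_feature = rng.normal(loc=110.0, scale=15.0, size=5_000)

# Two-sample KS test: a small p-value means the two samples are unlikely
# to come from the same distribution, i.e. a drift alarm for this feature.
result = ks_2samp(train_feature, prod_feature)
if result.pvalue < 0.01:  # alarm threshold is a project-specific assumption
    print(f"possible drift (KS={result.statistic:.3f}, p={result.pvalue:.2g})")
```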

In our projects, we have found that an independent outside view pays off in the AI environment: on the one hand, AI requires extensive expertise in this domain; on the other hand, classic methods of software quality assurance remain useful and necessary even in AI applications. It is precisely this external view that allows undiscovered quality problems, and also ethical issues, to be uncovered and remedied more quickly, or, in the best case, avoided altogether. You also benefit from our extensive experience with integration, performance and robustness testing, which provide further essential building blocks for the quality assessment of AI systems; a sketch of one such robustness check follows below.
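What a robustness check for an AI component can look like varies by system; as a minimal metamorphic-testing sketch, the idea below is that small, meaning-preserving perturbations of an input should not flip the model's predicted class. The scikit-learn style predict interface and the noise level are assumptions of this example:

```python
import numpy as np

def prediction_stability(model, x: np.ndarray, n_trials: int = 100,
                         noise_scale: float = 0.01) -> float:
    """Fraction of small random perturbations of `x` for which the model's
    predicted class stays unchanged (1.0 = fully stable).

    `model` is assumed to expose a scikit-learn style `predict` method
    taking a 2D array; this interface is an assumption of the sketch."""
    rng = np.random.default_rng(seed=42)
    baseline = model.predict(x.reshape(1, -1))[0]
    stable = 0
    for _ in range(n_trials):
        perturbed = x + rng.normal(scale=noise_scale, size=x.shape)
        if model.predict(perturbed.reshape(1, -1))[0] == baseline:
            stable += 1
    return stable / n_trials

# Usage (hypothetical): flag inputs whose classification flips under noise.
# assert prediction_stability(trained_model, sample_input) > 0.95
```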

» Arrange an appointment right here «
... for an initial discussion, e.g. of your test strategy, with our AI testing experts, who, among other things, contributed to the DIN roadmap for AI. In a non-binding first conversation, we will address your specific challenges and can jointly agree on the next steps.

If more in-depth discussions and analyses would benefit you, we would be happy to offer you our Expert Day on the topic of AI. An Expert Day is individually tailored to your needs: in intensive discussions, two of our experts analyze your situation, work out recommendations for improvement and present them in a concluding presentation.

For awareness of the topic of testing and AI and a basic introduction to the corresponding methodology, we recommend our A4Q training "AI and Software Testing Foundation" with certification option.

Contact

Your contact at imbus

Mr. Tilo Linz
