Artificial intelligence (AI) is no longer a foreign concept in this age of digitalization. In recent years, it has become increasingly important in our everyday lives and has made groundbreaking progress in research. The basic idea of AI is to imitate human cognitive abilities. AI systems rely on complex algorithms that must be tested extensively before release, because depending on the type of AI and the area of application, errors can have serious consequences - for example, in autonomous driving. For reliable quality assurance, testing and consulting, imbus is your partner. Our team has made it its business to help customers develop trust in their software: we advise on and test your projects against professional quality standards - get advice on AI applications now and develop trustworthy artificial intelligence!
With the growing visibility of and demand for artificial intelligence, it is important for us as software testing experts to do our part to support sustainable, trustworthy and secure AI development. With our trainings and workshops, we help you understand the key aspects and test methods. We are also happy to formulate and implement tests for you. Which test methodology we use for your AI depends on the type of product: because AI involves complex systems that differ from product to product, there is no single generic test method, and often a combination of tests is applied. Well-known techniques such as white-box and black-box testing can be combined, and we also use more modern methods such as metamorphic testing. It is very important to us that our customers can have confidence in their products, because even in the technical world of software and AI there are requirements and standards that are regulated by law and must be met. For this reason, we not only support you in the execution of tests, but also advise you extensively on the available procedures and which is best suited to your project. We are your experts for the planning, specification and execution of tests for AI-based systems.
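To illustrate the idea behind metamorphic testing, here is a minimal, hypothetical sketch in Python (not imbus tooling; function and sample values are illustrative): instead of checking an absolute expected output, which is often hard to state for complex systems, we check a relation that must hold between outputs - here, sin(x) = sin(π - x).

```python
import math

def metamorphic_test_sine(f, samples):
    """Check the metamorphic relation f(x) == f(pi - x) for a sine implementation.

    Useful when the exact expected output (the "test oracle") is hard to state:
    instead of comparing against absolute values, we verify a relation
    between outputs for transformed inputs.
    """
    failures = []
    for x in samples:
        if not math.isclose(f(x), f(math.pi - x), abs_tol=1e-9):
            failures.append(x)
    return failures

# Usage: the relation holds for math.sin, so no failures are expected.
print(metamorphic_test_sine(math.sin, [0.1 * k for k in range(20)]))  # → []
```

The same pattern carries over to AI systems, for example checking that a classifier's prediction does not change when semantically irrelevant noise is added to the input.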
Contact us and get more information about our approach to AI quality assurance. imbus is your partner for software testing and quality assurance. We test your AI so that you can develop confidence in your product and ensure compliance with all important standards and norms for AI.
The multifaceted historical development of artificial intelligence has also shaped the definition of the term. The term "artificial intelligence" was coined by the computer scientist John McCarthy in 1955 - he is considered one of the pioneers of AI research. But what exactly is artificial intelligence? According to a definition by the Fraunhofer Institute for Cognitive Systems IKS, AI systems use machine learning and programming, among other things, to recognize and sort information in order to mimic human abilities. Machine learning happens primarily through one thing: repetition. To imitate human cognitive behavior, the processing structures of AI systems are modeled on the neural structures of the human brain.
Such a system contains a kind of neural network, inspired by the nerve cell connections in the human brain. In this network, constant repetition bears fruit: the system learns to classify input data correctly. Intelligent systems can now be found in a wide variety of everyday areas, for example in our smartphones. Another exciting development is autonomous driving, which is already established on our roads in the form of automated driving. In some countries there are even driving services that take customers from A to B without a human driver, although this is not yet permitted in Germany. Nor are delivery drones or food-delivery robots novelties any longer.
The term Artificial Intelligence also refers to the complex field of research around such systems, which is constantly evolving. It can be divided into different subareas and methodologies. Methodologically, a distinction is made in the field of artificial intelligence between symbolic and subsymbolic AI. Subsymbolic AI is more complex than symbolic AI and thus additionally subdivided into different subareas.
Symbolic AI is the classic approach to the development of artificial intelligence. These systems belong to the early days of research and have their origin in the 1950s. They fall under the category of weak AI because they are designed for specific tasks and do not extract knowledge from data themselves. In fact, symbolic AI cannot function without human intervention, because it draws its knowledge from a programmed "expert system" created by humans. By accessing structured data sets, the AI can search for a suitable solution to its task. This approach is now considered outdated and is therefore sometimes referred to as GOFAI (Good Old-Fashioned AI); it can be found in speech and text recognition, for example. In short, symbolic AI only works by embedding human knowledge in a computer program.
Subsymbolic AI is significantly more complex and also more intelligent than symbolic AI. Here, the focus is on the system learning independently - without human intervention. With the help of neural networks, the machine can learn about various problems and find suitable solutions. Subsymbolic AI is better suited to complex problems involving large amounts of data: it enables the machine to recognize patterns in non-linear contexts and in unstructured data such as images, speech or sensor information. Compared to symbolic AI, it relies less on human experts because it can learn from the data itself.
However, subsymbolic AI also has its limitations. It has difficulty providing explanations for its decisions because it is based on complex statistical models. In addition, it can be susceptible to faults and adversarial attacks because it relies less on formal rules. Overall, however, subsymbolic AI is enabling progress in areas such as image recognition and natural language processing. This methodology is now considered the more promising model for the future, as here the machine actually demonstrates independent intelligence.
Subsymbolic AI can be divided into different subareas, which are all interrelated. Thus, the term Artificial Intelligence is the supercategory of all subfields. Within it are neural networks, machine learning, and deep learning. Here we provide you with an initial overview of quality assurance and software testing procedures in connection with artificial intelligence - Arrange a consultation appointment now!
We have already briefly touched on machine learning in our definition. As the name reveals, this is about the actual learning of a machine. As with us humans, this happens mainly by repeating and applying known information; in this way the system gains experience that it can apply in future processes. In machine learning, algorithms are trained to recognize patterns in data sets and to establish correlations between them. A crucial point is the amount of data fed into the system: the more information, the more correlations the algorithm can recognize and the more accurate the results, since the AI processes data in order to deliver results or forecasts. The special feature here: according to the Fraunhofer IKS, no solution path is predefined - the algorithm finds its own path based on its experience.
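The "learning by repetition" described above can be illustrated with a minimal, hypothetical Python sketch (a toy model, not a real-world system): a single weight is adjusted a little after every pass over the data until it captures the pattern.

```python
# Toy linear model y ≈ w * x, trained by gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs with target y = 2x

w = 0.0            # initial guess
lr = 0.05          # learning rate: how strongly each error adjusts w
for _ in range(200):             # repetition: many passes over the data
    for x, y in data:
        error = w * x - y        # how far off the current prediction is
        w -= lr * error * x      # nudge w to reduce the squared error

print(round(w, 3))  # → 2.0: the pattern in the data has been learned
```

Real systems adjust millions of such weights at once, but the principle - predict, measure the error, adjust, repeat - is the same.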
For machine learning to work properly, an algorithm must be equipped accordingly. Neural networks are currently the most widely used algorithms for this purpose; they are modeled on the nerve cell connections of the human brain. These networks consist of many layers of nodes that are linked together. The basis for learning is, again, repetition: the network learns to classify data correctly and, in the event of errors, to adjust the weights of the individual connections between the layers. This is repeated until certain quality criteria are met, including functional performance metrics such as accuracy, precision or sensitivity, as well as robustness and performance. The neural network is the backbone of deep learning algorithms.
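The quality metrics mentioned above can be made concrete with a small, hypothetical Python sketch (function name and data are illustrative, not part of any standard) that computes them from a binary classifier's predictions:

```python
def quality_metrics(y_true, y_pred):
    """Compute accuracy, precision and sensitivity for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),   # share of correct predictions
        "precision": tp / (tp + fp),           # how trustworthy positive calls are
        "sensitivity": tp / (tp + fn),         # share of true positives found
    }

# accuracy 0.6, precision and sensitivity ≈ 0.667 for this toy example
print(quality_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```

In a test campaign, thresholds for such metrics would be agreed up front as acceptance criteria; training is then repeated until they are met.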
Deep learning is also part of machine learning and primarily uses complex neural networks and large amounts of data. In practice, deep learning is mainly used to understand texts and recognize images, but it has very broad application potential. With the help of deep learning, algorithms can solve complex tasks and problems with sufficient training, often faster and more effectively than a human. As the process is very computationally intensive and based on repetition, it can take months for an artificial intelligence to find correct solutions or make good decisions.
The historical development of AI began in the 1950s. Researchers such as John McCarthy and Marvin Minsky explored the question of how machines could exhibit intelligent behavior. The first algorithms for symbolic artificial intelligence were developed in those years and tested on the basis of electrical circuits. The Turing test also dates back to this time; it was designed to find out whether a machine possesses human intelligence or can imitate it. Around ten years later, progress was also made on the technical side: with the help of computers and transistors, the first steps towards programming an AI were taken in the 1960s. In the following years, the methodology of symbolic AI was expanded; the principle of the expert system was particularly influential in research in the 1970s, and the early conversational program ELIZA is a well-known example from this era. In the history of AI, however, there were repeatedly periods in which research came to a standstill, the so-called "AI winters". In the 1980s, the symbolic approach reached its limits and it became clear that this methodology alone was not capable of imitating human intelligence. Accordingly, new methods were sought that could generate human-like intelligence by machine. In the 1980s and 1990s, neural networks, and with them machine learning, steadily moved to the forefront of research: subsymbolic AI. In 1997, a chess computer based on artificial intelligence, IBM's Deep Blue, gained media attention by defeating the reigning world chess champion. Around 2010, it became possible to make greater use of inexpensive hardware with high storage capacity and computing power and thus collect sufficient data, and research into AI and machine learning expanded further.
The Turing test was proposed in 1950 by the scientist Alan Turing. It is designed to check whether an artificial cognitive system is comparable to a human being: an ongoing conversation is used to determine whether the system can give answers that are indistinguishable from those of a human. But how exactly does this work? Three parties are involved in the test:
- Person A: The artificial intelligence being tested
- Person B: The human team partner of the artificial intelligence
- Person C: The human tester who asks the questions
The test takes the form of an ongoing conversation in which person C (the tester) is physically separated from person A (the AI) and person B (the human). During the test, both A and B try to convince person C that they are thinking humans. If person C cannot clearly distinguish the human from the artificial intelligence after the time has elapsed, the test is passed.
However, a number of requirements must be met to ensure that the test is fair and orderly. First and foremost, the tester must be physically separated from the tested parties so that an unbiased assessment is possible. In addition, a format is defined in advance that determines how the tester asks the questions. The context and subject area must also be defined in advance so that machine and human answer on a fair basis. Finally, a fixed time window must be set in which the test is carried out. Only with these specifications is the test fair and can the tester draw a conclusion afterwards.
Criticism of the Turing test
However, there are also voices in research that sharply criticize the Turing test. One of them belongs to the philosopher John Searle, who primarily criticizes that the test does not examine consciousness. He is not alone in this opinion: other critics also hold that the Turing test only tests the functionality of an artificial intelligence and does not prove whether the system actually possesses a consciousness comparable to that of a human being.
It is also criticized that systems can be programmed to deceive the interlocutor, so that no real intelligence or cognitive abilities are necessary. In June 2014, a chatbot called Eugene Goostman was reported to have passed the Turing test for the first time. However, this is exactly what critics point to: the machine was prepared with various algorithms and strategies to convince the conversation partner that it was a real person.
In addition, the test quickly becomes unreliable, as computers often reveal their lack of social intelligence and thus give themselves away in sensitive areas. They also betray themselves as machines when they answer complex questions faster than a human could ever explain them.
Successors to the Turing test
This criticism has since given rise to other test procedures that expand considerably on the Turing test. One example is the Lovelace test, which primarily demands creativity from an artificial intelligence: the system is required to perform tasks for which it was not programmed. This is intended to reveal whether the system has a consciousness of its own.
In addition to the Lovelace test, the Metzinger test is a method that puts the consciousness and memory of an artificial intelligence to the test. Here, the system must actively take part in a discussion and convincingly argue for its own theory of consciousness.
Consciously or unconsciously, artificial intelligence is firmly integrated into our world and regularly used by us humans. Whether it is recommendations from streaming providers, personalized advertising or voice assistants - there is a good chance that you have already encountered artificial intelligence in your everyday life. From facial recognition on smartphones to machine translation services such as Google Translate, AI and its work can be found in many places. Chatbots in customer service are another example: if the AI cannot help, it forwards contact options so that an employee can provide advice. OpenAI's chatbot ChatGPT has also attracted a great deal of attention within just a few months; it can perform a wide variety of tasks based on user input. Among the best-known AI applications, already used by many people around the world, are systems that make autonomous driving possible. Are you particularly interested in autonomous driving? Then find out more about our HolmeS3 research project, which focuses on safeguarding autonomous vehicles through scenario-based testing. With this type of artificial intelligence in particular, it is important to prioritize safety and trustworthiness - achieved above all through highly professional testing and sustainable quality assurance. As experienced software testers, we are able to test your AI!
Artificial intelligence is increasingly being used in companies as a work aid. It is already employed in a wide range of applications to streamline work processes and relieve employees. After all, AI can automate certain processes and thus save companies considerable time and money. But before you integrate AI into your company, you should consider the possible applications and the benefits it brings. The German Research Center for Artificial Intelligence (DFKI) conducts research that also addresses the benefits of AI in a wide variety of application areas. According to the DFKI, the following industries are well suited to the application of AI:
- Medicine and care
There are big plans for integrating artificial intelligence into companies. The general goal is to use new technologies such as AI to make work processes faster, more effective, less expensive and more secure. For example, AI can be a great help in evaluating large amounts of data, in fraud detection or in system security. Detailed information and research projects on the possibilities of this technology in specific work sectors can be found on the DFKI website. Companies can already use artificial intelligence for a wide range of tasks and relieve employees: AI can be a helpful teammate in logistics, customer service or marketing, for example.
imbus TestBench - the smart management system
At imbus, we work every day to test and maintain the software quality of our customers. To this end, we have also developed our own smart test management system, TestBench, which is used by companies that want to develop high-quality products. The management solution helps with manual and automated software tests, among other things.
For over 30 years, imbus has stood for trustworthiness and security in the context of software testing. We have made it our business to accompany and support companies in the development of software with our testing and consulting services. The growing interest in Artificial Intelligence has not gone unnoticed in our company - that is why we offer testing, consulting and also training specifically for companies that are enthusiastic about Artificial Intelligence, or are even developing AI on their own. As the digital landscape is constantly evolving and making tremendous strides, we want to help companies produce trustworthy as well as secure Artificial Intelligence. First and foremost, we provide consulting services for this purpose. We want you to understand what constitutes the quality of an AI and how to achieve it. In parallel, we are specialists in software quality testing and are your contact when it comes to accompanying development and providing you with detailed advice. Our services include the testing of AI, the formulation as well as the realization of tests to ensure reliable quality assurance - do not hesitate to contact us personally, we will be happy to answer any questions you may have!
In recent years we have learned that our customers also want to build up know-how in-house. That is why we are proud to offer training courses and workshops in our academy, among them the course for the "ISTQB® Certified Tester AI Testing" certificate, in which we cover important AI content step by step and go into testing methodology for AI. The course provides insights into AI, explains the associated quality characteristics as well as the development and testing of AI systems, and gives participants the current state of and trends in AI testing. Submit a course request now and soon become a certified ISTQB® Certified Tester AI Testing!
Are you looking for a suitable partner to support you in this area? Regardless of whether you want to develop an AI or test with AI, we can help and advise you on your project. We also help you to implement the right quality measures and create trust in your product.