When a Sticker is a Safety Threat

With the increasingly widespread use of artificial intelligence (AI), both the opportunities and the risks of its use in technical applications are growing. The TÜV NORD GROUP has recognised this potential and, as a pioneer in the market, is already offering AI testing services.

The voice assistant built into our smartphones instantly answers a pressing question; a technical text in another language is translated in seconds. Artificial intelligence is no longer a distant vision: it has long since found its way into our everyday lives. Attempts to transfer human learning and thinking to computers with the help of data and software, and to allow technology to solve problems independently, are proving successful on an ever larger scale. Transport vehicles trundle autonomously through factory halls, and machines detect quality defects with the aid of image recognition software. Self-driving vehicles are likely to become a fixture of life in towns and cities in the near future. More and more companies around the world are working to refine all these convenient new functions using the right training data for machine learning.

People want secure AI applications

When it comes to the future of AI, the fear often arises that beings like the Terminator could take control. While this will probably remain in the realm of science fiction, there are some very real risks that companies need to prepare for. As the use of AI spreads ever more widely, the risks posed by faulty AI systems, data misuse and cyberattacks will also increase. Dr. Dirk Stenkamp, Chairman of the Board of Management of TÜV NORD AG and acting President of the TÜV-Verband (TÜV association), sees artificial intelligence at a turning point: “We’re now seeing the move from abstract ideas to specific applications. Only through trust in AI can markets develop and AI investments turn into products,” explained Dr. Stenkamp at the TÜV AI Conference in October 2021 in Berlin.

This point is also underlined by a representative study by the TÜV-Verband on the safety of artificial intelligence: 78 percent of people in Germany would like to see AI governed by laws and regulations, and 85 percent believe that AI products should not be launched onto the market until their safety has been independently verified.

How should a political AI strategy be designed? This was the topic discussed by representatives from politics and business at the AI Conference of the TÜV-Verband on 27 October 2021 in Berlin.

“Only through trust in AI can markets develop and AI investments turn into products.”

Dr. Dirk Stenkamp

At the AI Conference, Dr. Dirk Stenkamp explains why regulations are the basis for the functional and ethical security of AI.

Innovation award for new AI testing method

This is exactly where the TÜV NORD GROUP comes in, with the work of TÜV Informationstechnik (TÜViT). As an independent testing service provider for IT security, TÜViT is an international leader and represents the entire IT business unit, one of six in the TÜV NORD GROUP. Driven by its vision of trustworthy and secure AI, the TÜViT team in Essen has been working on the topic since 2018, explains Dr. Henning Kerstan, Senior IT Security Expert at TÜViT. “From the very beginning, we’ve asked ourselves two questions: What do we need to be able to test AI applications? And what would attackers do?”

In specific terms, Dr. Kerstan’s team is working on a testing tool for AI and, built on it, a methodical evaluation scheme for analysing the security of AI applications. For this work, Dr. Kerstan and AI specialists Vasilios Danos and Thora Markert were honoured with the Innovation Award at the DIGITAL Leadership Convention of the TÜV NORD GROUP in June 2021. TÜViT’s own testing tool, developed on the initiative of Vasilios Danos, uses special tests to measure the robustness of AI – the very quality that determines whether and how well an AI algorithm can withstand hacker attacks or natural disruptions such as weather events. Embedded in the corresponding evaluation scheme, these test results allow the experts to make a structured statement about the security properties of an AI system.
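TÜViT has not published the internal workings of its tool, but the basic idea behind such a robustness measurement can be sketched in a few lines: expose a trained model to perturbed variants of known inputs and record how often its predictions remain correct. The following Python sketch is purely illustrative; the function name, the random-noise model and the parameters are assumptions for the example, not TÜViT’s actual method.

```python
import numpy as np

def robustness_score(model, inputs, labels, noise_levels, trials=100, seed=0):
    """Average accuracy of `model` under random input perturbations.

    model: a callable mapping a batch of inputs to predicted labels.
    Illustrative sketch only -- not TUViT's evaluation scheme.
    """
    rng = np.random.default_rng(seed)
    scores = {}
    for eps in noise_levels:
        acc = 0.0
        for _ in range(trials):
            # Add Gaussian noise of strength eps to every input
            noisy = inputs + rng.normal(0.0, eps, size=inputs.shape)
            acc += np.mean(model(noisy) == labels)  # fraction still correct
        scores[eps] = acc / trials
    return scores
```

A sharp drop in such a score as the noise level rises would flag a model that fails in situations humans would consider harmless.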

Large amounts of data form the basis of AI applications. An essential characteristic of AI, or more precisely of its machine learning sub-field, is the recognition of patterns in such data. This is what the system learns in the training phase: in supervised learning, an AI is shown labelled images of humans and animals, after which it can recognise a cat, for example.

However, as Vasilios Danos discovered while examining a smart surveillance camera, a cat image tattooed on a burglar’s arm can become a disruptive factor for a system that is supposed to use image recognition to protect against intruders. If no such pictures were encountered during training, the thief with the cat tattoo may be classified overall as a “cat” and, consequently, not trigger an alarm. While such obvious mistakes are still easy to understand, “even the smallest changes that are barely perceptible to the human eye can lead to a mistaken decision and thus to serious errors,” says Dr. Henning Kerstan. This fundamental weakness in the robustness of machine learning is precisely why it needs to be tested: otherwise, a small sticker can suddenly turn a stop sign at a busy junction into a right-of-way sign. “AI systems can fail in situations that we humans perceive as humdrum.” The extent to which an application can withstand such manipulations therefore needs to be carefully examined.
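The sticker on the stop sign is a classic adversarial example: a perturbation that is barely visible to a human but decisive for the model. A standard way of constructing such perturbations in research is the fast gradient sign method (FGSM); the PyTorch sketch below shows the principle and makes no claim about the specific tests TÜViT runs.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.01):
    """Shift input `x` by `eps` in the direction that increases the loss.

    Even a tiny eps can flip the model's prediction -- the digital
    equivalent of a small sticker on a stop sign. Illustrative sketch.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + eps * x.grad.sign()        # one gradient-sign step
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixel values in range
```

A robustness test can then check whether the model’s prediction for the perturbed input still matches the original label for all perturbation strengths up to a given eps.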

AI in mobility

More security for artificial intelligence (AI) in road traffic: the Federal Office for Information Security (BSI) and the technology company ZF Friedrichshafen AG have launched a joint project to investigate how the IT security of AI systems in cars can be tested according to generally accepted criteria. TÜV Informationstechnik (TÜViT), the TÜV NORD GROUP company specialising in IT security, is also involved as a project partner.

The findings from the project will be used in the medium-term development of test criteria for AI systems in the form of a modular technical guideline. This may influence the future development of security tests for cars and international standardisation efforts.

“We’re helping manufacturers to enhance and objectively demonstrate the security of their AI technologies.”

Dr. Henning Kerstan

Test procedures are becoming more fluid

TÜViT’s collective expertise is creating a genuine competitive advantage for the TÜV NORD GROUP in the increasingly attractive market for testing and certification providers: the Group is already working with pioneering companies, for example from the automotive sector, and is thereby turning its vision into real-world added value. “We’re helping manufacturers to enhance and objectively demonstrate the security of their AI technologies,” says Dr. Kerstan. “We’re already offering testing services that future regulation is going to make mandatory as part of a holistic system testing process.”

When testing and measuring the various AI applications, the TÜViT specialists can draw on their wealth of methodological experience as inspectors, but they also have to master completely new testing tools and analytical methods. A great deal of expertise is required to apply and evaluate the tests. “It isn’t just a matter of ticking a box once you’ve used a measuring device,” reports Dr. Kerstan. Since the procedures are constantly evolving and every use case is different, testing and certification work will also require dynamic procedures in the future. “We’re going to have to adopt a completely new mindset: there’s no longer a quality seal for the status quo; instead, the testing methods are becoming more fluid.”

 

Putting people first

These developments also require a political response. At the European level, the findings of the AI White Paper are feeding into the specific design of a regulation; in Germany, the work continues under the government’s AI strategy. With the newly founded TÜV AI Lab, the TÜV-Verband is actively involved in the debate on the technical and regulatory requirements for AI. The development laboratory, jointly operated by the TÜV companies, is intended to track and support the development of standards for the testing of safety-critical AI applications. “The TÜV AI Lab is going to make an important contribution to making AI safer and allowing its use in safety-critical areas,” says Dr. Dirk Stenkamp. These include, for example, automated vehicles, assistance systems in medicine, and mobile robots. Moreover, AI systems have to meet certain requirements when fundamental rights such as privacy or equal treatment are at risk. “With the development of suitable testing methods, we’re supporting the regulation of AI which the majority of people want and providing practical examples of its use,” says Dr. Stenkamp.

AI is a game changer which has left policymakers floundering in its wake – which is why the regulatory framework needs the kind of elasticity called for by the experts at the Berlin TÜV AI Conference in October. At the same time, Dr. Dirk Stenkamp warned against focusing too much on risk in the evaluation of AI: intelligently selected training data can reduce latent discrimination, for example when it comes to granting loans, and it will be virtually impossible to fend off cyberattacks without the support of artificial intelligence. “Regulation should therefore be as open as possible to technological developments,” urges Dr. Stenkamp. “AI should be functionally secure and respect civil rights. We need to put people first here too.”

 

Alongside the newly founded TÜV AI Lab, the TÜV-Verband is also acting as a driving force with its proposal for an AI Testing Hub as a central national point of contact for artificial intelligence.