Corporate: “A clear legal framework enables economic growth, innovation and competitiveness.”
A commentary by TÜV NORD CEO Dr. Dirk Stenkamp on the EU AI Act
With the entry into force of the EU AI Act, Europe has become the continent that protects people most effectively worldwide against the dangers of artificial intelligence (AI). The regulation lays down uniform safety and risk criteria for the use of AI systems in all EU member states.
Now, at the end of a long political coordination process, we can say that it was worth the wait. Open and controversial debates, countless rounds of negotiations and a thorough exchange of arguments send a clear message: Europe says “yes” to a human-centered and value-based use of AI, and “yes” to the sensible regulation of a technology whose impressive development is only just beginning. After its launch, ChatGPT reached one million users in less than five days and 100 million within two months, a record. In the USA alone, the tech giants Microsoft, Google and Amazon invested 27 billion dollars in AI projects in 2023. It has long been apparent that artificial intelligence and its possible applications are scaling exponentially. But it is just as obvious that AI can not only help people but also harm them: be it by endangering critical infrastructure such as hospitals or energy supplies, or through the mass dissemination of false information via AI-based search engines or social media applications.
As the second important pillar alongside the EU Data Act, the EU AI Act regulates four key areas in order to promote AI-based innovation while protecting fundamental rights. Safety (technical safety) must be guaranteed so that AI systems, automated vehicles being one example, can be operated reliably without endangering people or infrastructure. Security (IT security) is equally mandatory: AI systems must be effectively protected against cyber attacks. Privacy, meaning the protection of personal data and private life, is likewise required by the EU AI Act. Finally, AI systems must comply with ethical principles based on the values of the EU community. The EU AI Act classifies AI applications into four risk classes: minimal, limited, high and unacceptable risk. Harmonized norms and standards are currently being developed at full speed so that the requirements of the EU AI Act can be effectively verified for these risk classes once the various transitional periods of 6 to 36 months have elapsed; non-compliance carries liability risks and fines.
The implementation of the EU AI Act is a mammoth task for AI developers as well as for the companies and institutions that use AI. Testing methods and tools must be developed that make it possible to verify compliance with the required regulations effectively and efficiently. AI providers who start taking the safety requirements and risk criteria of the EU AI Act into account today will gain a competitive advantage. Users, in turn, must be able to trust that AI applications comply with those regulations. For this reason, the TÜV companies in Germany are committed to developing test methods and test criteria that make AI applications safe, transparent and compliant. Whether voluntary for the lower AI risk levels or mandatory for high-risk AI, comprehensive testing of AI applications by independent auditors such as TÜV will contribute significantly to building trust in society and thus accelerate the broad adoption of trustworthy AI.
The EU AI Act also addresses the regulation of AI systems with a general purpose (so-called general-purpose AI). These are powerful AI systems that can be applied to many different problems, such as image and speech recognition, language translation, text generation or answering questions (as in ChatGPT). TÜV companies are already developing effective testing methods for these highly complex AI systems, which are based on neural networks and machine learning with billions of learned parameters, in order to reduce the risk that fake news is disseminated, that the AI itself is attacked, or that ethical principles are violated. The transparency obligation for these pioneering AI systems, on which applications from many other, often small manufacturers and start-ups build, with regard to documentation, copyright protection and the training data used is a historic achievement of the EU AI Act.
The desire of more than 500 million Europeans for peace, security and prosperity shows how closely social and technological development are intertwined. The EU AI Act strengthens these fundamental social values and helps to reduce conflicts and tensions that can arise from the misuse of AI. The clear legal framework encourages companies to develop and use AI technologies in Europe. It enables economic growth, innovation and competitiveness. In addition, the EU AI Act ensures diversity and the protection of the rights of all citizens, regardless of origin, culture or identity.
With human-centered and value-based AI regulation, we have the opportunity, starting from our continent, to make a fundamental contribution to the safety of one of the most important technological developments since the first industrial revolution: artificial intelligence. In this way, “AI made in/for Europe” can also become a seal of quality that stands for trust worldwide.
Founded over 150 years ago, we stand for security and trust worldwide. As a knowledge company, we have our sights firmly set on the digital future. Whether engineers, IT security experts or specialists for the mobility of the future: in more than 100 countries, we ensure that our customers become even more successful in the networked world.