
TÜV NORD GROUP: "A kind of AI TÜV for more transparency in large language models"

Language models such as ChatGPT or Gemini must fulfil stricter requirements under the next stage of the EU AI Act

[Image: A woman with dark curly hair works at a laptop in a bright, open-plan office.]
12.05.2025

From 2 August 2025, new rules for artificial intelligence will apply in the European Union, intended to give consumers more transparency and safety. "With a kind of AI TÜV, we scrutinise large language models for more transparency. This is how we ensure that AI is safe and trustworthy," says Vasilios Danos, Head of AI Security at TÜVIT. As a subsidiary of the TÜV NORD GROUP, TÜVIT supports companies in implementing these requirements and contributes to the safety and reliability of AI technologies.

The new regulations under the EU AI Act concern so-called general-purpose AI (GPAI) models, i.e. AI systems with a broad range of applications, such as large language models. These include systems such as ChatGPT, Gemini or Llama, as well as applications developed specifically for companies and their particular requirements. Companies must register these systems with the EU and provide detailed technical information. The aim is to disclose the origin of the training data, the computing resources used and the energy consumption in order to ensure the sustainability and transparency of these technologies. "The EU has reacted quickly to the rapid development in the field of artificial intelligence and created a framework that keeps pace with technological developments," explains Thora Markert, Head of AI Research and Governance at TÜVIT.

There is a particular focus on GPAI models with systemic risk, which must fulfil stricter requirements due to their broad application and potential social impact. These include intensive testing and the implementation of countermeasures, i.e. measures against the misuse of AI systems that prevent or minimise unwanted behaviour. With a kind of AI TÜV, TÜVIT carries out such tests to check whether these countermeasures are suitable. "These measures are crucial to ensure the security and reliability of AI systems and to prevent possible manipulation," says Markert.

TÜVIT supports companies in fulfilling the requirements of the AI Act and preparing for the coming stages of regulation: "We offer practical guidelines and audits to ensure that companies fulfil their legal obligations both as providers and users of AI systems," emphasises Markert. TÜVIT began developing and applying initial testing methods for AI applications at an early stage. Together with other leading TÜV companies, the TÜV NORD GROUP is also a partner in the TÜV AI.Lab, which supports the development of standards and regulations for AI applications.

The next stage of the AI Act will come into force in August 2026 and will cover high-risk systems that are not already regulated by other EU directives.

About the TÜV NORD GROUP

Founded over 150 years ago, we stand for security and trust worldwide. As a knowledge company, we have our sights firmly set on the digital future. Whether engineers, IT security experts or specialists for the mobility of the future: in more than 100 countries, we ensure that our customers become even more successful in the networked world.

Contact

Stefan Genz
Corporate Communications, TÜV NORD GROUP
Digital & Semiconductor (IT, Space)