AI Act passed: What can startups expect now?

What is artificial intelligence allowed to do – and what not? After years of negotiations, the EU is finally providing answers to these questions. The AI Act is intended to be the first comprehensive set of rules for the use of AI. In this article, we show what startups need to consider in the future and what opportunities and challenges the new rules bring with them.

What is the AI Act?

The AI Act is an EU regulation governing artificial intelligence (AI). It aims to establish a clear framework for the development and use of AI for the first time. Although the US and China already have initial approaches of their own, the European draft is the most comprehensive set of regulations to date. The first proposal for the regulation was presented back in 2021, but tough negotiations on the details followed within the EU. On December 9, 2023, the negotiators finally agreed on the key points. Alongside the Future Financing Act, the AI Act is thus the second major piece of legislation that particularly affects the work of startups.

Risk system: What is regulated in the AI Act?

The core of the AI Act is a risk-based approach. AI applications are divided into four risk levels. The principle: the higher the risk, the stricter the requirements for the providers of the models. In the event of violations, they will face heavy penalties in the future. The following risk classes are distinguished:

  • Prohibited AI applications: The AI Act provides for general bans on certain AI applications. In particular, this concerns the indiscriminate surveillance of people in public spaces. One example is the US startup Clearview AI, which scrapes facial images from social media and stores them in a huge database. The EU wants to put a stop to this practice, but at the same time there will be exceptions: in the event of acute danger – for example, terrorist attacks – the targeted search for specific suspects is to remain permitted.
  • High-risk AI applications: Strict conditions will apply to so-called high-risk systems. These include all applications that could affect safety, health or critical infrastructure – for example, AI used in autonomous vehicles or medical devices. Use in the human resources sector – such as filtering job applications – also falls into this category. Providers of these models are obliged to continuously monitor their systems and to inform the authorities about potential errors. To this end, they must not only set up a risk and quality management system but also create precise technical documentation for the AI system.
  • AI applications with limited risk: The rules are less strict for AI systems with limited risk. This primarily refers to chatbots or the creation of so-called deepfakes. The most important requirement here is transparency: providers of the models must always inform their users that they are interacting with an AI system. This would affect, for example, the Düsseldorf-based startup Cognigy, which offers companies an AI bot for customer inquiries. In the future, deepfakes will also have to be clearly labeled as such. This is relevant, for example, for the Berlin-based startup Brighter AI, whose deep learning software can replace the faces of people in videos.
  • AI applications with minimal risk: No specific requirements are planned for minimal-risk AI systems. In principle, this applies to all applications that do not fall into one of the three categories mentioned above. These include, for example, AI-based video games or spam filters.

GPT, LLaMA and Co.: What rules apply to the large foundation models?

One of the biggest points of contention in the AI Act has been the regulation of large AI foundation models such as GPT from OpenAI. The challenge: other applications can build on these models – including via open-source approaches – and use them for a wide range of purposes, which is why they are referred to as General Purpose AI (GPAI).

Here, too, the EU differentiates according to risk: in principle, providers are obliged to make their training data and test procedures transparent. These comparatively relaxed requirements apply above all to models that are made available under an open-source license. Foundation models that pose a systemic risk, however, must meet higher requirements in terms of risk management and cybersecurity. The decisive factor for this classification is the computing power used to train the models. According to an estimate by the German AI Association, the foundation model of the German soonicorn startup Aleph Alpha, for example, currently falls below this threshold.

What are the implications of the AI Act for startups?

The AI Act primarily lays down clear rules for the development and use of AI models. This affects not only the providers of large foundation models, but also startups that develop new business models on top of them. At the same time, AI is expected to become an even stronger driver of economic growth in the future. Specifically, the AI Act therefore presents startups with the following opportunities and challenges:

  • Promoting innovation by startups: In order to keep the door open to new business models for startups, the EU has agreed in principle to relaxed rules for open-source models. The AI Act also provides for so-called “regulatory sandboxes”: in controlled environments, companies are to have the opportunity to test new AI applications before they are launched on the market. This should give startups in particular the chance to further develop their innovations under real-life conditions. The Fraunhofer Institute, for example, offers a comparable environment for testing 5G technologies, and the Founders Foundation’s model project in Bielefeld schools is likewise designed to open up new business areas for startups.
  • Strengthening the trust of users: The AI Act aims to increase public trust in AI applications. Transparency obligations, which apply, for example, to the use of chatbots, should help to achieve this. Ultimately, greater trust in AI models can help to ensure that even more users take advantage of the services in the future. Startups that offer transparent and ethical AI solutions can benefit from this.
  • Adapting to new regulations: It is clear that startups will have to adapt their AI applications to the AI Act in the future. How much effort this takes depends on the respective use case. For example, startups developing a system for the medical sector are operating in a high-risk area and are therefore subject to stricter requirements. Adapting to the new regulations can then also entail additional costs. However, if they build on an open-source foundation model, its provider will already be obliged to lay the groundwork for the compliance requirements – something the large providers are not yet required to do today.

When does the AI Act come into force?

Now that the EU has agreed on the key points of the AI Act after long negotiations, the European Parliament and the member states still have to formally approve the project. The final legal text is therefore still pending. In any case, the regulation is expected to be adopted before the European elections in June 2024. The AI Act would then come into full force two years after its adoption, i.e. in 2026.

Outlook: the AI Act as a blueprint?

The AI Act creates a groundbreaking framework for the future use of AI. Startups should therefore familiarize themselves with the new regulations early on and adapt their business models – the sooner the better. At the same time, the major providers of foundation models must also respond to the new rules and ensure greater transparency for their open-source solutions. Ultimately, the AI Act can also strengthen trust in AI, from which AI startups stand to benefit. The regulation makes the EU an international pioneer in AI regulation. Even if the AI Act initially only applies in the EU, it could therefore serve as a blueprint for other countries, such as the US.