What is artificial intelligence allowed to do, and what is it not? After years of negotiations, the EU is finally providing an answer to these questions. The AI Act is intended to be the first comprehensive set of rules for the use of AI. In this article, we show what startups will need to consider and what opportunities and challenges the regulation brings.
The AI Act is an EU regulation governing artificial intelligence (AI). It aims to establish a clear framework for the development and use of AI for the first time. Although the US and China have taken initial steps in this direction, the European draft is the most comprehensive set of rules to date. The European Commission presented the first proposal for the regulation back in 2021, but tough negotiations over the details followed within the EU. On December 9, 2023, the negotiators finally agreed on the key points. Alongside the Future Financing Act, the AI Act is thus the second major set of rules that particularly affects the work of startups.
The core of the AI Act is a risk-based approach: AI applications are divided into four risk levels. The principle is that the higher the risk, the higher the requirements for the providers of the models. Violations will carry heavy penalties in the future. The regulation distinguishes between unacceptable risk (practices that are banned outright, such as social scoring), high risk (strict requirements, for example for AI in critical infrastructure or hiring), limited risk (transparency obligations, such as labeling chatbots), and minimal risk (no additional obligations).
One of the biggest points of contention in the AI Act negotiations has been the regulation of large AI foundation models such as OpenAI's GPT. The problem: other applications can build on these models, including via open-source approaches, and use them for a wide range of purposes – so-called General Purpose AI (GPAI).
Here, too, the EU differentiates according to risk: in principle, all providers are obliged to make their training data and test procedures transparent. These comparatively relaxed baseline requirements apply above all to models made available under an open-source license. Foundation models that pose a systemic risk, however, must meet stricter requirements for risk management and cybersecurity. The decisive factor for classification is the computing power used to train a model; the provisional agreement sets the threshold at 10^25 floating-point operations (FLOPs). According to an estimate by the German AI Association, the foundation model of the German soonicorn startup Aleph Alpha, for example, currently falls below this limit.
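To make the classification rule concrete: a common rule of thumb estimates training compute as roughly 6 × parameters × training tokens. The sketch below checks such an estimate against the 10^25 FLOP threshold from the provisional agreement. The model sizes in the examples are hypothetical, and the 6·N·D heuristic is a rough approximation, not part of the AI Act itself.

```python
# Threshold for "systemic risk" GPAI models in the provisional AI Act agreement.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute via the common 6 * N * D rule of thumb."""
    return 6 * parameters * tokens

def is_systemic_risk(parameters: float, tokens: float) -> bool:
    """True if the compute estimate meets or exceeds the systemic-risk threshold."""
    return estimated_training_flops(parameters, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example models:
# 70B parameters, 2 trillion tokens -> 6 * 7e10 * 2e12 = 8.4e23 FLOPs, below threshold
print(is_systemic_risk(70e9, 2e12))    # False
# 1.8T parameters, 13 trillion tokens -> about 1.4e26 FLOPs, above threshold
print(is_systemic_risk(1.8e12, 13e12)) # True
```

In practice, regulators would rely on the provider's documented training compute rather than such a back-of-the-envelope estimate, but the sketch shows why the threshold separates today's mid-sized models from the very largest frontier models.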
The AI Act primarily lays down clear rules for the development and use of AI models. This affects not only the providers of large foundation models, but also startups that build new business models on top of them. At the same time, AI is expected to become an even stronger driver of economic growth. The AI Act therefore presents startups with both opportunities and challenges.
Now that the EU has agreed on the key points of the AI Act after long negotiations, the European Parliament and the member states must still formally approve the project, so the final legal text is still pending. In any case, the regulation is expected to be adopted before the European elections in June 2024. The AI Act would then apply in full from 2026, two years after its adoption.
The AI Act creates a groundbreaking framework for the future use of AI. Startups should therefore familiarize themselves with the new rules early and adapt their business models; the sooner, the better. At the same time, the major providers of foundation models must also respond to the new rules and ensure greater transparency, including for their open-source offerings. Ultimately, the AI Act can also strengthen trust in AI, from which AI startups stand to benefit. The regulation makes the EU an international pioneer in AI regulation. Even though the AI Act initially applies only in the EU, it could therefore serve as a blueprint for other countries, such as the US.
© 2024 Hinterland of Things