Following ‘trilogue’ negotiations among the major institutions of the EU, a high-level political agreement on a proposed regulation for artificial intelligence (AI) was reached in Brussels a few weeks ago; the text is currently undergoing technical fine-tuning. This regulation, the AI Act, is the world’s first comprehensive law on AI.

According to the current draft, the AI Act is expected to apply two years after its entry into force, likely from the second quarter of 2026.

The new law broadly takes a risk-based approach, calibrated to an AI system’s capacity to cause harm. Compared with the initial legislative proposal, the current agreement adds rules on high-impact, general-purpose AI models that could pose systemic risks in the future, as well as on high-risk AI systems.

Once formally adopted and published in the Official Journal of the EU, the AI Act may set a global standard for AI regulation in other jurisdictions, much as the EU’s General Data Protection Regulation (GDPR) did for personal data.

Extraterritorial Scope

As with the GDPR, an important feature of the AI Act will be its extraterritorial scope, which will impose obligations on non-EU businesses. Accordingly, the new law may extend the so-called ‘Brussels effect’, a term coined to describe the EU’s unilateral power to regulate global markets.

For instance, the AI Act will apply to providers and deployers of in-scope AI systems irrespective of their place of establishment, so long as those systems are used, or produce an effect, in the EU. In other words, AI developers or users based in other countries (for example, India) may need to comply with the AI Act if the output of their AI system: (1) is used in the EU, or (2) creates an impact within the EU.

For example, if an Indian company develops a platform that deploys AI to make decisions on applications for financial products offered by, and/or to, EU entities, such as credit-scoring applications that may be categorised as ‘high risk’, the AI Act will apply even if the Indian company has no established presence in the EU. As a result, non-EU developers and providers of AI systems will need to account for compliance risks, especially in light of the high penalties involved, much like under the GDPR regime.

However, companies may also be able to leverage certain aspects of their GDPR compliance programmes. For instance, policies and processes for managing personal information could provide a foundation for responsible data use while developing or deploying AI systems. Further, built-in checks and controls, including those related to cybersecurity and data breaches, could help minimise the risks that the AI Act aims to address. In addition, companies could adapt their existing approaches to privacy impact assessments, given that the provisional agreement on the AI Act obliges deployers of high-risk AI systems to conduct a fundamental rights impact assessment.

Furthermore, records of data processing activities maintained under the GDPR, including for purpose limitation and consent management, could provide valuable audit trails to demonstrate the measures adopted for AI governance. Such records may also prove useful in ensuring that rights related to intellectual property and personal data are appropriately addressed, both for the inputs used by an AI system and for the outputs it generates.

Going forward, businesses will need to carefully navigate the development, deployment and commercialisation of AI technology, including the collection and use of AI training data, collaboration agreements for the development of AI systems, and contracts for the commercialisation of AI products and services.

(The author is a lawyer with S&R Associates, a law firm)
