AI Regulations Around the World – Overview

Key Points

  • Global AI Regulations: Different countries and regions are establishing regulations to manage the development and deployment of AI technologies, ensuring safety, transparency, and ethical practices. Key regions like the EU, the US, and China have created or are in the process of implementing AI regulations.
  • EU AI Act:
    • Implementation Date: August 2024, with phased roll-out.
    • Key Provisions: Defines AI, outlines risk categories (unacceptable, high), and emphasizes transparency, security, and human rights in AI deployment.
    • Impact: It mandates strict rules for AI systems with high risks, including biometric systems and AI in critical infrastructure.
  • US AI Regulations:
    • No Single Act: The US operates with multiple regulations, including federal and state laws.
    • Key Documents: The Bipartisan House Task Force Report on AI (2024) and Executive Order 14141 (focused on energy efficiency of AI data centers).
    • State-Level Laws: Vary by state, with some adopting AI risk-level categorization similar to the EU AI Act.
  • China’s AI Legislation:
    • Interim Measures for Generative AI Services: In force since August 2023.
    • Regulates: AI models that generate text, images, sound, and video. The legislation outlines AI service providers’ responsibilities and control frameworks.
    • Main Regulator: The Cyberspace Administration of China (CAC) acts as the primary regulator for AI in China, focusing on AI development and security.
  • Why It Matters: AI developers must understand the laws of their target markets to ensure compliance. AI-powered software must adhere to specific regulations depending on the region where it will be used, especially in markets like the EU, where strict compliance is required.

AI is nothing new – we’ve seen it implemented in digital products for years. But the rise of widely available generative AI in 2023 truly changed the way we think and work. Naturally, governments spotted that and didn’t remain idle – different countries around the world have their own AI regulations that define ethical and safe ways to use artificial intelligence. What are they? What do they impact? Find out the answers in this article!

AI Regulations Around the World

Without further ado, let’s look into the main regulations for specific countries and regions.

European Union: The EU AI Act

Came into force: 1 August 2024. However, it’s being implemented in stages, meaning that it hasn’t been fully rolled out yet. You can find the full timeline on the official EU website: EU AI Act Timeline.

The EU AI Act serves two purposes. On the one hand, it ensures that artificial intelligence is used in a safe and transparent manner across all Member States. On the other hand, it was designed to promote innovation and encourage start-ups. What are the key provisions of this document?

  • Definition of artificial intelligence:

‘“AI system” means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’ (Chapter I, Article 3(1))

  • Types of risks related to AI:
    • Unacceptable risk, for example:
      • cognitive behavioral manipulation, especially of vulnerable groups, e.g., children,
      • social scoring,
      • biometric identification and categorisation of people,
      • real-time biometric identification in public spaces (e.g., a face-detecting system in city cameras).

As you might expect, these types of AI are forbidden, with some narrow exceptions. For example, the EU AI Act prohibits placing on the market, and using, emotion-recognition systems in the workplace.

  • High risk, for example:
    • AI systems (or AI components of larger products) used as safety components of those products,
    • Certain AI products regulated under the Union Harmonisation Legislation listed in Annex I of the AI Act,
    • If either of the above applies, the product (including its AI system) must be checked by an independent third-party organization to confirm it follows the safety laws before it can be sold or used in the EU.
    • The high-risk category also applies automatically to AI-powered products in areas such as:
      • biometrics,
      • safety components in critical infrastructure,
      • education and vocational training – if the AI system is intended to determine access or admission, evaluate learning outcomes, assess education level, or monitor and detect prohibited student behavior.
  • Transparency requirements:
    • disclosing AI-generated content,
    • designing generative AI models so that they will not generate illegal content,
    • transparency about the copyrighted data used for training, namely through publishing summaries of said data,
    • informing citizens when emotion-recognition or biometric AI is being used on them,
    • transparency information must be clear, timely, and accessible.
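To illustrate how such a tiered scheme might feed into an internal compliance checklist, here is a minimal sketch. The use-case names and tier assignments below are illustrative simplifications invented for this example – they are not legal definitions from the Act, and real classification requires legal analysis of the Act and its annexes:

```python
# Toy mapping of example AI use cases to simplified EU AI Act risk tiers.
# Both the use-case keys and the tier labels are illustrative only.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "realtime_public_biometric_id": "unacceptable",
    "critical_infrastructure_safety": "high",
    "education_admission_scoring": "high",
    "ai_generated_content": "transparency",
}

def classify_use_case(use_case: str) -> str:
    """Return the simplified risk tier for a known use case,
    or flag it for legal review when it is not in the table."""
    return RISK_TIERS.get(use_case, "unclassified - needs legal review")

print(classify_use_case("social_scoring"))        # unacceptable
print(classify_use_case("ai_generated_content"))  # transparency
print(classify_use_case("weather_forecasting"))   # unclassified - needs legal review
```

The point of the sketch is the fallback: anything not explicitly classified should be escalated for legal review rather than assumed to be low risk.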

The main points of the AI Act are strongly connected to basic human rights. The principle of ethics by design should be taken into account when creating AI engines, to build ones that are ethical, legal, trustworthy, and transparent.

You can read the full act here:

EU AI Act Explorer

United States: Multiple Policies Rather than a Single Act

The United States, unlike the EU, doesn’t have a single act regulating the use of AI. Instead, it relies on a patchwork of federal-level documents and state-level laws that vary by region. Moreover, the legal landscape in this country is changing rapidly – legislators propose multiple changes and smaller acts monthly. Let’s look into the federal law:

  • The Bipartisan House Task Force Report on Artificial Intelligence. This document provides a set of research-based guidelines for AI policy. The task force was launched in February 2024.
  • Executive Order 14141 (2025). This order focuses mostly on laying the groundwork for AI and AI data centers in terms of energy efficiency. For example, it required the Secretary of Energy and the Chair of the Council of Economic Advisers to submit a report on the impact of AI data centers on consumer and business electricity prices.
  • Other legislation such as:
    • Testing and Evaluation Systems for Trusted Artificial Intelligence Act (TEST AI Act),
    • Leveraging Artificial Intelligence to Streamline the Code of Federal Regulations Act of 2025,
    • Decoupling America’s Artificial Intelligence Capabilities from China Act,
    • Responsible AI Disclosure Act of 2024,
    • AI Leadership Training Act.

Interestingly enough, federal legislation does not focus purely on protecting citizens. For instance, the Leveraging Artificial Intelligence to Streamline the Code of Federal Regulations Act of 2025 imposes an obligation to review government agency regulations with the use of artificial intelligence, while the Decoupling America’s Artificial Intelligence Capabilities from China Act focuses on developing AI in the US and prohibiting the import of AI-related technology from China.

Many state-level laws are inspired by AI legislation elsewhere in the world – for instance, the Colorado AI Act adopts a risk-level categorization similar to that of the EU AI Act. Nevertheless, not every state is working on specific legislation, so when it comes to the US, you need to consider closely where you will offer your AI-powered products.

You can read the full acts:

National Artificial Intelligence Initiative Act of 2020

The Bipartisan House Task Force Report on Artificial Intelligence

Executive Order 14141

China: Interim Measures for the Management of Generative Artificial Intelligence Services

When discussing AI legislation around the world, it’s always interesting to look at Asia – a region where the general population tends to be more AI-aware, to trust the technology more, and to adopt it more widely. Let’s take a closer look at one of the regional leaders: China.

China was pretty quick to adopt its first AI legislation, namely the Interim Measures for the Management of Generative Artificial Intelligence Services, which came into force on 15 August 2023. However, this is not act-level legislation. Similar to the US legislation, this document focuses mostly on strengthening China’s position as an AI leader. It also deals with security, as reflected in the fact that the main AI regulator in China is the… Cyberspace Administration of China (CAC).

What does the AI legislation in China regulate? Here’s an overview:  

  • It defines generative artificial intelligence as models capable of generating text, image, sound, and video content.
  • It sets the direction of generative AI development, along with the duties of AI service providers and the framework for legal control and liability.
  • A new regulation (coming into force on September 1, 2025) also defines the responsibilities of generative AI providers in the scope of AI-powered training and optimization, as well as other training-related data.

Why Does Understanding AI Laws Around the World Matter?

If you develop custom software empowered with AI, you always have to consider your target market. Where will the software be used, and for what purpose? Answering these questions will tell you which laws you have to look into to ensure compliance. Every AI distributor is subject to the EU AI Act – every entity that places an AI solution on the EU market must comply with it. Therefore, consider legal limitations when outsourcing IT projects, or work with reputable providers who can verify the legal requirements for you!

You might also read: Copyright law in relation to technology: an interview with Karolina Wrona, senior legal operations specialist at j‑labs

FAQ

What is the EU AI Act and when does it come into effect?

The EU AI Act came into force in August 2024 and is designed to ensure AI is used safely and ethically across EU Member States. It categorizes AI risks and outlines transparency requirements, with stricter rules for high-risk AI systems, such as biometric identification.

How is AI regulated in the United States?

In the US, AI regulation is fragmented, with multiple federal and state-level laws. Federal documents like the Bipartisan House Task Force Report on AI (2024) and Executive Order 14141 focus on AI governance and the energy efficiency of AI infrastructure, while state laws, such as Colorado’s AI Act, adopt structures similar to the EU’s.

What are China’s regulations for AI?

China’s AI legislation, the Interim Measures for the Management of Generative AI Services, came into effect in August 2023. It defines generative AI, outlines responsibilities for AI providers, and emphasizes security, with the Cyberspace Administration of China (CAC) as the main regulator.

Why is understanding AI laws important for developers?

Understanding AI regulations is crucial to ensure compliance when developing AI-powered products, particularly when targeting different global markets. Compliance with local laws like the EU AI Act ensures that AI technologies are used safely and ethically.

How do AI regulations affect global software development?

AI regulations shape how software developers implement and distribute AI technologies across borders. Developers must tailor their products to meet the legal standards of the regions they target, such as ensuring transparency, security, and ethical use in line with regulatory frameworks.

Meet the geek-tastic people, and allow us to amaze you with what it's like to work with j‑labs!

Contact us