
The world George Orwell imagined in his critically acclaimed novel, 1984, no longer feels like fiction. We now have bots that can diagnose illnesses, drive cars, and even chat with us like a friend.

Artificial intelligence (AI) is transforming our lives faster than we thought possible. But with great power comes great(er) responsibility.

52% of consumers are already worried about AI's impact on their privacy! That’s why we need to make sure AI is used ethically and doesn’t end up ruining our lives.

In this post, we explore how AI is being regulated across different countries and regions of the world. But first…

Why regulate AI?

Have you ever felt that Siri or Alexa only get smarter with time? That’s because these voice assistants are feeding on YOUR data. And that’s exactly why we need to regulate AI. But there’s more to it.

#1 Bias, privacy, and decision-making

AI can pick up human biases from its training data and make unfair choices. This gets even more serious when you let AI decide who’s granted a loan, which party in a contract gets an advantage over the other, and more.

AI also knows tons about you: your shopping habits, your whereabouts, even your preferences. That’s why it’s essential to ensure it isn’t prying into our private lives.

#2 Job security and business fairness

AI’s efficiency is impressive, but what if it starts replacing jobs? Think about how this impacts employment and the need to support those who might be left behind.

If AI is only in the hands of big tech companies, that's not fair competition. It’s important to create a level playing field where smaller businesses also have a chance with AI.

#3 Cyber threats and misuse

AI can be a guardian of the digital world, but also a potential threat. That’s why we need regulations that ensure AI safeguards, rather than endangers, digital security.

The misuse of AI to create deepfake content, like realistic but false videos, is a real concern. We need AI regulations to prevent such misuse and maintain trust in technology.

Regulating AI isn’t about restricting innovation; it's about guiding it responsibly. As someone living in this AI-driven era, it’s crucial for you to understand and engage in the conversation about keeping AI on the right track.

Also read: 6 steps to write a generative AI use policy

The current state of AI regulation worldwide

Different parts of the world are trying to set up regulations to ensure ethical use of AI. Each region has its unique playbook when it comes to AI, and understanding these differences is like peeking into a global laboratory of ideas.

#1 European Union: Artificial Intelligence Act

“The EU is going to lead AI regulations like they have on other regulatory issues.”

~ JP Son, Chief Legal Officer, Verbit
SpotDraft Summit: Maintaining Trust While Driving Business Value with AI

The Artificial Intelligence Act is essentially the EU’s way of saying, "We like AI, but let's keep it under control."

The main goal? Keep AI safe and respectful of our rights. The EU knows AI can do cool stuff like improving healthcare and helping the environment, but it wants to make sure things don’t get out of hand.

In April 2021, the European Commission proposed the Act, sorting AI systems into different groups based on how risky they are. More risk, more rules.

The European Parliament's stance is that AI used in the EU must be safe, transparent, traceable, non-discriminatory, and environmentally friendly. Plus, it’s all for human oversight in decision-making, rather than leaving it all to machines.

They're trying to define AI in a way that will still make sense in the future, which is pretty ambitious.

Here's how they're doing it (a short illustrative sketch follows the list):

  • Unacceptable risk: This covers the most dangerous AI, such as systems that manipulate people or score them based on their behavior (social scoring). These would be banned outright, though narrow exceptions may be allowed for law enforcement, subject to judicial approval.
  • High risk: This is for AI that could be a safety risk or infringe on our rights. It falls into two groups: AI used in products like toys and cars, and AI used in sensitive areas like identity verification and the management of critical infrastructure. The Parliament also wants generative AI systems to disclose that their content is machine-generated, avoid producing illegal content, and reveal where their training data came from.
  • Limited risk: These are your everyday AI systems that aren’t really dangerous but still need some rules. The focus here is on honesty. They should make it obvious that they're AI, especially if they create things that seem real but aren't, like deepfake videos.
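
To make the tiering idea concrete, here is a minimal Python sketch of how such a risk-based scheme could be expressed in code. It is purely illustrative: the Act defines these tiers in legal text, not software, and the tier names, example use cases, and obligations below are simplified assumptions (including a "minimal risk" catch-all the Act also recognizes).

```python
# Illustrative sketch only: the AI Act defines risk tiers in legal text,
# not code. Tier names, example use cases, and obligations here are
# simplified assumptions based on the summary above.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g., social scoring)
    HIGH = "high"                  # strict obligations (e.g., biometric ID)
    LIMITED = "limited"            # transparency duties (e.g., chatbots)
    MINIMAL = "minimal"            # catch-all tier the Act also recognizes


# Hypothetical mapping from use case to tier, loosely following the
# categories described above.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "remote_biometric_id": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited (narrow law-enforcement carve-outs).",
    RiskTier.HIGH: "Conformity assessment, documentation, human oversight.",
    RiskTier.LIMITED: "Disclose to users that they are interacting with AI.",
    RiskTier.MINIMAL: "No specific obligations under the Act.",
}


def obligations_for(use_case: str) -> str:
    """Look up the risk tier for a use case and summarize its obligations."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return f"{tier.value}: {OBLIGATIONS[tier]}"


print(obligations_for("customer_chatbot"))
# limited: Disclose to users that they are interacting with AI.
```

The point of the "more risk, more rules" design is visible even in this toy version: the hard work isn't writing the rules, it's deciding which bucket a given use case belongs in.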

#2 United States: Executive Order by Biden Administration

President Biden's Executive Order on AI, issued on October 30, 2023, sets new standards for AI safety and security, requiring developers of powerful AI systems to share their safety test results and other crucial information with the U.S. government. The aim is to ensure that AI systems are safe, secure, and trustworthy before they're made public.

Here’s a breakdown of the key components of the Executive Order:

  • New standards for AI safety and security: This directive focuses on ensuring AI systems are safe and reliable. Developers of high-impact AI must share safety test results with the U.S. government. The order mandates notification and sharing of red-team safety test outcomes for AI models posing serious risks to national security, the economy, or public health.
  • Development of standards and tools: The National Institute of Standards and Technology will establish standards for AI safety testing. The Department of Homeland Security will implement these standards in critical infrastructure sectors and create an AI Safety and Security Board.
  • Biological material and AI: The order establishes robust standards for AI-assisted biological synthesis screening. Agencies funding life-science projects will incorporate these standards, managing risks potentially heightened by AI.
  • Combatting AI-enabled fraud and deception: The Department of Commerce will develop standards for detecting AI-generated content and authenticating official content, helping Americans identify authentic government communications. (A toy illustration of signature-based authentication follows this list.)
  • Advanced cybersecurity program: The order emphasizes developing AI tools to identify and address vulnerabilities in critical software, enhancing cybersecurity measures.
  • National Security Memorandum on AI: A directive for a comprehensive document to guide the military and intelligence community in using AI responsibly and ethically.
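
As a rough intuition for what "authenticating official content" could mean in practice, here is a toy Python sketch using a keyed hash. This is an assumption for illustration only: it is not the Commerce Department's standard, and real-world schemes rely on provenance metadata and public-key signatures (think watermarking and content credentials) rather than a shared secret.

```python
# Toy sketch of "authenticating official content" with a keyed hash.
# NOT the Commerce Department's actual standard: real schemes use
# provenance metadata and public-key signatures. The key and messages
# below are hypothetical.
import hashlib
import hmac

SECRET_KEY = b"agency-signing-key"  # hypothetical shared secret


def sign(message: bytes) -> str:
    """Attach an HMAC tag so recipients can verify the content's origin."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()


def verify(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(message), tag)


official = b"Public health advisory: ..."
tag = sign(official)

print(verify(official, tag))                   # True: content is authentic
print(verify(b"Tampered advisory: ...", tag))  # False: altered or forged
```

The underlying idea is the same one the order gestures at: give genuine communications a verifiable mark so that anything without it, including convincing AI-generated fakes, can be flagged.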

#3 Japan: Social Principles of Human-Centric AI

In 2019, Japan took a unique step toward shaping the future of AI with its Social Principles of Human-Centric AI. These principles envision a society where AI enriches lives while respecting human dignity, celebrating diversity, and ensuring sustainability.

These principles lay down seven key guidelines:

  • Human-centric: AI should serve people, not the other way around
  • Education and literacy: People should understand AI and how to use it
  • Privacy protection: Keep personal data safe and private
  • Security assurance: AI should be secure from cyber threats
  • Fair competition: AI shouldn’t give unfair advantages in business
  • Fairness, accountability, and transparency: AI should be fair, and its actions should be explainable and transparent
  • Encouragement of innovation: AI should be a pathway to new discoveries and technologies

Japan's approach to regulating AI is twofold:

  • Regulation on AI: This is about reducing the risks that come with AI, like privacy breaches or unfair practices. Interestingly, Japan doesn’t have strict, binding laws that limit AI use. Instead, they prefer a flexible approach, providing guidance and encouraging companies to regulate themselves. It’s like having a coach rather than a referee.
  • Regulation for AI: Here, Japan is looking at changing laws to help AI grow. For example, they’ve made changes in traffic laws to allow self-driving cars and in financial laws to use AI in credit scoring.

There's a real focus on making sure AI doesn’t just serve the tech-savvy but benefits everyone. For instance, the Digital Rincho initiative is reviewing thousands of regulations to see how digital solutions, including AI, can replace outdated methods. It’s a bit like spring cleaning, making space for new technology in old rules.

#4 Canada: Pan-Canadian Artificial Intelligence Strategy

Canada is at the forefront of ethical AI, putting a strong emphasis on moral values in AI development. The Canadian approach is grounded in ensuring that AI practices meet high ethical standards, with a focus on transparency, fairness, and privacy.

This is evident in initiatives like the Pan-Canadian Artificial Intelligence Strategy, which aims to foster AI development while ensuring ethical and human-centered use of this technology.

Also read: In-House Legal Guide to Safeguarding Company Data

| Country/Region | Approach to AI Regulation | Key Aspects |
|---|---|---|
| European Union | Artificial Intelligence Act | Focus on safe, respectful AI; risk-based categorization; human oversight; goals include healthcare improvement and environmental protection |
| United States | Executive Order by Biden Administration | New standards for AI safety and security; requirement for AI developers to share safety test results; focus on critical infrastructure, biological materials, fraud detection, and cybersecurity |
| Japan | Social Principles of Human-Centric AI | AI for societal harmony; principles include human-centricity, privacy, and innovation; flexible regulation approach; legal reforms for AI growth |
| Canada | Pan-Canadian Artificial Intelligence Strategy | Leadership in ethical AI; emphasis on moral values, transparency, and fairness |

Challenges in AI Regulation

From one country to another, the challenge of regulating AI is as diverse as the technology itself. Each brings its unique perspective to the table—some with creative solutions, others with thoughtful care.

#1 Balancing innovation and regulation

One of the primary challenges in AI regulation is striking the balance between fostering innovation and ensuring responsible development. Technological growth must be nurtured but not at the expense of societal norms and ethics.

Take the USA, for instance. Known for its Silicon Valley and tech giants, the US has adopted a 'let the market lead' attitude. This approach has turned it into a hotbed for AI advancements. But it's not the Wild West.

The US is also keenly aware of the need for responsible AI development. So, while the tech gurus innovate, there's a watchful eye on ensuring these advancements play nice with the broader societal landscape.

#2 International collaboration vs. national interests

AI doesn’t see borders, but countries do. This creates a tug-of-war between international cooperation and national agendas. Every country wants a piece of the AI pie, but they also want to make sure it's cooked to their taste.

The European Union’s AI Act is a prime example here. It tells the world that while Europe is all for AI development, it won't compromise on individual privacy and ethical standards. It’s set a high bar for global AI players, ensuring they clear the EU’s stringent ethical and privacy standards.

#3 Adapting laws for fast-evolving AI technologies

AI moves fast, really fast. Laws? Not so much. This mismatch is like trying to program a VCR with a smartphone—there's a disconnect. Countries are striving to make their legal frameworks as agile and adaptable as the technology they aim to govern.

Japan tackles AI regulation in a practical way, focusing on making AI safe and beneficial for society rather than just setting strict rules. Its flexible approach adapts quickly to technological change, helping the country keep up with AI's fast pace while making sure the technology works for everyone.

Note: Ensure your company’s data safety by implementing a Generative AI use policy. Download our detailed playbook developed by legal leaders to help you create a robust AI use policy.
Download the AI Use Policy Playbook
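
To show what putting such a policy into practice might look like, here is a hypothetical Python sketch that encodes a few common policy rules as machine-checkable logic. The tool names, fields, and rules are illustrative assumptions, not taken from the playbook.

```python
# Hypothetical sketch of a generative AI use policy expressed as
# machine-checkable rules. Tool names, fields, and rules are
# illustrative assumptions, not taken from the playbook.
from dataclasses import dataclass


@dataclass
class AIUseRequest:
    tool: str                       # e.g., "public_chatbot", "approved_vendor"
    contains_client_data: bool      # does the prompt include confidential data?
    output_reviewed_by_human: bool  # will a human review the output before use?


APPROVED_TOOLS = {"approved_vendor"}  # hypothetical allow-list


def is_permitted(req: AIUseRequest) -> tuple[bool, str]:
    """Apply simple policy rules and explain the outcome."""
    if req.contains_client_data and req.tool not in APPROVED_TOOLS:
        return False, "Client data may only be sent to approved tools."
    if not req.output_reviewed_by_human:
        return False, "AI output must be reviewed by a human before use."
    return True, "Permitted under policy."


print(is_permitted(AIUseRequest("public_chatbot", True, True)))
# (False, 'Client data may only be sent to approved tools.')
```

Even if your policy lives in a document rather than code, thinking in terms of explicit, testable rules like these makes it far easier to train teams on and enforce.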

What does the future of AI regulation look like?

There are some potential risks we need to be aware of, like bias, discrimination, and even misuse of our personal data. That's why governments are working hard to figure out how to regulate AI in a way that keeps us safe and keeps things fair.

Looking ahead, a few trends seem to be emerging in AI regulation:

  • Global standards: The international community is increasingly recognizing the need for global standards to ensure the responsible development and deployment of AI. This is being driven by the fact that AI systems can have far-reaching consequences, crossing national borders and impacting people around the world.
  • Ethical AI: There is a growing consensus that AI should be developed and used in a way that is ethical and aligned with human values. This includes principles such as transparency, accountability, fairness, and non-discrimination.
  • Risk-based approach: Many countries are adopting a risk-based approach to AI regulation, focusing on the most high-risk applications, such as autonomous weapons or facial recognition technology.
  • Sandboxes: Regulatory sandboxes are being used to create safe spaces for experimentation and innovation in AI. This allows companies to develop and test new AI technologies in a controlled environment, with less regulatory burden.

Also read: Choosing the Right Legal Research AI Tool for Your Team

The role of international bodies

International bodies like the United Nations (UN) and the World Economic Forum (WEF) play a crucial role in promoting global cooperation on AI regulation. They do this by:

  • Providing platforms for dialogue: International bodies offer forums for governments, businesses, and other stakeholders to discuss the challenges and opportunities of AI regulation.
  • Developing international norms and standards: International bodies can help to develop international norms and standards for the responsible development and use of AI.
  • Promoting best practices: International bodies can share best practices for AI regulation and help countries learn from each other's experiences.

Potential for a global AI regulatory framework

The development of a global AI regulatory framework is still in its early stages, but there is growing momentum behind the idea. A global framework could help to ensure that AI is developed and used in a way that is safe, fair, and beneficial for all.

A number of challenges need to be overcome before a global AI regulatory framework can be developed, such as:

  • Differing national priorities: Different countries have different priorities and concerns regarding AI regulation. This could make it difficult to reach agreement on a global framework.
  • Sovereignty concerns: Some countries may be concerned about giving up control over their own AI regulations.
  • Technological complexity: AI is a complex and rapidly evolving technology, which makes it difficult to develop comprehensive regulations.

Despite these challenges, there are a number of reasons to be optimistic about the possibility of a global AI regulatory framework. The potential benefits of such a framework are significant, and the international community is increasingly recognizing the need for cooperation on this issue.

Also read: Overcoming Roadblocks to the Adoption of Generative AI by In-House Legal Teams

What actions should lawyers take to ensure ethical AI use?

“In using technology, lawyers must understand the technology that they are using to assure themselves they are doing so in a way that complies with their ethical obligations – and that the advice the client receives is the result of the lawyer’s independent judgment.”

~ Wendy Chang, Member, ABA’s Standing Committee on Ethics and Professional Responsibility
Time to Regulate AI in the Legal Profession? (Perspective)

As we've seen from global AI regulation efforts, each region adopts a unique approach, reflecting its values and priorities. The EU's AI Act emphasizes safety and ethical use, while the US balances innovation with responsible development. Japan's strategy stands out for its flexibility and focus on societal harmony.

SpotDraft AI exemplifies these trends by integrating AI responsibly into legal and business processes. Our approach mirrors the global push for ethical AI use, emphasizing trust and value in business applications.

To ensure safe use of AI in your organization, create a robust Generative AI Use Policy with the help of our detailed playbook developed by legal experts.

Download the AI Use Policy Playbook
