What does the future hold for AI regulation in the US, and how can you prepare your organization for the shifts ahead? The answers might be closer—and more complex—than you think.

In this guide, we’ll talk about how regulations take shape and how you can best prepare to advise your clients on AI implementation, compliance, and risk management.

Why Regulate AI? Challenges That Demand Regulation

Regulating AI is essential because of the significant risks it poses, such as manipulating human behavior, spreading deepfakes, and powering biased algorithms that can lead to unfair treatment.

Clear policies and laws can enable transparent oversight of companies developing AI solutions, protect against these threats, and hold developers accountable.

Key reasons that call for strict AI regulations:

AI could spread disinformation and manipulate public opinion

AI has the potential to generate and spread misinformation at an unprecedented scale and speed.

Take the 2016 US election as a reminder. A study published in Nature Communications revealed that a mere 6% of Twitter accounts, identified as bots, were responsible for spreading nearly a third of all "low-credibility" information during the election period.

What's truly concerning is the speed at which such misinformation spreads. These AI-powered bots can set off a chain reaction of false information in just 2 to 10 seconds, less time than it takes to tie your shoelaces. By the time regulators and fact-checkers catch up, the damage may already be done.

AI could create privacy and copyright issues

AI models, by their very nature, are trained on vast amounts of existing data, so their output is derived from that data rather than created from scratch. This raises serious questions about the unauthorized use of personal information and copyrighted material.

The New York Times' December 2023 lawsuit against OpenAI and Microsoft highlights these concerns. The suit alleges that their AI models were trained on copyrighted works without permission, underscoring the gap between AI development and IP rights.

Furthermore, the ambiguity surrounding the ownership of AI-generated content adds another layer of complexity. Who owns the rights to a piece created by AI? The programmer? The company that developed the AI? Or should it be owned by the public?

These questions underscore the need for clear laws on how companies may use public data to train AI.

AI could enable cybercrime at scale

As with any new technology, AI has the potential to fall into the wrong hands, amplifying the threat of cybercrime.

There are already black-hat alternatives to ChatGPT, such as WormGPT and FraudGPT, designed explicitly to help attackers craft highly persuasive phishing emails and other social engineering attacks.

The result: phishing emails with impeccable grammar and convincing language that sound entirely genuine, but aren't.

According to the FBI, business email compromise (BEC) was the second-costliest type of fraud in 2023, accounting for $2.9 billion in losses. With AI in scammers' hands, attacks like these become easier to scale, threatening the security of entire organizations.

The most compelling argument for AI regulation may be what former U.S. Secretary of Defense Donald Rumsfeld termed "unknown unknowns": the risks we haven't even conceived of yet.

The rate at which AI is evolving and will continue to evolve makes it even more difficult to imagine what’s possible. This unpredictability underscores the need for flexible yet comprehensive AI regulations that can adapt to emerging challenges.

The Current, Complicated State of AI Regulation in the US

Artificial Intelligence has been quietly shaping our lives for years, from Google's autocomplete suggestions to personalized recommendations on streaming platforms.

But the release of ChatGPT in 2022 thrust AI into the spotlight, revealing both its potential and its perils. Its widespread adoption has intensified the need for AI policies and laws to regulate this powerful technology.

Regulating AI, however, is no simple task. The complexity stems from various factors that create a challenging regulatory outlook.

Why is AI regulation in the US such a complicated matter?

The complicated structure of the US political system 

The divide between federal and state-level regulations complicates the creation of unified AI policies across the country. This fragmentation can lead to inconsistent rules and enforcement, potentially hampering innovation while leaving gaps in protection.

The breakneck pace of AI advancement

AI is evolving at lightning speed, often outpacing lawmakers' ability to create relevant legislation.

In just five days, ChatGPT surpassed 1 million users. Following its launch, nearly every major company introduced AI tools, including Google Gemini and Microsoft Copilot. Such widespread adoption shows how AI innovation can benefit the public. But a balance must be struck between fostering innovation and ensuring public safety.

"Regulations that impose burdensome requirements on open-source software or AI technology development can hamper innovation, have anti-competitive effects that favor big tech, and slow our ability to benefit everyone."
~
Andrew Ng, founder of DeepLearning.AI

Lack of AI-specific regulatory bodies at the federal level

Currently, the responsibility for overseeing AI falls on existing agencies, which may lack the specialized expertise to address AI's unique challenges.

For instance, in April 2023, a joint statement from the Federal Trade Commission, Equal Employment Opportunity Commission, Consumer Financial Protection Bureau, and Department of Justice clarified that their authority extends to "software and algorithmic processes, including AI." While this approach leverages existing regulatory frameworks, it may not fully address the nuanced issues AI presents.

AI’s widespread nature across sectors

AI touches every sector, from healthcare diagnostics to legal contract management, making a one-size-fits-all regulatory approach unsuitable. The need for sector-specific rules adds another layer of complexity to the regulatory puzzle.

"Artificial intelligence by the name is not something that you can actually govern. You can govern the sub-effects or the sectors that artificial intelligence can affect. And if you take them on a case-by-case basis, this is the best way to actually create some kind of a policy."
~
Khalfan Belhoul, CEO of the Dubai Future Foundation

Given such challenges, what is the current state of AI regulation in the US? 

As of August 2024, there is no comprehensive, AI-specific federal regulation in place. Instead, a patchwork of existing laws covering consumer privacy, data protection, and healthcare is being applied to AI-related issues.

These laws offer some protection in specific areas but fall short of directly regulating AI algorithms or technology.

However, steps are being taken towards more focused AI governance.

To put this into perspective, there is an uptick in state-level AI legislation. In 2023 alone, 25 AI-related regulations were introduced across various states, a dramatic increase from just one in 2016. States like Texas, California, and Colorado are taking the lead with their own state-specific regulations.

The private sector has responded as well. In July 2023, leading AI companies, including Adobe, Amazon, IBM, Google, Meta, Microsoft, OpenAI, and Salesforce, voluntarily committed to "help move toward safe, secure, and transparent development of AI technology."

One of the strongest moves came from President Joe Biden in October 2023, when he issued Executive Order 14110, the "Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence."

This order outlines a wide range of actions to strengthen safeguards against AI risks while promoting innovation and American leadership.

The Executive Order addresses eight key areas:

  1. AI safety and security standards
  2. Privacy protection
  3. Equity and civil rights advancement
  4. Consumer, patient, and student protection
  5. Worker support and labor market impacts
  6. Innovation and competition promotion
  7. US global leadership
  8. Responsible government use

While this Executive Order represents a critical step forward, it's important to note that it provides guidance rather than enforceable regulations. It sets the stage for future legislative action and agency rulemaking.

The US is undoubtedly at a critical juncture in AI governance. The key challenge is building regulations that protect public interests without stifling innovation—a delicate balance that will require ongoing collaboration and understanding between policymakers, industry leaders, and the public.

Also read: Global AI Regulation: The World's Approach to Ethical AI Use

3 Key Things Lawyers Should Know About AI

AI is reshaping the legal sector more than most, making it essential for lawyers to stay informed about key developments.

Understanding the evolving regulatory landscape will be key to navigating the opportunities and challenges as AI continues to reshape industries and society.

AI doesn’t replace humans, it assists them

While AI has its flaws, dismissing it entirely isn’t a viable option. As AI evolves, it will become an indispensable part of the legal world. The only way forward is to embrace it responsibly, with best practices and guidelines in place, recognizing AI as a tool that can significantly enhance efficiency in legal processes.

“Some people are scared of it but I honestly think it's just going to make everyone's life so much easier. You really can't substitute a lawyer. Take risk tolerance, for example. Maybe I can give you data on which path you can take but, at the end of the day, you need context to be able to make a call. So, I personally don't think it’s scary. It’s actually really cool.” 
~
Katayoon Tayebi, AGC at FIGS

Here are some use cases of AI in legal:

  • Contract reviews: AI can accelerate the contract review process by analyzing large volumes of contracts and identifying unfavorable terms, clauses, and potential risks. Such automation not only saves time but also reduces the likelihood of errors. (A simple illustration of this kind of screening follows this list.)
  • Standardized processes: You can build standardized processes using AI-driven CLMs for contract drafting, reviewing, and analysis at scale, freeing up hours and speeding up the approval process. With predetermined standards, you can focus more on high-level strategic work.
  • Analytics and reporting: AI allows you to analyze vast amounts of data at lightning speed and generate insights into various legal activities, such as budgeting, performance metrics, compliance trends, and negotiation patterns. Having data at your disposal, you can showcase your work’s value and get buy-in from executives.
“Most C-Suite executives bank on data and hard metrics and not word-of-mouth. When you have certain metrics that shed light on how legal teams have contributed to growing the revenue stream of the company, it becomes easier for the GC to make business cases.”
~
Gitanjali Pinto Faleiro, Oxford law graduate and former VP & AGC at Goldman Sachs
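
To make the contract review use case concrete, here's a minimal, hypothetical sketch in Python of the kind of rule-based pre-screen an AI-assisted review pipeline might run before a lawyer's manual pass. The risk patterns and labels are illustrative assumptions, not a production playbook:

```python
import re

# Hypothetical risk patterns a legal team might flag for manual review;
# real teams would maintain their own playbook of terms and clauses.
RISK_PATTERNS = {
    "auto-renewal": r"automatic(?:ally)?\s+renew",
    "unlimited liability": r"unlimited\s+liability",
    "termination for convenience": r"terminat\w*\s+for\s+convenience",
}

def flag_risky_clauses(contract_text: str) -> list[tuple[str, str]]:
    """Return (risk_label, sentence) pairs that warrant a lawyer's attention."""
    findings = []
    # Naive sentence split; a production pipeline would use a real parser.
    for sentence in re.split(r"(?<=[.;])\s+", contract_text):
        for label, pattern in RISK_PATTERNS.items():
            if re.search(pattern, sentence, flags=re.IGNORECASE):
                findings.append((label, sentence.strip()))
    return findings

sample = (
    "This Agreement shall automatically renew for successive one-year terms. "
    "Vendor may terminate for convenience upon thirty days' notice."
)
for label, sentence in flag_risky_clauses(sample):
    print(f"[{label}] {sentence}")
```

A screen like this only narrows down where human attention should go; as the quotes above stress, the lawyer still reviews the findings and makes the final call.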

Assess the risks associated with data use

AI thrives on data, so naturally, concerns arise about the privacy and security of the data fed into tools like ChatGPT.

For example, lawyers worry that tools like ChatGPT might ingest company data or employees' sensitive information. To control how AI tools are used in your organization, you need to craft and implement an AI policy.

“My team started using ChatGPT right away, which I think is a good thing to stay competitive. But they were putting in data that were honestly kind of horrifying. I was caught a little flat-footed, which is unfortunate, so I would say if you don’t have a policy, put one in.” 
~
Noga Rosenthal, General Counsel and Chief Privacy Officer, Ampersand
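
A policy is easier to enforce with a technical backstop. Here's a small, hypothetical Python sketch of a screening step that checks prompts for obvious sensitive-data patterns before they're sent to an external AI tool; the pattern list is an illustrative assumption, and a real policy would define its own categories (client names, deal terms, PHI, and so on):

```python
import re

# Illustrative patterns only; a real AI-use policy would define its own
# list of sensitive data types and far more robust detection.
SENSITIVE_PATTERNS = {
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
    "email address": r"\b[\w.+-]+@[\w-]+\.[\w.]+\b",
    "card number": r"\b(?:\d[ -]?){13,16}\b",
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the types of sensitive data detected in a prompt, if any."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if re.search(pattern, prompt)]

prompt = "Summarize this: John Doe, SSN 123-45-6789, reachable at john@example.com"
violations = screen_prompt(prompt)
if violations:
    print("Blocked: prompt contains", ", ".join(violations))
else:
    print("OK to send to the external AI tool")
```

Pattern matching like this catches only the obvious cases, which is why it complements, rather than replaces, a written policy and training.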

Also read: Crafting Effective Generative AI Policies: A Step-by-Step Guide.

AI-generated output must be monitored

AI's output is prone to bias, misinformation, and data manipulation, so it's important to verify that AI-generated outputs are accurate and fair.

For example, AI tools can sometimes introduce unfavorable clauses or biases during tasks like contract drafting. Therefore, lawyers must exercise due diligence by reviewing, testing, and supervising AI outputs to ensure they align with ethical standards.

“A lawyer must know, test, look, supervise, understand, and make all necessary adjustments so that while he or she may be using AI as a tool, the ultimate advice is still independently his or hers and is ethically compliant.”
~
Wendy Wen Yu Chang, Hinshaw & Culbertson LLP
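
One way to automate a first pass at this kind of supervision is to measure how far an AI-drafted clause drifts from your approved template before a human reviews it. The sketch below is a hypothetical illustration using Python's standard difflib; the clause library and the review threshold are assumptions, not an established method:

```python
import difflib

# Hypothetical approved-clause library; a real team would pull this from
# its CLM's standard templates.
APPROVED_CLAUSES = {
    "limitation of liability": (
        "Neither party's aggregate liability shall exceed the fees paid "
        "in the twelve months preceding the claim."
    ),
}

def deviation_score(ai_draft: str, clause_type: str) -> float:
    """Score 0-1: how far an AI draft drifts from the approved template."""
    approved = APPROVED_CLAUSES[clause_type]
    similarity = difflib.SequenceMatcher(
        None, approved.lower(), ai_draft.lower()
    ).ratio()
    return round(1 - similarity, 2)

draft = ("Vendor's liability shall be unlimited for any and all claims "
         "arising under this Agreement.")
score = deviation_score(draft, "limitation of liability")
print(f"Deviation from approved clause: {score}")
if score > 0.5:  # illustrative threshold
    print("High drift: route this clause to a lawyer for review")
```

Character-level similarity is a crude proxy, so a score like this should trigger review rather than approve anything on its own.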

What Does the Future Hold? 4 Major Predictions for AI Regulation

As we look ahead, the outlook for AI regulation is beginning to take shape, with several key predictions emerging.

One central theme is the growing consensus on the need for external governance over internal governance. Such an approach is driven by the concern that effective regulation may be compromised if only a select few control AI development internally.

Here are some of the predictions regarding AI regulation that we might see in the foreseeable future:

Formation of external regulatory agencies

One major prediction is the establishment of external regulatory bodies dedicated to overseeing AI. These agencies would be responsible for licensing, monitoring, and ensuring the safe use of AI technologies.

In his testimony before the Senate, OpenAI CEO Sam Altman advocated for an agency that would license AI systems "above a certain scale of capabilities." Such an agency would have the power to revoke licenses and enforce compliance, ensuring AI development aligns with public safety and ethical standards.

Similarly, Gary Marcus, a scientist and co-author of Rebooting AI, has proposed the creation of an international AI agency akin to CERN, the European Organization for Nuclear Research. This body would unite scientists, governments, and companies to develop global rules and norms focused on AI safety.

This idea emphasizes the need for a coordinated international effort to manage the global implications of AI technologies.

Establishment of regulatory processes

There are also predictions of a regulatory process for large-scale AI models similar to the one the FDA uses for drugs: rigorous safety analyses before AI systems are released to the public, followed by ongoing post-deployment monitoring.

This underscores the need to address AI's risks proactively, before they affect society at scale.

External evaluation and validation

US AI laws might also require external evaluation and validation of AI systems, with independent parties assessing the technology to mitigate the risk of bias and ensure objectivity.

This means companies will be encouraged to have their AI processes evaluated by personnel not involved in their development. Some organizations, such as Luminos.Law, a law firm run by Andrew Burt, have already put this into practice.

The Federal Trade Commission (FTC) has long advocated for accountability and independence in AI development.

In its April 2023 guidelines, the FTC recommended that companies "embrace" transparency frameworks, independent standards, and independent audits, and consider opening their data or source code to outside inspection.

This recommendation further strengthens the call for external validation of AI development processes and ensures safer, more transparent processes related to user data use.

Evaluation and auditing of algorithms

The government might implement policies to conduct proper algorithm auditing. Algorithms, the very foundation of AI, are deeply integrated into the technologies shaping our daily experiences—from the ads we see online to the recommendations we receive.

Given their far-reaching impact, evaluating and auditing them is essential to limit the spread of biased, inaccurate, and potentially harmful output.

To that end, the government is likely to mandate thorough audits, conducted either internally or by independent external parties.
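
As a concrete example of what one such audit check might look like, here's a minimal Python sketch of the "four-fifths rule" for disparate impact (a guideline the EEOC uses in employment contexts), applied to hypothetical outcomes from an AI screening model. The data and the threshold handling are illustrative only:

```python
# 1 = favorable outcome (e.g., resume advanced by the model), 0 = unfavorable.
# Hypothetical outcomes for two demographic groups.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # selection rate: 6/8 = 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # selection rate: 3/8 = 0.375

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of a group that received the favorable outcome."""
    return sum(outcomes) / len(outcomes)

# Disparate impact ratio: disadvantaged group's rate over the other's.
ratio = selection_rate(group_b) / selection_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")

if ratio < 0.8:  # the four-fifths guideline
    print("Potential adverse impact: flag the model for deeper review")
```

A real audit would go much further, covering data provenance, error rates across groups, and documentation, but simple checks like this show that algorithmic accountability can be made measurable.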

Contract Management and Compliance with SpotDraft AI

With AI regulation uncertain and AI itself omnipresent, hiding from it isn't the way forward. AI, despite its flaws, will reshape the legal department by becoming a handy tool for managing every aspect of contract management, from drafting to reporting.

This calls for a reliable contract management tool that helps you stay compliant, maintain ethical standards, and work more efficiently.

SpotDraft does just that (and more!). With built-in AI tools, you can turn hours of due diligence into minutes by checking for IP rights, addressing biases during the review process, and understanding post-acquisition obligations, rights, and permissions.

With a built-in library of clauses and the option to add custom clauses, you can be confident that your contracts comply with your company's terms and conditions.
