New AI Laws Are Set in Motion at Year’s End | Schwabe, Williamson & Wyatt PC

The year 2023 will be regarded as a pivotal year for advances in AI. As the year comes to an end, lawmakers are hurrying to establish rules for the evolving technology, hoping both to foster AI’s promise of innovation and to curb the threat of social and economic harms. Parties concerned about the risks of AI are urging lawmakers to regulate it strictly in order to protect individual rights and freedoms, prevent economic upheaval, and guard against other perceived threats. Others advocate against AI regulation, arguing that it would stifle innovation and hinder human progress.

Three recent actions seek to regulate artificial intelligence—in the West, at least—and we describe each here before offering our AI-related observations for businesses during this period of uncertainty.

End-of-Year Rush to Regulate AI in the EU, Canada, and California

The EU AI Act

On Friday, December 9, lawmakers in the European Union struck a deal on what could represent the world’s first comprehensive law to regulate artificial intelligence. The deal sets into motion sweeping new requirements for the use of artificial intelligence that are expected to apply in early 2026. Pressure to implement the EU’s Artificial Intelligence Act, first proposed in 2021, has been mounting due to the rise of popular generative AI tools such as ChatGPT, and recent headlines have increased public concern about AI. The deal struck by EU Council and Parliament negotiators is expected to settle disputes among lawmakers that were thought to pose roadblocks for the AI Act. For example, it was reported that lawmakers were previously not aligned regarding the regulation of foundation AI models and national-security exceptions to the AI Act. EU leaders believe this provisional deal will pave the way toward the AI Act’s approval. In the coming weeks, key technical details of the act will be drafted and undergo review. Once completed, the AI Act must be endorsed by the EU Council and Parliament to become law.

If passed, the AI Act will likely require businesses that use AI systems and are subject to EU jurisdiction to:

          • Meet transparency obligations; for example, by disclosing when content has been generated by AI, so individuals can make informed decisions about its use;
          • Develop and make available technical documentation for certain AI systems;
          • Implement governance structures and allocate compliance obligations intended to monitor and mitigate AI risks; and
          • Refrain from certain prohibited uses of AI systems, namely those most likely to result in harm.

If the act is approved, failure to comply will expose businesses to significant fines, in some cases as much as 35 million euros or 7% of global turnover, depending on the infringement and the size of the business.
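As a rough illustration only (the function and figures below reflect widely reported proposals, not the final text of the act), if the maximum fine for the most serious infringements is the higher of a fixed amount or a percentage of global annual turnover, a business could sketch its potential exposure like this:

```python
# Hypothetical sketch of maximum EU AI Act fine exposure. Assumes, per
# widely reported proposals, that the penalty ceiling is the HIGHER of a
# fixed cap (35 million euros) or 7% of global annual turnover; the exact
# tiers and amounts depend on the act's final, approved text.

def max_fine_exposure(global_turnover_eur: float,
                      fixed_cap_eur: float = 35_000_000,
                      turnover_pct: float = 0.07) -> float:
    """Return the assumed upper bound of the fine for the most serious infringements."""
    return max(fixed_cap_eur, turnover_pct * global_turnover_eur)

# For a company with 1 billion euros in global turnover, 7% of turnover
# (70 million euros) exceeds the fixed 35 million euro cap:
print(max_fine_exposure(1_000_000_000))
```

Under this assumption, the percentage-based ceiling dominates for larger businesses, while the fixed amount sets the floor of maximum exposure for smaller ones.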

Canada’s Artificial Intelligence and Data Act (AIDA)

On November 28, Canada moved a step closer to implementing its first AI regulatory framework with the government’s publication of the full text of amendments to its draft Artificial Intelligence and Data Act (AIDA). The amendments incorporated significant feedback submitted to Canadian lawmakers by diverse stakeholders in response to an initial legislative attempt, Bill C-27, that sought to ensure AI would be developed and deployed safely and responsibly. The published amendments call for:

          • Greater flexibility in the definition and classification of “High-Impact Systems,” which are central to the AIDA’s key obligations.
          • Alignment with the EU AI Act, which substantially broadens the scope of AIDA and makes it more responsive to future technological changes.
          • Clearer responsibilities for and higher accountability of persons who develop, manage, and release high-impact systems.
          • Specific obligations for generative AI systems, such as ChatGPT, that would not otherwise be categorized as “high-impact systems.”
          • Greater clarity on the defined role of the AI & Data Commissioner.

The AIDA provides for robust enforcement and penalties, which would include administrative monetary penalties (AMPs) and the prosecution of regulatory and criminal offences.

California’s Draft AI-related Rules under the CCPA

On November 27, the California Privacy Protection Agency released a much-anticipated first draft of its rulemaking on automated decision-making technologies (ADMT) under the California Consumer Privacy Act as amended by the California Privacy Rights Act (CCPA). The draft aims to provide consumers with key protections when businesses use ADMT, which it broadly defines as “any system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking.” The publication of this draft sets into motion what may become the most consequential artificial intelligence regulation in the U.S., with formal rulemaking procedures expected to start in early 2024.

As drafted, the rules would require businesses subject to the agency’s jurisdiction to enable consumers to make informed decisions about ADMT by:

          • Providing “Pre-use Notices” to inform consumers about how a company intends to use ADMT and how to exercise their ADMT-related rights;
          • Giving consumers the ability to opt out of ADMT, with very limited exceptions; and
          • Enabling consumers to obtain additional, detailed information about ADMT, such as information about the company’s ADMT logic, parameters, and outputs.

As part of the draft rules, the California Privacy Protection Agency kicked off discussions of key industry topics, such as whether the ADMT rules should apply to profiling of consumers for behavioral advertising; additional restrictions on profiling children; and the use of consumers’ personal information to train ADMT. Such discussions could have significant effects for online advertising and the use of data-scraping techniques in the development of AI.

Failure to comply with the agency’s rules could result in fines of up to $2,500 per violation or $7,500 per intentional violation, with no cap on the total.

While the EU AI Act is poised to become the world’s first comprehensive law specifically regulating AI, many regulators have stated their intent to leverage existing laws to take action against illegal business practices involving AI. Two examples:

In the US, the Federal Trade Commission has repeatedly voiced its view that it has the authority, the expertise, and the force of existing law to hold businesses accountable for abuses and harms caused by their use of AI. In November, the FTC noted:

Although AI-based technology development is moving swiftly, the FTC has decades of experience applying its authority to new and rapidly developing technologies. Vigorously enforcing the laws over which the FTC has enforcement authority in AI-related markets will be critical to fostering competition and protecting developers and users of AI, as well as people affected by its use. Firms must not engage in deceptive or unfair acts or practices, unfair methods of competition, or other unlawful conduct that harms the public, stifles competition, or undermines the potentially far-reaching benefits of this transformative technology. As we encounter new mechanisms of violating the law, we will not hesitate to use the tools we have to protect the public.

On November 21, 2023, the FTC authorized a compulsory process to expedite nonpublic investigations involving products and services that use or claim to be produced using artificial intelligence (AI) or claim to detect its use. The FTC will leverage this process to identify uses of AI that lead to deceptive or unfair acts or practices, unfair methods of competition, or other unlawful conduct that harms the public or competition in the marketplace.

Similarly, in a whitepaper published in March 2023, the UK made clear it has no plans to adopt new legislation to regulate AI as part of its deliberate “pro-innovation” approach. Rather, the UK has stated it will rely on its existing regulators, such as the UK Information Commissioner’s Office, to use their authority to steer businesses toward the use of responsible AI in their respective areas of responsibility.

Our AI-related Observations

In our data-driven economy, businesses may want to embrace the use of AI responsibly to benefit from its transformational powers, in spite of regulatory uncertainty. Doing so is not without risk, given the dynamic legal landscape. Such risks might be lessened if those businesses:

  1. Develop and maintain an AI policy that addresses:
    • The procurement and use of third-party AI tools and systems, such as ChatGPT;
    • The development and use of in-house, first-party AI tools and systems;
    • The implementation of automated decision-making; and
    • The use of first-party, third-party, and publicly available data to train AI tools and systems.

Given the popularity of generative AI tools, employees are likely using them at work. Many organizations have enabled AI features in popular productivity software. For example:

  • Finance teams may be using generative AI tools to leverage sales data, as well as third-party market data to improve forecasting.
  • Developers may be leveraging such technology to improve the quality of their code.
  • Businesses may have already enabled AI features in commonly used applications to assist in writing emails, taking notes, or creating presentations.

Individuals in your organization may also be developing their own AI applications or training large language models using customer data or information found online. These uses can create substantial benefits for your business, though they also pose risks.

Businesses that do not adopt or update AI policies may miss easy wins, such as the opportunity to use existing vendor-vetting processes to evaluate third-party AI tools, which could help them stay in compliance. Implementing an AI policy, even as a work in progress, sets a tone for responsible uses of AI that fuel innovation.

  2. Implement processes to identify, review, and monitor existing and new uses of AI. Firms may want to begin documenting existing uses of AI across their operations, even if such uses have not been formally vetted or approved. Such documentation may facilitate future mitigation and compliance controls as the laws evolve.
  3. Assess compliance with applicable existing laws and make any necessary investments. For example, compliance with existing privacy laws, such as the GDPR and the implemented provisions of the CCPA, will likely make it easier for your firm to adhere to new AI requirements.
  4. Encourage a culture of documentation. Transparency and accountability are centerpieces of the AI laws and regulations described above, and both will necessitate documentation of your company’s use of AI tools and systems. As these laws and regulations come into effect, businesses will need technical documentation related to their use or development of AI tools and systems, and that documentation can be started now. For example, your firm may want to start maintaining documentation related to:
    • The performance of any vetting or risk assessments related to the use or development of AI tools and systems;
    • The inputs and outputs of AI tools and systems; and
    • The logic underpinning AI tools and systems, particularly those involved in automated decision-making.

Heading into 2024, we are closely monitoring updates to the laws, regulations, and industry standards that will shape the evolution of AI globally, and anticipate providing updates about significant developments.
