Executive Order: IP owners seek to safely navigate the ‘promise and peril’ of AI | McAfee & Taft

Discussion of artificial intelligence has become ubiquitous, particularly generative artificial intelligence (AI) given its potential to create new content, which some reports estimate could add up to $4.4 trillion annually to the global economy.

In an effort to coordinate a federal approach to this new technology, President Joe Biden issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence on October 30, 2023. Invoking authority under the Defense Production Act, which was used by former President Trump during the COVID-19 pandemic, the Order provides varying regulatory principles and priorities and directs several agencies to promulgate standards and guidelines related to AI — which the Order says “holds extraordinary potential for both promise and peril.”

The Order directs the Departments of Commerce, Energy, Health and Human Services, Homeland Security, Transportation, and Education, as well as the Copyright Office and U.S. Patent and Trademark Office (USPTO), to address potential risks and benefits from the use of AI across financial services, healthcare, biotechnology, energy, transportation, telecommunications, intellectual property, competition, labor, education, housing, law enforcement, consumer protection, cybersecurity, national security, privacy, and trade. Among concerns raised by the Order are national security risks posed by AI, algorithmic discrimination, and the use of AI in the creation and theft of intellectual property. The vast majority of the guidance, reports and standards required under the Order are yet to be seen; however, the following are some of the deadlines under the Order in the coming months:

  • February 27, 2024: guidance for USPTO patent examiners and applicants regarding inventorship and the use of AI from the USPTO Director (assuming the Copyright Office has completed publication of its new AI study)
  • March 28, 2024: a public report on best practices to manage AI-related cybersecurity risks to financial institutions from the Secretary of Treasury
  • April 27, 2024: best practices for employers to mitigate AI-related harms to employees from the Secretary of Labor
  • July 26, 2024: establishment of guidelines and best practices for developing safe and secure AI systems from the Secretaries of Commerce, Energy, and Homeland Security as well as the deadline for a proposed National Security Memorandum on AI

The regulations and guidance ultimately provided under the Order could have a significant impact on how AI may be used and what internal and external practices companies should adopt when dealing with AI. Companies should not wait on federal regulatory guidance before taking precautions, however. Whether developing their own or using others’ AI tools, companies face significant legal and business risks related to AI that should be considered whether or not their business faces an immediate impact from the President’s recent Order.

Enforcement considerations

With AI’s booming popularity, many are rushing to utilize AI models to evaluate internal data, including customer information. But privacy laws in states such as Colorado and California could restrict this use. In addition, the use of consumer data to train AI models without consumer consent has already been the focus of federal regulators.

Take Everalbum, Inc., for example. In January 2021, the Federal Trade Commission (FTC) filed an administrative complaint against the technology company for its use of customer photos and videos to train AI facial recognition models without consent. Customers had uploaded the photos and videos to Everalbum’s online platform for their own use, but Everalbum compiled certain datasets from those uploads without user permission. In May 2021, the FTC settled with Everalbum for AI and privacy violations, requiring Everalbum to destroy the AI algorithms and models it created. This “algorithmic disgorgement” may become a common method of enforcement by regulators and poses a significant penalty where millions are spent on developing AI tools.

Everalbum is not the only AI-related enforcement action by the FTC. In addition to recently issuing a 20-page investigative letter to OpenAI, the FTC issued a report to Congress warning of various issues related to AI. Concerns raised by the report include discrimination against protected classes and blocking content in violation of free speech. The FTC’s full scope of enforcement in the area remains to be seen, but it is clear that the agency has myriad concerns related to the development and use of AI in the private sector.

Intellectual property loss

AI also poses a risk to companies’ intellectual property rights.

The most obvious risk of AI is the accidental disclosure of company trade secrets by a company’s employees. Although employee use of public AI tools has now become common, employees may not realize that input into AI tools may not be confidential and that the terms of use for many tools expressly allow the developer to use or disclose that input. So, an employee’s use of AI for even the simplest task risks public disclosure of sensitive, proprietary, or trade secret information. Even where the AI tool used is not public and is provided by a vendor under a formal agreement, incidental loss of ownership or confidentiality of company information could result if the third-party agreement does not expressly delineate how company information input must be protected and how and whether it may be used to train the AI model.

A corollary risk posed by AI is to proprietary software. In-house and third-party software developers are increasingly using AI to generate code; however, these tools may incorporate or reproduce open source software in what they generate. Many open source software licenses require that software including or derived from open source be made freely available to the public. As a result, if AI-generated code is used in or combined with proprietary company software, the ownership of that proprietary software could be at risk if the AI-generated code includes or is derived from open source.

Inability to protect AI-created intellectual property

For companies developing AI solutions, there are additional considerations as to what may be protected. As noted above, President Biden’s Order has asked the USPTO Director to provide guidance as to what — if anything — may be patentable if invented using AI. The U.S. Copyright Office has already provided guidance on AI as it pertains to copyright, and has found that content produced by AI is not registrable unless it contains human-authored aspects of the work that are “independent of” the AI-generated content, and then only the human-authored aspects are registrable. The Copyright Office is in the process of a new AI study that should be published soon, and the President’s recent Order should spur additional guidance on this topic. But — for now — companies face a substantial uphill battle in claiming intellectual property rights in AI-generated content.

Third-party intellectual property claims

Another potential risk companies should consider is whether content created by AI infringes third-party copyright. Even though an AI user may not ask for infringing material, an AI tool may nevertheless provide content that infringes third-party rights, and a business may not realize it until the third-party owner comes calling. The terms of use for many AI tools disclaim liability for these types of situations; however, the tide may be shifting. Last September, Microsoft announced its Copilot Copyright Commitment assuring users that they can “use Microsoft’s Copilot services and the output they generate without worrying about copyright claims.” IBM subsequently made a similar announcement stating it would indemnify customers against third-party infringement claims arising from use of certain AI-generated output. Most recently, OpenAI announced its new Copyright Shield, claiming it would “step in and defend [its] customers, and pay the costs incurred, if [OpenAI users] face legal claims around copyright infringement.” The specific terms of these indemnity assurances are yet to be seen, however, and may include carve-outs and exceptions that businesses should watch for when using AI platforms.

While there is no doubt that AI can be a powerful tool for companies, it also poses significant risks. Although the President’s recent Order may clarify some of the issues posed by this new technology, you should address AI-related risks now by carefully examining how your company is using AI. For more information on the legal concerns posed by AI and other emerging technologies, please feel free to contact any one of our Intellectual Property attorneys.
