The AI Legal Pulse: Legal and Tech Updates on Disruptive Technologies
The AI landscape is rapidly evolving. To help you stay abreast of the various developments, we share a recap of the latest legal and tech updates related to AI and other emerging technologies.
The “Godfather” of AI resigns from Google to warn about superintelligent AI.
Geoffrey Hinton, a pioneer in deep learning and neural networks, resigned from Google, citing concerns about the future of AI. Hinton says he is concerned about superintelligent AI taking control away from people. Superintelligent AI is a hypothetical, highly advanced form of AI that surpasses human intelligence across all domains and could bring about significant societal and technological change. Even though such AI remains hypothetical, Hinton fears that a superintelligent system could learn to manipulate people in pursuit of the goals it is given. Hinton is not, however, advocating an end to AI development, which offers great benefits to humanity, from helping develop new technologies to improving medical diagnosis. Instead, he is calling for more resources to be devoted to understanding and reducing the risks AI poses and for international collaboration on those efforts.
Hinton’s comments came shortly after dozens of AI experts signed an open letter calling for a pause on the training of systems more powerful than GPT-4, the model underlying ChatGPT, until robust safety measures could be implemented.
The United States Supreme Court declines to review the requirement for a human inventor.
In Thaler v. Vidal (No. 22-919), Dr. Stephen Thaler sought to patent two inventions (a “Neural Flame” and a “Fractal Container”) created by his AI system, the “Device for the Autonomous Bootstrapping of Unified Science” (DABUS). Thaler submitted patent applications to the United States Patent and Trademark Office (USPTO), naming DABUS as the sole inventor. The USPTO denied the applications for failing to list any human as an inventor. Dr. Thaler challenged that conclusion in the US District Court for the Eastern District of Virginia, which agreed with the USPTO and granted it summary judgment. Dr. Thaler appealed to the Court of Appeals for the Federal Circuit (CAFC), which affirmed the district court’s decision and concluded that the Patent Act requires an “inventor” to be a natural person.
While Dr. Thaler contended that DABUS was the inventor because it was responsible for the invention’s conception, which is what defines inventorship under the Patent Act, both the USPTO and the CAFC insisted that the Patent Act requires a human inventor. The US Supreme Court denied Dr. Thaler’s petition for certiorari, declining to review the CAFC’s decision and leaving the human-inventor requirement in place. The rationale behind the lower courts’ rulings was that patent law aims to encourage human innovation, and awarding patents to AI systems would not yield the same outcome.
The United States Copyright Office issued guidance for registering works that include AI-generated content.
The Copyright Office issued a statement of policy to clarify its practices for examining and registering works that contain material generated by the use of AI technology. The Office reiterated that copyright can protect only material that is the product of human creativity, and that the term “author” in the United States Copyright Act excludes non-humans.
The guidance requires applicants to disclose any AI-generated content in a work submitted for registration and to briefly describe both that content and the human author’s contributions to the work. AI-generated content that is more than de minimis must be explicitly excluded from the copyright claim. Applicants may claim protection only for the human-authored portions of the work.
For pending applications and issued registrations that do not comply with this new policy, the Office recommends that corrective action be taken. Such action includes contacting the Office to correct a pending application or filing an application for a Supplementary Registration to correct an issued registration. Applicants who fail to update the public record after obtaining a registration for material generated by AI risk losing the benefits of the registration.
The Copyright Office may issue additional guidance in the future related to registration or other copyright issues implicated by AI technology.
The European Union’s (EU) AI Act moved ahead with a new copyright transparency requirement for AI.
Two years ago, the EU proposed a draft of the AI Act, which aims to introduce a common regulatory and legal framework for AI applications of all types across every sector except the military. Recently, the draft moved to the next stage of negotiations, the trilogue, in which the European Parliament, the Council (representing member states), and the European Commission will settle the final details of the bill.
The AI Act classifies AI applications by risk and regulates them accordingly: low-risk applications are left unregulated, medium- and high-risk applications must undergo a compulsory conformity assessment, and certain critical applications already covered by existing EU law must satisfy the AI Act’s requirements as part of that existing assessment. The proposal also prohibits certain applications outright, including remote biometric identification, subliminal manipulation of persons, exploitation of the vulnerabilities of specific groups in a harmful way, and social credit scoring. The AI Act would establish a European Artificial Intelligence Board to ensure the regulation is respected, and it relies on the New Legislative Framework to govern entry to the EU internal market, with conformity assessment conducted either through self-assessment or by a third party.
An earlier proposal would have banned the use of copyrighted material to train generative AI systems like ChatGPT altogether. Instead, to keep the EU at the forefront of regulating AI technology, lawmakers added a transparency requirement obliging operators of generative AI platforms to disclose the copyrighted materials used to train their systems.
OpenAI (ChatGPT) addresses the EU’s data protection concerns.
In March, Italian authorities took the precautionary measure of blocking the use of ChatGPT. Data regulators in France, Germany, Ireland, and Canada also began investigating how the OpenAI system collects and uses data. These actions were taken due to concerns about potential privacy violations and the indiscriminate scraping of content from the internet, including personal data, without proper consent.
Recently, OpenAI complied with the Italian Data Protection Authority’s (Garante) requests by updating its privacy policy, clarifying how personal data is used to develop its language models, and making the privacy policy more visible during the signup process. OpenAI also added an age-confirmation step to an Italian welcome page and the signup process, provided more information about user data controls (including how to export and delete ChatGPT data), and shared more information about how user data improves model performance. Additionally, OpenAI created an opt-out setting (the Chat History & Training option) for users who do not want their personal data to be used for training.
China introduces draft regulations for generative AI products.
China has released draft measures for regulating generative AI products, requiring Chinese tech companies to register such products with the country’s cyberspace agency and submit them to a security assessment before release. Companies will be responsible for the “legitimacy of the source of pre-training data” and must ensure that content reflects “core socialist values.” Companies are also required to ensure AI does not call for the “subversion of state power” or the overthrow of the ruling Chinese Communist Party, among other restrictions. The rules come as part of a broader regulatory crackdown on China’s tech industry.
China has previously regulated AI through the Personal Information Protection Law, Cybersecurity Law, and Data Security Law, as well as local policies such as the Regulations for the Promotion of the AI Industry in the Shanghai Municipality and Shenzhen Special Economic Zone.
If you have questions about legal developments related to AI, please reach out to a member of LP’s Intellectual Property Group.