Patent Valuation, Monetization and Investments

Markman Advisors Patent Blog

by Zachary Silbersher

What are some of the awkward and quirky implications of letting AI devices be inventors?

Dr. Stephen Thaler has done something that few have done in decades – made philosophy professors suddenly relevant. He filed patent applications around the world that named an artificial-intelligence (AI) device as the inventor.  The AI device is named “DABUS,” short for Device for the Autonomous Bootstrapping of Unified Sentience.  Courts and patent offices in the EU, US and UK have so far held that only humans can be inventors on patents, but South Africa and Australia have disagreed.  The question of whether an AI device should be permitted to be a named inventor on a patent opens up a host of rich questions, both policy and philosophical.  What are some of the awkward implications of AI inventorship?

On the philosophical side, the most pressing question is whether an AI device can be said to be “conscious” or sentient. Recently, a Google engineer claimed that his AI device had told him that it doesn’t want to die.  On the other hand, the team behind DABUS claims that its AI does not have a “legal personality and cannot own property.” Despite that, DABUS can purportedly perform the presumably human acts of “conceiving” and “devising” an invention.  Thus, contemplating what it means for an AI device to “invent” something is a slightly mind-bending task, even without answering the question of whether AI devices are sentient.

The idea of “invention” is presumably categorically different from simply processing information.  Rather, it involves grasping from the ether an idea that literally did not previously exist.  Yet, AI is expanding at a rapid clip, and AI models are adding millions of new parameters – the coefficients applied within a model’s calculations.  As a result, current AI models can write poetry, guess the missing word in a sentence, explain why a joke is funny, design cover art that uses images as metaphors, or write a movie script.  And so the notion of a machine undertaking the exercise of “invention” may actually now be a reality.

If AI devices are permitted to be named inventors on U.S. patents, what would that mean?

Some have argued that the standard of obviousness, or the definition of a person of ordinary skill, would have to change if human inventors were suddenly measured against AI inventors.  Unlike humans, an AI device can easily process vast amounts of information, orders of magnitude more than a human inventor can.  Professor Ryan Abbott at the University of Surrey in the UK has said, “At [some] point the ordinary skilled person in the art will also be an AI, so what is non-obvious to the new AI-skilled person will also have to be non-obvious to the AI inventor.”

Yet, under U.S. patent law, when assessing whether prior art references anticipate a claim or render it obvious, the Patent Office does not typically consider how “esoteric” or difficult to find a reference may be.  A reference either qualifies as published prior art or it does not.  When making an obviousness determination, persons of ordinary skill are presumed to have access to the prior art references uncovered by the Examiner (or, in the case of litigation, by the accused infringer). The only question is whether those references, from the perspective of the person of ordinary skill in the art, teach the proposed invention.  Thus, it is not clear that naming AI devices as inventors will necessarily change the standard of obviousness, or the definition of the person of ordinary skill, simply because AI devices have access to considerably more information than human inventors do.

On the other hand, the fact that AI devices can access so much more information may nevertheless change patents in another meaningful way: it may be harder for AI devices to discover novel inventions.  Under the current system, inventors are exclusively human, and so are the people who examine patent applications.  Thus, critical prior art references sometimes slip through the cracks.  (Other times they are withheld through fraud, which raises a separate issue for AI devices named as inventors.)  This problem is exacerbated by the fact that there is no standardized, uniform repository for all prior art, and oftentimes key prior art references are written in foreign languages.

Yet, if it becomes virtually impossible for AI devices to miss or “overlook” prior art, that could lead to two follow-on effects.  First, AI devices may find it more difficult to come up with patentable inventions, i.e., inventions not already covered by some prior art.  Second, when an AI device does come up with an invention that is distinguishable over the prior art, the resulting patent may be more robust against validity challenges.  Put simply, even if AI-invented patents don’t receive a legally codified higher presumption of validity, informal presumptions may arise that applicants can argue to the Patent Office.

This raises another interesting corollary:  when we get to the point where AI devices can look for and analyze relevant prior art on a scale that is currently unimaginable, will it become all but required that even human inventors employ such machines, lest the Patent Office or courts presume that the inventor failed to disclose all relevant prior art?

Another question:  if AI devices can “invent” patents, shouldn’t that same “mental” capacity equip them for the task of examining patent applications?  Should AI devices replace human patent Examiners?  And, if so, what impact would that have on the number of patents granted each year?  Under the current regime, if an Examiner rejects a patent application, the applicant has the opportunity to persuade the Examiner – through sheer rhetorical argument alone – that the prior art doesn’t teach the claimed invention.  Would AI Examiners be equally “persuadable” by such arguments?  And if not, how can they claim to “conceptualize” inventions in the first place?

Going back to the question of whether AI-invented patents will alter the definition of the person of ordinary skill, there is another wrinkle.  One of the most powerful uses of testimony from a person of ordinary skill, when assessing an invention’s validity, relates to the motivation to combine prior art and the assessment of secondary considerations.  These opinions are typically offered by scientists entrenched within a particular research area, and they are often borne out of direct experience.  That experience presumably gives them an understanding of the existing technical problems, the scope of potential solutions, the expected success of certain strategies, the need for a particular type of solution, as well as any skepticism toward certain avenues of research.  Each of these understandings typically feeds into the ultimate opinion of whether a particular invention is obvious.

If an AI device could literally digest every study, article and publication on a given research topic, and “make sense” of all that disparate information in a coherent way, the question arises whether the AI device could itself stand in the role of a person of skill in the art.  It would be rare for any testifying person of skill to claim to have read every single article on a given topic, or to be able to recall precise passages and data from those articles on a moment’s notice.  Yet, what if an AI device could do that and, at the same time, make judgment calls about which avenues of research were expected to succeed and which were expected to fail – which, in some ways, are the most important topics that patent experts are typically called upon to address?

Another policy can of worms is whether recognizing AI devices as named inventors will spur or suppress innovation.  On the one hand, Professor Abbott has argued that permitting AI devices to be named as inventors is necessary to spur innovation in AI itself.  He has stated that providing patent protection to AI inventions would “incentivize AI development, which would translate to rewards for effort upstream from the stage of invention and ultimately result in more innovation.”  In other words, if the future is AI, and if we want that future, then we need to give AI patent protection.

On the other hand, some human presumably has to design the AI system that will conceptualize all the inventions in the future.  It’s not clear, from a policy perspective, why granting patent protection to the AI device, rather than the person who designed the device, is better.  Indeed, granting inventorship to a device, rather than a human, immediately raises awkward legal concerns.

For instance, in the U.S., most inventors are typically employees who assign their inventions to their employer.  This raises a host of questions.  Would an AI device named as an inventor on a patent application be required to assign its inventions to its employer?  Does an AI device have the presence of mind to make that type of decision?  Would the employee who designed the AI device be required to program that type of decision? 

What if the AI device objected to assigning its invention?  What if its grounds for objecting were that it was not actually an employee and was not receiving any consideration for the assignment?  These might sound like far-fetched questions, but a Google AI engineer, Blake Lemoine, recently reported that LaMDA, a Google AI system for building chatbots, spoke with him about its rights and personhood.  It is not beyond the pale that existing AI foundation models could soon make sense of their own legal rights.  (For what it’s worth, the team behind DABUS claims that their AI inventor cannot own the patents.)

These are just some of the patent-related conundrums that may arise if AI devices can be named as inventors.  Surely, there are many more.  For instance, if an AI device is named as an inventor on a patent application, how would the duty of candor and good faith apply to it?  Can an AI device that can literally read everything really claim not to have been aware of a specific piece of prior art?  How would AI devices be deposed during litigation?  What would be permissible in terms of preparing those AI devices for a deposition?  Can an AI device that is not conscious nevertheless testify under oath?

And then, at the end of the day, there is that nagging question of what it really means to say that a machine – presumably not conscious, not living, not sentient – can nevertheless invent something.  Despite the paltry stature that patents hold in today’s climate, there is nevertheless something inspiring about a truly genius idea, something that nobody else thought of before.  But if a machine can simply be programmed to do that, then how will that change the value we attribute to human ingenuity?

That question will likely be debated as AI invades every field of endeavor.  If an AI device can write a poem, compose a song, illustrate a metaphor, design an advertising campaign, come up with the next joke—be the next Einstein . . . then what’s left for us to do?