The Federal Circuit has issued six decisions since December 1, 2015, all of course invalidating the patents in suit: four per curiam (Clear With Computers v. Altec Indus.; Cloud Satchel v. Amazon.com; Wireless Media Innovations v. Maher Terminals; and Priceplay.com v. AOL Advertising) and two with opinions, Vehicle Intelligence v. Mercedes-Benz USA and Mortgage Grader, Inc. v. First Choice Loan Servs. Inc.
Vehicle Intelligence involved U.S. Patent 7,394,392, written by a patent attorney, on the use of expert systems to determine whether an equipment operator (e.g., the driver of a car) was impaired by intoxication, fatigue, physical disability, or other factors.
The patent specification is very general and does not actually describe the implementation of any specific expert system. The general idea appears to be the use of "screening modules."
Claim 8 recited a method:
selectively testing said equipment operator when said screening of said equipment operator detects potential impairment of said equipment operator; and
controlling operation of said equipment if said selective testing of said equipment operator indicates said impairment of said equipment operator, wherein said screening of said equipment operator includes a time-sharing allocation of at least one processor executing at least one expert system.
The Federal Circuit held that this was nothing more than "the abstract idea of testing operators of any kind of moving equipment for any kind of physical or mental impairment." But what makes testing equipment operators for impairments abstract? Certainly this is not a purely mental process, since it deals with detecting physical impairments, chemical impairments, and so forth, using physical devices to obtain the data for the test. And it results in a physical control of machines: "controlling operation of said equipment." The court does not explain how testing for impairments is abstract.
Under the Alice test, if the abstract idea is really this general, then using an expert system is "significantly more." An expert system is not a native component or functionality of a generic computer system, but a highly specific type of artificial intelligence, different in design, architecture, and application from other types of AI systems. And if the game of patent eligibility is played on the borderless field of analogy, it is easy to argue this claim is like the claim in Diamond v. Diehr, in that it involves continuously measuring a physical variable (screening the equipment operator here, measuring the temperature in the rubber mold in Diehr) and then performing a control action in response to the result (controlling operation of the equipment here, opening the rubber mold in Diehr). If Diehr was eligible, so too is this.
What’s really going on then is that the court is using Section 101 here as a proxy for enablement under Section 112. What bothers the court is the lack of implementation details:
Yet the ’392 patent is completely devoid of any explanation of what these hardware and software differences are, let alone any explanation how to implement them using the existing equipment modules.
Raise your hand if this sounds like enablement to you.
That said, the court is definitely on to something: the details here are insufficient. The hard part of expert system development is not figuring out that one can apply an expert system to a given problem domain. Rather, the hard part is what is called knowledge engineering: representing, in a knowledge base, the specific knowledge about the application domain, along with the rules used by an inference engine that describe how that knowledge is applied to specific situations. The knowledge representation deals with how the objects and facts of the domain are represented, and the rules describe the relationships between the facts, typically in the form of "IF…THEN" expressions.
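To make the knowledge-engineering point concrete, here is a minimal sketch of what a knowledge base and rule set of this kind might look like, with a toy forward-chaining inference loop. Every fact name, rule, and threshold below is invented for illustration; nothing here is drawn from the '392 patent.

```python
# A toy expert system: a knowledge base of facts plus IF...THEN rules
# evaluated by a simple forward-chaining inference engine.
# All names and thresholds are hypothetical, for illustration only.

# Knowledge base: facts about the operator and the equipment state.
facts = {
    "blink_rate_per_min": 32,   # from a hypothetical eye-tracking module
    "lane_deviation_cm": 45,    # from a hypothetical lane sensor
    "speed_kph": 95,
}

# Rules: the IF...THEN expressions a knowledge engineer would elicit
# from domain experts. Each rule pairs a condition on the facts with a
# conclusion that is asserted back into the knowledge base.
rules = [
    # IF blink rate is high AND lane deviation is large THEN flag fatigue.
    (lambda f: f["blink_rate_per_min"] > 25 and f["lane_deviation_cm"] > 30,
     "flag_potential_fatigue"),
    # IF fatigue is flagged AND speed is high THEN limit the throttle.
    (lambda f: f.get("flag_potential_fatigue") and f["speed_kph"] > 80,
     "limit_throttle"),
]

def infer(facts, rules):
    """Forward-chain: keep firing rules until no new conclusions appear."""
    fired = True
    while fired:
        fired = False
        for condition, conclusion in rules:
            if conclusion not in facts and condition(facts):
                facts[conclusion] = True
                fired = True
    return facts

result = infer(dict(facts), rules)
print(sorted(k for k, v in result.items() if v is True))
# → ['flag_potential_fatigue', 'limit_throttle']
```

Even in this toy form, the hard work is visibly in the facts and the rules, not in the generic inference loop, which is exactly the disclosure the court found missing.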
In the context of a system for controlling equipment operation based on the large variety of possible impairments mentioned in the patent (chemical, physical, mental, etc.), there should first be a description of the body of facts representing attributes of an operator (e.g., the various attributes mentioned above), as well as the facts that describe the equipment being operated: its various controls and state (speed, direction, steering angle, throttle position, brake position, etc.). Second, one would need a set of specific interrelated rules that correspond to the "logic" an expert would use to determine that the operator is impaired based on those facts, and then what specific actions to take on the controls of the equipment. As the court notes, the patent is essentially silent on these points: indeed, the term "rule," which one would always expect to find in an expert system patent, is not even mentioned. All of this is glossed over in two sentences:
The court, however, trips over itself later on when pursuing this ineligibility-via-enablement approach. The court states:
This is problematic for two reasons. First, it's not the job of the claim to explain the how, if the how itself is not what the applicant invented. In other words, the how (the species) matters when the what (the genus) is already known. If it was already known to use an expert system to test the operator of a vehicle and then control the vehicle as a result, then how it was done here would matter, but only for purposes of novelty and non-obviousness, not eligibility.
Second, the court is wrong on the facts: claim 4 does in fact recite "what tests to select from":
Let us assume that there was an independent claim that recited a specific test from this list, as well as a specific control operation, something like this:
Would the court have come to a different outcome? I doubt it. As to the first and second sets of limitations, it would have called those mere data gathering of the obvious or necessary data for this application, and the third set of limitations would have been equally dismissed as mere post-solution activity of the kind necessary to control the equipment when the operator is impaired. The court's suggestion that these details would have mattered offers a cold, false hope for other patentees.
If the current approach of the USPTO in overgeneralizing the holdings of Federal Circuit decisions continues, we’ll see patent examiners use the holding here to reject patent applications on expert systems, machine learning, and artificial intelligence with increased vigor. We may not care so much about how that impacts companies filing patents on using machine learning to place advertisements on web pages, but we will very much care how it impacts inventions using machine learning to identify new drugs, predict and prevent automobile collisions in autonomous vehicles, or identify and warn a person of a heart attack hours before one happens.
Let us return to the court’s confirmation of the alleged abstract idea:
How does the court determine that this is the correct level of generalization? Why limit to moving equipment? Why is it not:
And why limit to operators of equipment? Why not:
testing humans for any kind of physical or mental impairment.
And if you can go "up" in levels of abstraction, why can't you go down? Why isn't the idea:
testing operators of any kind of moving equipment for any kind of physical or mental impairment that impacts the ability to safely operate the vehicle?
After all, you can have lots of impairments that have no impact on the ability to operate a vehicle; in fact, everyone has some impairments. I have nearsightedness, bad short-term memory, and a host of others. I'm sure you have yours. What's important to this invention is solving the problem of identifying operators who are so impaired as to be unsafe, not merely impaired per se.
As you probably know, the notion of the "inventive concept" comes from Parker v. Flook. There Justice Stevens set forth the framework that was adopted and refashioned by Justice Breyer in Mayo v. Prometheus. Stevens's approach to eligibility was based on the "point of novelty": identifying that which the inventor considered his invention (the inventive concept) and then evaluating whether that was an abstract idea, law of nature, etc. Indeed, Stevens criticized the majority in Diehr precisely because he felt that they "fail to understand or completely disregard the distinction between the subject matter of what the inventor claims to have discovered—the § 101 issue—and the question of whether that claimed discovery is in fact novel—the § 102 issue." Diehr, 450 U.S. 175, 211-213 (Stevens, J., dissenting). It is not hard to imagine Stevens making exactly the same critique against the present Court in its reliance on whether something is routine, conventional, or well known.
Now I don't agree with Stevens that the point of novelty approach is correct (I'm a dyed-in-the-wool, claim-as-a-whole Diehrian), but I believe that he was correct in separating the categorical question of eligibility from the qualitative question of inventiveness. The Alice framework asks whether the claim is directed to an abstract idea: that is, what the applicant believed he invented, as expressed by the claim. Indeed, the Court cautions precisely against excessive boiling down of the invention to the broadest possible generalization:
Kevin Roe, the inventor here, did not purport to invent, or even claim, testing for "any kind" of impairment as suggested by the Federal Circuit. And he certainly did not intend that the invention operate before or after use of the vehicle: logically, the invention does not cover testing before the operator gets in the car or after she exits. The invention was directed to testing in real time as the operator uses the vehicle. So perhaps the idea is:
testing operators of any kind of moving equipment in real time, during operation of the moving equipment, for any kind of physical or mental impairment that impacts the ability to safely operate the vehicle
Now when you get here, you have a specific category of problem to be solved, and it’s a technical problem. And Mr. Roe solved it using a technical solution–he did not purport to invent using any kind of testing system, but specifically an expert system. Thus, perhaps the further refinement is:
testing operators of any kind of moving equipment in real time, during operation of the moving equipment, for any kind of physical or mental impairment that impacts the ability to safely operate the vehicle using an expert system to evaluate the operator.
Now this is likely the closest correct generalization of the invention. And if that's an "abstract idea," then likely anything is abstract.
And lo and behold: the U.S. Patent Classification system has a number of classifications for inventions that do this:
Class 340, Electronic Communications, Subclass 576: Drive capability: This subclass is indented under subclass 573.1. Subject matter responsive to the capability of a person to operate a vehicle; e.g., intoxication.
Class 180, Motor Vehicles, Subclass 272: Responsive to absence or inattention of operator, or negatively reactive to attempt to operate vehicle by person not qualified mentally or physically to do so: This subclass is indented under subclass 271. Vehicle wherein the means either (a) initiates action to safeguard the vehicle or an occupant (e.g., applies brake, closes throttle, sounds alarm, etc.) when the operator is not at his usual station or fails to positively indicate his presence and/or his attentiveness (e.g., a “deadman-type” control), or (b) foils an attempt to start or drive the vehicle if the would-be operator is unable to demonstrate his mental or physical capacity to do so (e.g., a “coded” ignition lock for dissuading one who is inebriated).
These are, not surprisingly, the very classifications that the patent examiner used to classify the Roe patent. In other words, the patent examiner got it right. And it shows that the proper level of abstraction is how one of skill in the art would understand the invention, not how a lay court without any technological expertise would. There are over 1,000 patents in these classifications. That is evidence not that the concept of testing operators is abstract, but rather that this is a technological problem recognized by technological experts in the domain and for which there are technological solutions.
What the Federal Circuit has done, along with the majority of district courts, is confuse the notion that you can categorize an invention into some broader class with the notion of whether the invention itself is abstract. I have to return to my favorite quote:
The difficulty which American courts…have had…goes back to the primitive thought that an “invention” upon which the patent gives protection is something tangible. The physical embodiment or disclosure, which, in itself is something tangible is confused with the definition or claim to the inventive novelty, and this definition or claim or monopoly, also sometimes called “invention” in one of that word’s meanings is not something tangible, but is an abstraction. Definitions are always abstractions. This primitive confusion of “invention” in the sense of physical embodiment with “invention” in the sense of definition of the patentable amount of novelty, survives to the present day, not only in the courts, but among some of the examiners in the Patent Office.
E. Stringham, Double Patenting, Washington, D.C., Pacot Publications (1933) (emphasis added).
Patent claims are definitions, and definitions are necessarily abstractions-as-generalizations from the specific instances that they define. That the confusion Stringham spoke of in 1933 continues today only emphasizes how little progress has been made by the courts in understanding the nature of patents.
There's one final point about this decision that is odd, and that is what it does not do: there is no mention of the first Federal Circuit case to address the patent eligibility of expert systems, SmartGene, Inc. v. Advanced Biological Labs, 555 Fed. Appx. 950 (Fed. Cir. 2014). (Full disclosure: I was appellate counsel for ABL.) In SmartGene, the Federal Circuit ruled that claims for an expert system for selecting treatment regimens (i.e., complex combinations of drugs and therapeutics) for medical conditions such as HIV were not patent eligible because "every step is a familiar part of the conscious process that doctors can and do perform in their heads." The problem, however, is that ABL's patent did disclose the 'details about how the "expert system" works' that the court found lacking in Vehicle Intelligence's patents.
Since all expert systems work in essentially the same way (they encode the "rules" and "expert knowledge" that human experts use to make decisions in complex domains), the mental-steps analysis that the SmartGene court used to invalidate ABL's patents sweeps too broadly and wipes out all inventions of new applications of expert systems. Conversely, if the kind of specific details the court found missing would have saved Vehicle Intelligence's patents, they should have saved ABL's. And if ABL's patents were invalid as merely "mental steps," then that should have been the reason Vehicle Intelligence's were as well, and the discussion of the "details" is beside the point. In short, these two decisions cannot be squared; perhaps that's why the Vehicle Intelligence panel (Moore, Clevenger, and Reyna) did not cite SmartGene (Taranto, Lourie, Dyk). It does not bode well for companies developing new artificial intelligence systems and applications that two panels of the Federal Circuit take inconsistent approaches to the eligibility of this type of technology.
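The structural claim that "all expert systems work in essentially the same way" can be seen in a sketch: the same generic inference loop serves both a vehicle-impairment domain and a treatment-selection domain, and only the facts and rules change. Every rule and fact name below is invented for illustration and comes from neither patent.

```python
# Illustrative sketch: one generic forward-chaining inference engine,
# reused unchanged across two toy domains. All rules and fact names
# are hypothetical, drawn from neither the ABL nor the Roe patent.

def infer(facts, rules):
    """Generic forward-chaining loop, identical for every domain."""
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if conclusion not in facts and condition(facts):
                facts[conclusion] = True
                changed = True
    return facts

# Domain 1: operator impairment (toy rule).
impairment_rules = [
    (lambda f: f.get("slurred_speech") and f.get("slow_reaction"),
     "suspect_intoxication"),
]

# Domain 2: treatment-regimen selection (toy rule).
treatment_rules = [
    (lambda f: f.get("resistant_to_drug_A"),
     "consider_regimen_B"),
]

print(infer({"slurred_speech": True, "slow_reaction": True},
            impairment_rules))
print(infer({"resistant_to_drug_A": True}, treatment_rules))
```

The engine is domain-blind; whatever reasoning the SmartGene court deemed "mental steps" in the medical knowledge base applies with equal force to the vehicle knowledge base, which is why the two decisions are hard to reconcile.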