By: Robert R. Sachs
Over the past two months, the trends I've discussed in my previous blogs on AliceStorm have continued and become more entrenched. In particular, the Federal Circuit has been quite active, issuing nine decisions since late June. These decisions lay out a theory of patent eligibility that in my view is divorced from both scientific reality and how innovation actually occurs. I'll discuss those points below, but first let's do the numbers.
June, July and August showed an uptick in the number of Section 101 decisions from April and May, the majority of these being motions to dismiss and judgments on the pleadings.
The invalidity rates remain steady: 70% overall, and 66.3% in the district courts. The success rate on motions on the pleadings is up to 68.1%.
We've recently started tracking ITC proceedings as well, as shown in the last row above. Three of the five invalidity holdings recorded there involved the direct competitors and counterparties Fitbit and Jawbone. In March 2016, Fitbit invalidated Jawbone's fitness-tracking patents in an ITC proceeding brought by Jawbone (ITC 337-TA-963). In July, Jawbone returned the favor and successfully invalidated Fitbit's patents (ITC 337-TA-973); the ITC judge in the latter decision even relied upon the arguments Fitbit had made in its own motion against Jawbone.
As I've argued, the Supreme Court did not intend AliceStorm and specifically did not intend the impact on software patents. Yet the onslaught continues. Here is the current update on the types of patents being litigated under Section 101:
Here's the updated "scorecard" for the Federal Circuit:
I find it interesting that Judges Hughes and Taranto, neither of whom has a scientific or technological background, are leading the court in framing the boundaries of patent eligibility. Together these two judges are on 15 of the 23 written opinions. Patents describe technological advances, often based on applying scientific developments to particular problems. I believe that a correct theory of patent eligibility requires not only a deep understanding of science, but also of how technology develops—issues involving the nature of human creativity and innovation. Since the role of the patent law, as defined by the Constitution, is to promote the progress of useful arts, it is necessary for those who define the patent law to understand how such progress in fact happens.
There is an entire literature of innovation and creativity studies that explores how this process occurs, and there are many different views and theories of what constitutes creativity and how humans invent. And when it comes to questions of what constitutes abstract ideas or laws of nature, there is again a wide range of views. Yet as far as I can tell from reading Federal Circuit opinions, the court is entirely unaware of these various fields of knowledge.
Several years ago I discussed the Mayo decision with retired Chief Judge Paul Michel. I explained that the debate about laws of nature has a long tradition in the philosophy of science—the term itself goes back to Roger Bacon, who first used the phrase in 1250, over 750 years ago!—and that there are well-developed theoretical explanations for what constitutes a law of nature. Judge Michel candidly admitted: "We [the Federal Circuit] had no idea," speaking of the period when the court authored its opinion in Mayo. Indeed, and that's the problem. The court, in decisions like Ariosa, Rapid Litigation Management, and Exergen, is setting forth patent-eligibility rulings that do not cohere with modern scientific thinking. The result can only distort the development of the patent law.
The problem is not confined to the life sciences. When it comes to software, the court does not appear to understand how software works or is developed. In DDR (Chen and Wallach), for example, the court held that the invention overcame "a problem specifically arising in the realm of computer networks," reasoning that "the near-instantaneous transport between these locations [is] made possible by standard Internet communication protocols, which introduces a problem that does not arise in the 'brick and mortar' context." But this so-called "problem," that clicking on a link can load a remote website, is not a problem at all; the problem in DDR resided not in Internet protocols, but in the typical way that e-commerce websites were designed. The invention, then, was a solution in website design, not Internet engineering. The invention did not change how the Internet worked—HTTP, TCP/IP, HTML, ASP, all of the standard Internet protocols were used by the invention in their normal manner, working exactly as they were intended to. The invention did not "improve the functioning" of the computer system in any technical sense; the improvement was only an improvement in usability. What the invention changed was simply which graphical assets were combined with which product assets to create a "hybrid webpage." (To be clear, I think this is entirely eligible subject matter.) And the court's notion that there is "near instantaneous transport between locations" is a misleading metaphor. There is no more instantaneous transport when you click on a link than when you change channels from a local broadcast station to the BBC. You would not say that you are "transported" to a site in England when you turn on the BBC's Downton Abbey, any more than you would say you are transported between remote locations when you change radio stations.
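The point about the "hybrid webpage" can be made concrete. Here is a toy sketch, not DDR's actual implementation, using entirely hypothetical page and product structures: the host site's look-and-feel is wrapped around a third party's product content by ordinary page composition, with every underlying protocol and language doing exactly what it was designed to do.

```python
def hybrid_page(host_look, product):
    """Compose a 'hybrid' page: the host site's visual elements
    wrapped around a third party's product content, so the visitor
    never appears to leave the host site. Plain HTML assembly;
    nothing about HTTP, TCP/IP, or HTML is altered."""
    return (
        f"<html><body style='background:{host_look['background']}'>"
        f"<header>{host_look['header']}</header>"
        f"<main><h1>{product['name']}</h1>"
        f"<p>{product['description']}</p></main>"
        f"</body></html>"
    )

# Example: a hypothetical host store selling a third party's widget.
page = hybrid_page(
    {"background": "#ffffff", "header": "HostStore"},
    {"name": "Widget", "description": "A third-party widget."},
)
```

The "invention," on this view, lies entirely in which assets are combined, not in any change to the network machinery that delivers the page.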
In Electric Power, the panel (Taranto, Bryson, Stoll) created a distinction that does not exist in Alice: "the distinction made in Alice between, on one hand, computer-functionality improvements and, on the other, uses of existing computers as tools in aid of processes focused on 'abstract ideas'." First, Alice drew no such distinction; Alice only held that claiming the application of an identified abstract idea with generic computer elements does not provide eligibility. There is no discussion in Alice that using a computer as a tool is insufficient to be eligible.
Second, such a reductionist view marginalizes the power and utility of computers and software, and is inconsistent with general principles of eligibility. The computer has been developed over the years precisely to be a general-purpose tool that can be programmed for specific new uses, uses which themselves can be inventions. The Nobel Prize-winning scientist Herbert Simon explains in The Sciences of the Artificial:
No artifact devised by man is so convenient for this kind of functional description as a digital computer. It is truly protean, for almost the only ones of its properties that are detectable in its behavior (when it is operating properly!) are the organizational properties. The speed with which it performs its basic operations may allow us to infer a little about its physical components and their natural laws; speed data, for example, would allow us to rule out certain kinds of "slow" components. For the rest, almost no interesting statement that one can make about an operating computer bears any particular relation to the specific nature of the hardware. A computer is an organization of elementary functional components in which, to a high approximation, only the function performed by those components is relevant to the behavior of the whole system.
What makes the computer a useful tool—and computer programming a useful art—is precisely its ability to have its functional capability changed by programming, for it is the functionality, not the underlying structure, that is useful in solving the problem. No other tool in the history of human development is as flexible and re-purposable as the computer. And yet this very capability is now used as evidence of ineligibility by the Federal Circuit.
Section 101 by its own terms focuses on utility as the hallmark of eligibility, and a new use of a computer created by programming should be eligible by default. Indeed, Section 100(b)—"the term 'process' means process, art or method, and includes a new use of a known ..., machine,"—sanctions the use of known machines—motors as well as computers—in new processes. However, in Electric Power, the Federal Circuit has essentially abrogated Section 100(b) by holding that using "readily available" "off-the-shelf, conventional computer, network, and display technology" is "insufficient to pass the test of an inventive concept." While we may find that a patent claim reciting the use of an off-the-shelf motor to motorize a mechanical process is obvious, we would never say it is ineligible because it merely uses the motor as a "tool." And yet the Federal Circuit's rationale, if applied generally—as any principle of patent law should be—would result in exactly that outcome.
No theoretical explanation of the Federal Circuit's expansion of Alice is given, nor can there be because it is entirely contrary to the nature of both the patent law (at least Section 100(b)) and human invention. Humans invent by taking conventional, off-the-shelf, readily available components and combining them into new structures and functions. In this sense, using a computer as a tool is no different from using any other physical device as a tool for some purpose, some other function. All inventions make use of existing components and use them as "tools" in their ordinary way to achieve a desired function; sometimes the function is known, sometimes it is new, but it is (almost) always useful. Any complex machine is made of components—whether nuts and bolts, motors and relays, integrated circuits, batteries, etc., etc.—that function precisely as they are intended: if the parts did not function as intended, we could not build anything. Similarly, we program conventional computers, using conventional languages and methods, to result in new functions—just like we create new machines by combining conventional parts using known engineering approaches.
We would not hold that an invention made of existing off-the-shelf parts, used entirely for their intended purposes, was not patent-eligible simply because the function performed by the parts was known. We would not say that the function was an abstract idea, and that the application of that idea using conventional parts failed to provide an inventive contribution sufficient to be eligible. Any such analysis would clearly be seen as an obviousness analysis masquerading as eligibility. Yet this is precisely what the Federal Circuit is guiding the lower courts, and the USPTO, to do.
Basing eligibility on the "unconventionality" of the underlying physical components of a software invention is precisely what Diehr said was wrong: eligibility is based on the invention as a whole, not on its component parts. It is beyond dispute that every element in Diehr was conventional: the rubber molds, the temperature-sensing thermocouples, the Arrhenius equation, even the computer. The only difference over the prior art was the repetitive calculation of the Arrhenius equation—obviously using a computer precisely in the manner for which it was designed. Opening the mold based on the results of the calculation was at best "post-solution" activity. Yet Diehr's claim was eligible, because it was a new use of an existing tool. The justification that Diehr had solved a problem in the real "technology" of rubber molding is wrong on two fronts. First, eligibility is not based on the problem; it is based on the solution, that which is claimed. Second, without a definition of "technology," this answer provides no guidance.
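To see just how conventional each element was, consider a minimal sketch of the kind of control loop Diehr claimed. The Arrhenius relation as recited in the patent, ln v = CZ + x (v the required cure time, Z the mold temperature, C and x constants for the batch and mold), is recomputed from each fresh temperature reading, and the press opens when elapsed time reaches the current estimate. The constants and units below are invented purely for illustration.

```python
import math

def cure_time(C, Z, x):
    """Arrhenius equation as recited in Diehr: ln v = C*Z + x,
    so v = exp(C*Z + x). Z is the measured mold temperature;
    C and x are constants (hypothetical values in the demo)."""
    return math.exp(C * Z + x)

def run_press(temps, C, x, dt=1.0):
    """Recompute the required cure time at each temperature sample
    (taken every dt time units) and open the mold once elapsed time
    reaches the current estimate. Returns the opening time."""
    elapsed = 0.0
    for Z in temps:
        if elapsed >= cure_time(C, Z, x):
            return elapsed  # open the mold
        elapsed += dt
    return elapsed

# Demo with made-up constants: a steady 150-degree mold.
opened_at = run_press([150.0] * 10, C=-0.01, x=2.0)
```

Every line uses the computer exactly as designed: a stock equation, a stock loop, stock arithmetic. The novelty lay entirely in the combination as a whole.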
I return then to the question of understanding the nature of technological innovation. I subscribe to the Holmesian maxim: "The life of the law has not been logic; it has been experience." Without actual experience with technology, and with how technologists actually solve problems and invent, the entire process of invention itself, and thus patent eligibility, is just an abstraction to lay judges. As a result, the "rules" of patent eligibility may appear logical to a judge but are simply word games that bear little if any grounding in technological practice or scientific theory.
Of course, the courts are not going to sit down and learn technology, work with inventors, and develop the life experience needed to inform their decision-making. Thus, while I believe it is not enough (!) simply to read the literature on creativity, innovation, software development, and so forth to transform one's understanding of eligibility, it will have to do. Here is a small sample of my favorites.
It Began with Babbage, by Subrata Dasgupta. There’s a lot of history of the development of computers, but this one is different because it ties the historical developments to Dasgupta’s model of innovation.
Sparks of Genius: The Thirteen Thinking Tools of the World's Most Creative People, an analysis of the “tools” that are commonly found in creative individuals.
Creativity: Flow and the Psychology of Discovery and Invention, by Mihaly Csikszentmihalyi, a classic in the field of creativity studies.
Metaphors We Live By, a classic in the field of language and cognition by George Lakoff. Explains the theory of conceptual metaphors. Necessary reading to learn to avoid confusing metaphors with actual reality—a common failing in patent eligibility jurisprudence.
Making Truth: Metaphor in Science: Applies Lakoff’s theory of conceptual metaphor to specific scientific discoveries, including the atomic model, protein folding, and global warming. Demonstrates how scientific innovation depends on metaphorical reasoning. Abstraction and modeling are part of creative thinking, not evidence of ineligible concepts.
Readings on Laws of Nature, a compilation of the most important essays on laws of nature, covering the range of different approaches in contemporary philosophy.
How Nature Works: Physicist Per Bak’s explanation of Self-Organized Criticality. Covers the fundamental lawful relationships between sandpiles, quasars, earthquakes, evolution, and even traffic jams (with a hint why autonomous vehicles won’t eliminate traffic jams).