By: Robert R. Sachs
As explored in my previous post, the USPTO has focused upon the inclusion of glossaries in patent applications as a potential mechanism for addressing ambiguity in patent claims and discussed the idea at length during last month’s Software Partnership Meeting in Berkeley.
Unfortunately, the Office has developed this proposal without sufficiently considering how glossaries function in ordinary discourse. A glossary is a collection of textual “glosses” of technical or difficult words. A gloss is an explanation or definition of a word or expression. The assumption then is that the words of some patent claims are sufficiently “technical” or “difficult” to understand that some additional explanation is required, above and beyond their use either generally, or more particularly, in the context of the patent specification.
But there are inherent, inescapable problems that will arise from the Office’s proposed use of glossaries to provide “explanations or definitions.” These problems result from confusion about how we understand the meaning of words in language and how glossaries or “definitions” function. There are numerous sources of a word’s meaning, but two primary ones are its “sense” and its “reference.” A word’s reference is the set of things in the world to which the word refers. The “reference” of cat is the group of animals that we refer to when using the word.
A word’s sense is its dictionary definition (or more generally, the set of its definitions). Webster’s Third New International Dictionary provides this definition of cat:
A long-domesticated carnivorous mammal that is usually regarded as a distinct species (Felis catus syn. F. domestica) though probably ultimately derived by selection from among the hybrid progeny of several small Old World wildcats (as the Kaffir cat and the European wildcat), that occurs in several varieties distinguished chiefly by length of coat, body form, and presence or absence of tail, and that makes a pet valuable in controlling rodents and other small vermin but tends to revert to a feral state if not housed and cared for.
Now obviously, when you use the word cat, unless you are a felinologist, you certainly do not have this definition in your head. Even this definition, lengthy as it is, admits of a certain amount of open-endedness: Cats “probably” derived from certain Old World species, but not necessarily; they are distinguished “chiefly” but not solely by certain physical attributes (which are not at all specified here); they “tend” to revert to a wild state, but not always. What we see here is typical of definitions: a general outline of features that are useful for identifying things that are cats, but which themselves are not fixed.
Perhaps you use a simpler definition, such as “a small domesticated carnivorous mammal with soft fur, a short snout and retractile claws. It is widely kept as a pet or for catching mice.” But that’s no good either, because we know that some cats—that is, animals that are referenced by the word—do not meet this definition, such as various breeds of hairless cats or cats without claws. We would not say these animals are not cats—nor would we say that the definition is wrong. And note that this definition specifies particular physical attributes—soft fur, short snout—that are absent from Webster’s. Is the definition in Webster’s wrong then? Can they both be correct? How does one decide? Given even this simple example, it is clear that we do not understand language simply by reliance on dictionary definitions. How then do we understand what people mean when they use the word cat? This is a question that linguists and philosophers of language have discussed and debated for hundreds of years, and I do not propose to answer that question here.
For our purposes, it is enough to note that any dictionary definition is an incomplete source of the meaning of a word. The dictionary definition of almost any word has two notable features. First, most words have multiple meanings, sometimes related, sometimes not. This is called polysemy. Typically, the more common the word, the more meanings it has. The word rose has more than a dozen different meanings, including the flower, a color, a smell, a part of a compass, a virtuous person, a gemstone cut and a type of window—as well as the past tense of the verb rise. The complete definition of cat includes the animal, as well as a male jazz enthusiast, a boat and a malicious woman. Even something as apparently simple as on has multiple different meanings: there are at least 28 distinct meanings of on, depending “on” (that’s one of them) how it is used. Second, the individual definitions vary in their precision, and often are expressed in terms of a thing’s attributes, qualities, functions, relationships and so forth, as well as synonyms and exemplary usages.
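The polysemy problem can be made concrete with a small sketch. The toy lexicon below is hypothetical (the sense list paraphrases the senses of rose mentioned above, not any real dictionary entry), but it illustrates the structural point: a definitional lookup returns every sense of a word, and nothing in the entry itself selects among them—that selection is the work of context.

```python
# Hypothetical toy lexicon illustrating polysemy: one headword, many senses.
# The senses paraphrase those mentioned in the text; they are illustrative only.
lexicon = {
    "rose": [
        "a prickly shrub or its fragrant flower",
        "a pinkish color",
        "a circular card marked with compass directions (compass rose)",
        "a gemstone cut with a flat base and triangular facets",
        "a circular window with radiating tracery (rose window)",
        "past tense of the verb 'rise'",
    ],
}

def look_up(word):
    """A definitional lookup returns *all* senses of a word; the entry
    itself carries no information about which sense a sentence intends."""
    return lexicon.get(word, [])

senses = look_up("rose")
print(len(senses))  # 6 candidate senses; the entry alone cannot resolve which is meant
```

The design point is that disambiguation is not a property of the entry: any procedure that picks one sense must consult something outside the lexicon—the surrounding words, the topic of discourse, the reader’s knowledge—which is precisely what a “stand-alone” definition forswears.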
In linguistic terms, the formal dictionary definition is the main source of the semantics of a word. But as we can see, we do not understand words in a sentence simply by their dictionary meanings. The definitions are understood as guides to the meaning of the word built upon a larger context of knowledge and experience. That is, dictionary definitions do not “stand alone” but instead they assume the use of two other key elements: syntax, the rules that govern the form of expressions, and pragmatics, the principles describing the use of a word in practice. To grasp the meaning of a word, the reader must have an understanding of the context of the word, a certain knowledge about the world, and an understanding of the syntax and grammar of the language.
Just as individual words are susceptible to multiple definitions, so too are most sentences. During the Roundtable at Berkeley, it was suggested that claims should have “one plausible meaning.” This is simply not a tenable goal. Every claim has multiple plausible meanings, but we quickly rule out a majority of them and settle on the one that is most probable, again relying on the lexical and experiential context to guide our selection.
The Office has proposed that glossaries “stand alone”: “The glossary definitions must ‘stand alone’ and cannot simply refer to other sections or text within the specification or incorporate by reference a definition (or portion) from another document.” Using glossaries in this manner is contrary to how they are typically used, and to how speakers of a language use their words. Dependence on glossaries further ignores the essential role that syntax and grammar play in providing meaning in claims.
Assume that you needed to know the scope of the word cat in a patent claim (this is not unreasonable, as there are thousands of patents addressing the needs of cats), and you consulted Webster’s dictionary definition, set forth above. First, there is the obvious problem that you must know the meaning of the other words in a definition to understand the definition at all. Thus, you have to know what carnivorous and domesticated mean before you understand the definition of cat. Second, you need to understand the cultural custom of humans keeping animals as pets—that a pet is not merely a domesticated animal, but something more intimate, like a companion. Cows, horses, goats and donkeys are domesticated but are not typically considered to be pets. Someone from a culture that does not keep pets would not understand why one would do so and would fail to understand an important aspect of cathood. Finally, you need to know that this kind of pet sometimes (but not always) kills mice, rats and so forth, and why this activity is “valuable” to the human keeper. But not all cats kill rats—and we would not say that Fluffy is not a cat just because he does not catch mice. Obviously then, dictionary definitions are not meant to express the complete meaning of a word “standing alone,” but within the framework of knowledge and experience that the reader is expected to have. This framework provides the experiential context for a word’s meaning.
However, knowledge and experience of the world is insufficient to provide the complete meaning of a word or expression, since the presence of other words in the sentence itself—or, more importantly, in the larger context of the document—is also necessary. If I say to you, “that cat picked up the rose,” my meaning depends on the lexical context of our conversation—what our prior words were up to the point of my utterance. Are we discussing kittens playing with flowers in the park or are we people watching at Birdland in Manhattan?
Another aspect of lexical context is the syntax of the expression containing the term of interest. Syntax is an essential element to understanding meaning:
The apparatus has a rod that is connected by a frame to a bar.
The apparatus has a frame that is connected by a bar to a rod.
The same words are in both sentences, but the different orders result in entirely different meanings. A glossary defining rod, frame and bar, would provide no assistance in determining the meaning of these sentences.
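This syntax-blindness can be demonstrated mechanically. The sketch below uses a hypothetical glossary (the terms and definitions are invented for illustration, not drawn from any patent): because a stand-alone glossary contributes only per-word definitions, it “sees” the two rod/frame/bar sentences as identical, even though their word order gives them entirely different structural meanings.

```python
# A hypothetical "stand-alone" glossary: each term defined in isolation.
# The definitions below are invented for illustration only.
glossary = {
    "rod": "an elongated cylindrical member",
    "frame": "a rigid supporting structure",
    "bar": "an elongated rectangular member",
}

def glossary_view(sentence):
    """Return the set of definitions a stand-alone glossary contributes
    to a sentence -- per-word lookups, with word order discarded."""
    words = sentence.lower().replace(".", "").split()
    return {glossary[w] for w in words if w in glossary}

s1 = "The apparatus has a rod that is connected by a frame to a bar."
s2 = "The apparatus has a frame that is connected by a bar to a rod."

# The two sentences describe different structures, yet the glossary's
# contribution to each is exactly the same set of definitions.
print(glossary_view(s1) == glossary_view(s2))  # True
```

The sketch is deliberately crude, but the limitation it exhibits is not an artifact of the crudeness: any resource that defines terms one at a time, without reference to the surrounding claim language, necessarily discards the word-order information that distinguishes the two sentences.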
The above sentences are perfectly “normal” English, but unacceptable as claim limitations. Patent claims have syntactical forms that consistently violate the ordinary rules of grammar. Consider the following clauses:
generating a control signal from an input signal using a regulation signal;
generating using a regulation signal a control signal from an input signal;
from an input signal, generating a control signal using a regulation signal;
from an input signal, generating using a regulation signal a control signal;
These clauses each use the same words but have different meanings, some of which are ambiguous. Normal English speakers—such as judges and juries—would wince at most, if not all, of them, but patent drafters and examiners think nothing of drafting in this manner.
The definitions in a dictionary and conventional glossaries are guides, not strict masters like Humpty Dumpty. We would draw a perverse stare if, in discussing our affection for Maine Coons with our neighbor, we insisted that his Ukrainian Levkoy was not a cat because it was hairless, declawed, and never caught or killed a rat. Experiential context matters. Likewise, we would be mystified if our neighbor inquired whether our tomcat preferred Thelonious Monk to Sun Ra. Lexical context matters.
The very fact that you knew immediately and unquestionably that a Ukrainian Levkoy was a type of domesticated cat kept as a pet—and neither a species of tiger nor a lover of the European free jazz style—without looking in a dictionary, even though you’ve likely never heard of it before, demonstrates that we do not rely on dictionaries to provide the meaning of words we do not know: we use lexical and experiential context, and only if that fails do we turn to the dictionary.
This is not to say that the meaning of a word is entirely dependent on its context. Words do have core meanings, and often resist use in contexts in which such meaning is violated. For example, I can say “Adam filled the glass with water,” but I cannot say “Adam poured the glass with water.” This has to do with the causal differences between filling something (a container) and pouring something (content). Similarly, I can say “Adam sprayed water on the driveway,” but I cannot say “Adam sprayed cat on the driveway,” because linguistically cat is a count noun, not a mass noun like water, and there is no shared experiential context in which we use cats as liquids.
There is a further, and equally important, reason we do not constrain our understanding of the meaning of words to dictionary definitions, and that is the use of tropes. Tropes are the figurative use of words or phrases in a manner where the literal meaning of the words is not true or does not make sense, but the context of usage provides a non-literal meaning that does make sense. Tropes include metaphor, simile, hyperbole, metonymy, synecdoche and others. Of particular relevance to patents is the use of metaphor. Metaphor is a key mechanism by which inventors describe inventions that previously did not exist, and hence for which there are no precise words or phrases that encapsulate the inventive concepts. The use of metaphor in science and technology is well documented. See, Brown, Making Truth: Metaphor in Science, University of Illinois Press (2003); Lakoff & Johnson, Metaphors We Live By, University of Chicago Press (1980); Schön, Generative metaphor, and Kuhn, Metaphor in Science, both in Ortony (ed.), Metaphor and Thought, Cambridge University Press (1993); Dasgupta, Technology and Creativity, Oxford University Press (1996); and Gentner & Jeziorski, Historical shifts in the use of analogy in science, in Gholson et al. (eds.), The Psychology of Science: Contributions to Metascience (1989).
One well-known example of metaphor in the development of technology is the commonly used “desktop” metaphor for the user interface of a computer, along with “folders,” “windows,” “trash cans,” “menus” and the like. These features are now so ingrained in our daily use of computers that we entirely forget that they are metaphorical constructs: there is no “desktop” inside your computer, let alone a “trash can” or a “window.” If the inventors of these features were limited to the literal dictionary definitions of these familiar words at the time, it would have been impossible to so fully capture and express the nature of these inventions in such succinct words. Metaphors work precisely because they allow the speaker to leverage relevant conceptual aspects from a source domain to a target domain (that real desktops are used to organize documents into folders) while ignoring the irrelevant aspects (that desktops are made of wood). Thus, by using these words in a decidedly non-literal sense, inventors are able to leverage the experiential context associated with the literal meanings into a new and different lexical context. Steven Pinker, in The Stuff of Thought, provides this explanation of how metaphors are used to express inventive ideas:
Scientists constantly discover new entities that lack an English name, so they often tap a metaphor to supply the needed label: selection in evolution, kettle pond in geology, linkage in genetics, and so on. But they aren’t shackled by the content of the metaphor, because the word in its new scientific sense is distinct from the word in the vernacular (a kind of polysemy). As scientists come to understand the target phenomenon in greater depth and detail, they highlight the aspects of the metaphor that ought to be taken seriously and pare away the aspects that should be ignored….The metaphor evolves into a technical term for an abstract concept that subsumes both the target phenomenon and the source phenomenon. It’s an instance of something that every philosopher of science knows about scientific language and that most laypeople misunderstand: scientists don’t “carefully define their terms” before beginning an investigation. Instead they use words loosely to point to phenomenon in the world, and the meanings of the words become gradually more precise as the scientists come to understand the phenomenon more thoroughly.
This ability to extend the meaning of words through metaphorical uses is an essential feature of human language and creativity. The very act of inventing is to create something new, something that has not been known before; in many cases there simply are no existing words or expressions that describe the invention, and so the inventor must either invent an entirely new word—the term escalator was coined by its inventor Charles Seeberger—or use an existing word in a metaphorical sense. In either case, language plays a critical role in patent specifications and claims that cannot be exhausted fully by the use of dictionary definitions.
Thus, we cannot determine the meaning of a given word or phrase simply by looking it up in a dictionary or a glossary, no matter how artfully constructed. A word’s meaning in a claim depends every bit as much on the words that surround it in the claim, the syntax of the claim, its use in the patent specification itself—in sum, its lexical context—as well as its use in the community of relevant readers, namely those of ordinary skill in the art, to explain a new concept—its experiential context.
We can now see that a “stand-alone” glossary would operate in a manner entirely contrary not just to how dictionaries are typically used, but also to how speakers of a language use words to both communicate and create new meanings. A fully decontextualized, hyper-literal approach that treats the “definition” provided for a term as precise and absolutely limiting would improperly ignore the lexical context of the patent specification or other claim language, the experiential context of the relevant community, and the expressive, metaphorical nature of inventive speech.
The foregoing points address some of the fundamental linguistic problems that arise from the underlying assumptions the Office appears to make about the use of glossaries. In the final installment of this three-part series, I will address the specific implementation problems and strategic behavior among patent prosecutors that would arise as a result of required patent glossaries, as well as the methodological problems of the USPTO’s study design.
The role of language in the law is itself a deep field of study, starting at least as far back as Jeremy Bentham’s A Fragment on Government (1776). Modern scholars include John Austin, H.L.A. Hart, and Ronald Dworkin. An introduction to the issues in law and the use of language is found at Law and Language, Stanford Encyclopedia of Philosophy, at http://plato.stanford.edu/entries/law-language/#1.