Monday, April 16, 2018

Comprehensive Data about Federal Circuit Opinions

Jason Rantanen (Iowa) has already blogged about his new article, but I thought I would mention it briefly as well. He has created a database of information about Federal Circuit opinions. An article describing it is forthcoming in the American University Law Review and is available on SSRN; the abstract is here:
Quantitative studies of the U.S. Court of Appeals for the Federal Circuit's patent law decisions are almost more numerous than the judicial decisions they examine. Each study painstakingly collects basic data about the decisions (case name, appeal number, judges, precedential status) before adding its own set of unique observations. This process is redundant, labor-intensive, and makes cross-study comparisons difficult, if not impossible. This Article and the accompanying database aim to eliminate these inefficiencies and provide a mechanism for meaningful cross-study comparisons.

This Article describes the Compendium of Federal Circuit Decisions ("Compendium"), a database created to both standardize and analyze decisions of the Federal Circuit. The Compendium contains an array of data on all documents released on the Federal Circuit's website relating to cases that originated in a federal district court or the United States Patent and Trademark Office (USPTO) - essentially all opinions since 2004 and all Rule 36 affirmances since 2007, along with numerous orders and other documents.

This Article draws upon the Compendium to examine key metrics of the Federal Circuit's decisions in appeals arising from the district courts and USPTO over the past decade, updating previous work that studied similar populations during earlier time periods and providing new insights into the Federal Circuit's performance. The data reveal, among other things, an increase in the number of precedential opinions in appeals arising from the USPTO, a general increase in the quantity - but not necessarily the frequency - with which the Federal Circuit invokes Rule 36, and a return to general agreement among the judges following a period of substantial disuniformity. These metrics point to, on the surface at least, a Federal Circuit that is functioning smoothly in the post-America Invents Act world, while also hinting at areas for further study.
The article has some interesting details about opinions and trends, but I wanted to point out that this is a database now available for use in scholarly work, which is really helpful. The inclusion of non-precedential opinions adds a new wrinkle as well. Hopefully some useful studies will come of this.

Tuesday, April 10, 2018

Statute v. Constitution as IP Limiting Doctrine

In his forthcoming article, "Paths or Fences: Patents, Copyrights, and the Constitution," Derek Bambauer (Arizona) identifies (and provides some data to support) a discrepancy in how boundary and limiting issues are handled in patent and copyright. He notes that, for reasons he theorizes, big copyright issues are often "fenced in" by the Constitution - that is, the Constitution limits what can be protected. But patent issues are often resolved by statute, because the Constitution creates a "path" that Congress may follow. Thus, he argues, we have two types of IP emanating from the same constitutional source but treated differently, for unjustifiable reasons.

The article is forthcoming in Iowa Law Review, and is posted on SSRN. The abstract is here:
Congressional power over patents and copyrights flows from the same constitutional source, and the doctrines have similar missions. Yet the Supreme Court has approached these areas from distinctly different angles. With copyright, the Court readily employs constitutional analysis, building fences to constrain Congress. With patent, it emphasizes statutory interpretation, demarcating paths the legislature can follow, or deviate from (potentially at its constitutional peril). This Article uses empirical and quantitative analysis to show this divergence. It offers two potential explanations, one based on entitlement strength, the other grounded in public choice concerns. Next, the Article explores border cases where the Court could have used either fences or paths, demonstrating the effects of this pattern. It sets out criteria that the Court should employ in choosing between these approaches: countermajoritarian concerns, institutional competence, pragmatism, and avoidance theory. The Article argues that the key normative principle is that the Court should erect fences when cases impinge on intellectual property’s core constitutional concerns – information disclosure for patent and information generation for copyright. It concludes with two examples where the Court should alter its approach based on this principle.
The article is an interesting theory piece that has some practical payoff.

Wednesday, April 4, 2018

Tun-Jen Chiang: Can Patents Restrict Free Speech?

Guest post by Jason Reinecke, a 3L at Stanford Law School whose work has been previously featured on this blog.

Scholars have long argued that copyright and trademark law have the potential to violate the First Amendment right to free speech. But in Patents and Free Speech (forthcoming in the Georgetown Law Journal), Professor Tun-Jen Chiang explains that patents can similarly restrict free speech, and that they pose an even greater threat to speech than copyrights and trademarks because patent law lacks the doctrinal safeguards that have developed in that area.

Professor Chiang convincingly argues that patents frequently violate the First Amendment and provides numerous examples of patents that could restrict speech. For example, he uncovered one patent (U.S. Patent No. 6,311,211) claiming a “method of operating an advocacy network” by “sending an advocacy message” to various users. He argues that such “advocacy emails are core political speech that the First Amendment is supposed to protect. A statute or regulation that prohibited groups from sending advocacy emails would be a blatant First Amendment violation.”

Perhaps the strongest counterargument to the conclusion that patents often violate free speech is that private enforcement of property rights is generally not subject to First Amendment scrutiny, because the First Amendment only applies to acts of the government, not private individuals. Although Professor Chiang has previously concluded that this argument largely justifies copyright law’s exemption from the First Amendment, he does not come to the same conclusion for patent law for two reasons.

Monday, April 2, 2018

Masur & Mortara on Prospective Patent Decisions

Judicial patent decisions are retroactive. When the Supreme Court changed the standard for assessing obviousness in 2007 with KSR v. Teleflex, it affected not just patents filed after 2007, but also all of the existing patents that had been filed and granted under a different legal standard—upsetting existing reliance interests. But in a terrific new article, Patents, Property, and Prospectivity (forthcoming in the Stanford Law Review), Jonathan Masur and Adam Mortara argue that it doesn't have to be this way, and that in some cases, purely prospective patent changes make more sense.

As Masur and Mortara explain, retroactive changes might have benefits in terms of imposing an improved legal rule, but these changes also have social costs. Most notably, future innovators may invest less in R&D because they realize that they will not be able to rely on the law preserving their future patent rights. (Note that the private harm to existing reliance interests from past innovators is merely a wealth transfer from the public's perspective; the social harm comes from future innovators.) Moreover, courts may be less likely to implement improvements in patent law from the fear of upsetting reliance interests. Allowing courts to choose to make certain changes purely prospectively would ameliorate these concerns, and Masur and Mortara have a helpful discussion of how judges already do this in the habeas context.

The idea that judges should be able to make prospective patent rulings (and prospective judicial rulings more generally, outside habeas cases) seems novel and nonobvious and right, and I highly recommend the article. But I had lots of thoughts while reading about potential ways to further strengthen the argument:

Wednesday, March 28, 2018

Oracle v. Google Again: The Unicorn of a Fair Use Jury Reversal

It's been about two years, so I guess it was about time to write about Oracle v. Google. The trigger this time: in a blockbuster opinion (and I never use that term), the Federal Circuit has overturned a jury verdict finding that Google's use of 37 API headers was fair use, holding instead that the reuse could not be fair use as a matter of law. I won't describe the ruling in full detail - Jason Rantanen does a good job of it at Patently-O.

Instead, I'll discuss my thoughts on the opinion and some ramifications. Let's start with this one: people who know me (and who read this blog) know that my knee-jerk reaction is usually that an opinion is not nearly as far-reaching and worrisome as they think. So, it may surprise a few people when I say that this opinion may well be as worrisome and far-reaching as they think.

And I say that without commenting on the merits; right or wrong, this opinion will have real repercussions. The upshot is: no more compatible compilers, interpreters, or APIs. If you create an API language, then nobody else can make a competing one, because to do so would necessarily entail copying the same structure of the input commands and parameters in your specification. If you make a language, you own the language. That's what Oracle argued for, and it won. No Quattro Pro interpreting old Lotus 1-2-3 macros, no competitive C compilers, no debugger emulators for operating systems, and potentially no competitive audio/visual playback software. This is, in short, a big deal.

So, what happened here? While I'm not thrilled with the Court's reasoning, I also don't find it to be so outside the bounds of doctrine as to be without sense. Here are my thoughts.

Tuesday, March 27, 2018

Are We Running out of Trademarks? College Sports Edition

As I watched the Kansas State Wildcats play the Kentucky Wildcats in the Sweet Sixteen this year, it occurred to me that there are an awful lot of Wildcats in the tournament (five, to be exact, or nearly 7.5% of the teams). This made me think of the interesting new paper by Jeanne Fromer and Barton Beebe, called Are We Running Out of Trademarks? An Empirical Study of Trademark Depletion and Congestion. The paper is on SSRN, and is notable as that rare paper that is both (a) IP and (b) empirical to be published by the Harvard Law Review. The abstract of the paper is here:
American trademark law has long operated on the assumption that there exists an inexhaustible supply of unclaimed trademarks that are at least as competitively effective as those already claimed. This core empirical assumption underpins nearly every aspect of trademark law and policy. This Article presents empirical evidence showing that this conventional wisdom is wrong. The supply of competitively effective trademarks is, in fact, exhaustible and has already reached severe levels of what we term trademark depletion and trademark congestion. We systematically study all 6.7 million trademark applications filed at the U.S. Patent and Trademark Office (PTO) from 1985 through 2016 together with the 300,000 trademarks already registered at the PTO as of 1985. We analyze these data in light of the most frequently used words and syllables in American English, the most frequently occurring surnames in the United States, and an original dataset consisting of phonetic representations of each applied-for or registered word mark included in the PTO’s Trademark Case Files Dataset. We further incorporate data consisting of all 128 million domain names registered in the .com top-level domain and an original dataset of all 2.1 million trademark office actions issued by the PTO from 2003 through 2016. These data show that rates of word-mark depletion and congestion are increasing and have reached chronic levels, particularly in certain important economic sectors. The data further show that new trademark applicants are increasingly being forced to resort to second-best, less competitively effective marks. Yet registration refusal rates continue to rise. The result is that the ecology of the trademark system is breaking down, with mounting barriers to entry, increasing consumer search costs, and an eroding public domain. 
In light of our empirical findings, we propose a mix of reforms to trademark law that will help to preserve the proper functioning of the trademark system and further its core purposes of promoting competition and enhancing consumer welfare.
The paper is really well developed and interesting. They consider common law marks as well as domain names. Also worth a read is Written Description's own Lisa Larrimore Ouellette's response, called Does Running Out of (Some) Trademarks Matter?, also in Harvard Law Review and on SSRN.

Wednesday, March 21, 2018

Blurred Lines Verdict Affirmed - How Bad is It?

The Ninth Circuit ruled on Williams v. Gaye today, the "Blurred Lines" verdict that found infringement and some hefty damages. I've replied to a few of my colleagues' Twitter posts today, so I figured I'd stop harassing them with my viewpoint and just make a brief blog post.

Three years ago this week, I blogged here that:
People have strong feelings about this case. Most people I know think it was wrongly decided. But I think that copyright law would be better served if we examined the evidence to see why it was wrongly decided. Should the court have ruled that the similarities presented by the expert were simply never enough to show infringement? Should we not allow juries to consider the whole composition (note that this usually favors the defendant)? Should we provide more guidance to juries making determinations? Was the wrong evidence admitted (that is, is my view of what the evidence was wrong)?
But what I don't think is helpful for the system is to assume straw evidence - it's easy to attack a decision when the court lets the jury hear something it shouldn't or when the jury ignores the instructions as they sometimes do. I'm not convinced that's what happened here; it's much harder to take the evidence as it is and decide whether we're doing this whole music copyright infringement thing the right way.
My sense then was that it would come down to how the appeals court would view the evidence, and it turns out I was right. I find this opinion to be...unremarkable. The jury heard evidence of infringement, and ruled that there was infringement. The court affirmed because that's what courts do when there is a jury verdict. There was some evidence of infringement, and that's enough.

To be clear, I'm not saying that's how I would have voted were I on the jury. I wasn't in the courtroom.

So, why are (almost) all my colleagues bent out of shape?

First, there is a definite view that the only thing copied here was a "vibe," and that the scènes à faire and other unprotected expression should have been filtered out. I am a big fan of filtration; I wrote an article on it. I admit to not being an expert on music filtration. But I do know that there was significant expert testimony here that more than a vibe was copied (which was enough to avoid summary judgment), and that once you're over summary judgment, all bets are off on whether the jury will filter out the "proper" way. Perhaps the jury didn't; but that's not what we ask on an appeal. So, the only way you take it from a jury is to say that there was no possible way to credit the plaintiff's expert that more than a vibe was copied. I've yet to see an analysis based on the actual evidence in the case that shows this (though I have seen plenty of folks disagreeing with the plaintiff's expert); if someone has one, please point me to it and I'll gladly post it here. The court, for its part, is hostile to such "technical" parsing in music cases (in a way that it is not in photography and computer cases). But that's nothing new; the court cites old law for this proposition, so its hostility shouldn't be surprising, even if it is concerning.

Second, the court seems to double down on the "inverse ratio" rule:
We adhere to the “inverse ratio rule,” which operates like a sliding scale: The greater the showing of access, the lesser the showing of substantial similarity is required.
This is really bothersome, because just recently, the court said that the inverse ratio rule shouldn't be used to make it easier to prove improper appropriation:
That rule does not help Rentmeester because it assists only in proving copying, not in proving unlawful appropriation, the only element at issue in this case
I suppose that you can read the new case as just ignoring Rentmeester's statement, but I don't think so. First, the inverse ratio rule, for better or worse, is old Ninth Circuit law, which a panel can't simply ignore. Second, it is relevant to the question of probative copying (that is, was there copying at all?), which was disputed in this case, unlike in Rentmeester. Third, there is no indication that this rule had any bearing on the jury's verdict. The inverse ratio rule was not part of the instruction that asked the jury to determine unlawful appropriation (and the defendants did not appear to appeal the inverse ratio instruction), nor was the rule stated in the terms used by the court anywhere in the jury instructions.
The defendants appealed that instruction, but only on filtration grounds (which were rejected), not on inverse-ratio grounds.

In short, jury determinations of music copyright are messy business. There's a lot not to like about the Ninth Circuit's intrinsic/extrinsic test (I'm not a big fan, myself). The jury instructions could probably be improved on filtration (there were other filtration instructions, I believe).

But here's where I end up:
  1. This ruling is not terribly surprising, and is wholly consistent with Ninth Circuit precedent (for better or worse)
  2. The ruling could have been written more clearly to avoid some of the consternation and unclarity about the inverse ratio rule (among other things)
  3. This ruling doesn't much change Ninth Circuit law, nor dilute the importance of Rentmeester
  4. This ruling is based in large part on the evidence, which was hotly disputed at trial
  5. If you want to win a copyright case as a defendant, better hope to do it before you get to a jury. You can still win in front of the jury, but if it doesn't go your way the appeal will be tough to win.

Tuesday, March 20, 2018

Evidence on Polarization in IP

Since my co-blogger Lisa Ouellette has not tooted her own horn about this, I thought I would do so for her. She, Maggie Wittlin (Nebraska), and Greg Mandel (Temple, where he is Dean, no less) have a new article forthcoming in the UC Davis Law Review called What Causes Polarization on IP Policy? A draft is on SSRN, and the abstract is here:
Polarization on contentious policy issues is a problem of national concern for both hot-button cultural issues such as climate change and gun control and for issues of interest to more specialized constituencies. Cultural debates have become so contentious that in many cases people are unable to agree even on the underlying facts needed to resolve these issues. Here, we tackle this problem in the context of intellectual property law. Despite an explosion in the quantity and quality of empirical evidence about the intellectual property system, IP policy debates have become increasingly polarized. This disagreement about existing evidence concerning the effects of the IP system hinders democratic deliberation and stymies progress.
Based on a survey of U.S. IP practitioners, this Article investigates the source of polarization on IP issues, with the goal of understanding how to better enable evidence-based IP policymaking. We hypothesized that, contrary to intuition, more evidence on the effects of IP law would not resolve IP disputes but would instead exacerbate them. Specifically, IP polarization might stem from "cultural cognition," a form of motivated reasoning in which people form factual beliefs that conform to their cultural predispositions and interpret new evidence in light of those beliefs. The cultural cognition framework has helped explain polarization over other issues of national concern, but it has never been tested in a private-law context.
Our survey results provide support for the influence of cultural cognition, as respondents with a relatively hierarchical worldview are more likely to believe strong patent protection is necessary to spur innovation. Additionally, having a hierarchical worldview and also viewing patent rights as property rights may be a better predictor of patent strength preferences than either alone. Taken together, our findings suggest that individuals' cultural preferences affect how they understand new information about the IP system. We discuss the implications of these results for fostering evidence-based IP policymaking, as well as for addressing polarization more broadly. For example, we suggest that empirical legal studies borrow from medical research by initiating a practice of advance registration of new projects (in which the planned methodology is publicly disclosed before data are gathered) to promote broader acceptance of the results.
This work follows Lisa's earlier essay on Cultural Cognition in IP. I think this is a fascinating and interesting area, and it certainly seems to be more salient as the stakes have increased. I am not without my own priors, but I do take pride in having my work cited by both sides of the debate.

The abstract doesn't do justice to the results - the paper is worth a read, with some interesting graphs as well. One of the more interesting findings is that political party has almost no correlation with views on copyright, but relatively strong correlation with views on patenting. This latter result makes me an odd duck, as I lean more (way, in some cases) liberal but have also leaned more pro-patent than many of my colleagues. I think there are reasons for that, but we don't need to debate them here.

In any event, there is a lot of work in this paper that the authors tie to cultural cognition - that is, motivated reasoning based on priors. I don't have an opinion on the measures they use to define it, but they seem reasonable enough and they follow a growing literature in this area. I think anyone interested in current IP debates (or cranky about them) could learn a few things from this study.

Tuesday, March 13, 2018

Which Patents Get Instituted During Inter Partes Review?

I recently attended PatCon 8 at the University of San Diego Law School. It was a great event, with lots of interesting papers. One paper I enjoyed from one of the (many) empirical sessions was Determinants of Patent Quality: Evidence from Inter Partes Review Proceedings by Brian Love (Santa Clara), Shawn Miller (Stanford), and Shawn Ambwani (Unified Patents). The paper is on SSRN and the abstract is here:
We study the determinants of patent “quality”—the likelihood that an issued patent can survive a post-grant validity challenge. We do so by taking advantage of two recent developments in the U.S. patent system. First, rather than relying on the relatively small and highly-selected set of patents scrutinized by courts, we study instead the larger and broader set of patents that have been subjected to inter partes review, a recently established administrative procedure for challenging the validity of issued patents. Second, in addition to characteristics observable on the face of challenged patents, we utilize datasets recently made available by the USPTO to gather detailed information about the prosecution and examination of studied patents. We find a significant relationship between validity and a number of characteristics of a patent and its owner, prosecutor, examiner, and prosecution history. For example, patents prosecuted by large law firms, pharmaceutical patents, and patents with more words per claim are significantly more likely to survive inter partes review. On the other hand, patents obtained by small entities, patents assigned to examiners with higher allowance rates, patents with more U.S. patent classes, and patents with higher reverse citation counts are less likely to survive review. Our results reveal a number of strategies that may help applicants, patent prosecutors, and USPTO management increase the quality of issued patents. Our findings also suggest that inter partes review is, as Congress intended, eliminating patents that appear to be of relatively low quality.
The study does a good job of identifying a variety of variables that do (and do not) correlate with whether the PTO institutes a review of patents. Some examples of interesting findings:
  • Pharma patents are less likely to be instituted
  • Solo/small firm prosecuted patents are more likely to be instituted
  • Patents with more words in claim 1 (i.e., narrower patents) are less likely to be instituted
  • Patents with more backward citations are more likely to be instituted (this is counterintuitive, but consistent with my own study of patent litigation)
  • Patent examiner characteristics affect likelihood of institution
There's a lot of good data here, and the authors did a lot of useful work to gather information that isn't simply on the face of the patent. The paper is worth a good read. My primary criticism is the one I voiced during the session at PatCon: there's something about framing this as a generalized patent-quality study that rankles me. (Warning, cranky old middle-age rambling ahead.) I get that whether a patent is valid or not is an important quality indicator, and I've made similar claims. I just think the authors have to spend a lot of time/space (it's an 84-page paper) trying to support that claim.

For example, they argue that IPRs are more complete compared to litigation, because litigation has selection effects both in what gets litigated and in settlement post-litigation. But IPRs suffer from the same problem. Notwithstanding some differences, there's a high degree of matching between IPRs and litigation, and many petitions settle both before and after institution.

Which leads to a second point: these are institutions - not final determinations. Now, they treat instituted patents whose claims are ultimately upheld as non-instituted, but with 40% of the cases still pending (and a declining institution rate as time goes on), we don't know how the incomplete and settled institutions look. More controversially, they count as low quality any patent where even a single claim is instituted. So, you could challenge 100 claims, have one instituted, and the patent falls into the "bad" pile.

Now, they present data that shows it is not quite so bad as this, but the point remains: with high settlements and partial invalidation, it's hard work to make a general claim about patent quality. To be fair, the authors point out all of these limitations in their draft. It is not as though they aren't aware of the criticism, and that's a good thing. I suppose, then, it's just a style difference. Regardless, this paper is worth checking out.

Friday, March 9, 2018

Sapna Kumar: The Fate Of "Innovation Nationalism" In The Age of Trump

One of the biggest pieces of news last week was that President Trump will be imposing tariffs on foreign steel and aluminum because, he tweets, IF YOU DON'T HAVE STEEL, YOU DON'T HAVE A COUNTRY. Innovation Nationalism, a timely new article by Professor Sapna Kumar at the University of Houston Law Center, explains the role that innovation and patent law play in the "global resurgence of nationalism" in the age of Trump. After reading her article, I think Trump should replace this tweet with: IF YOU DON'T HAVE PATENTS, YOU DON'T HAVE A COUNTRY.

Tuesday, March 6, 2018

The Quest to Patent Perpetual Motion

Those familiar with my work will know that I am a big fan of utility doctrine. I think it is underused and misunderstood. When I teach about operable utility, I use perpetual motion machines as the type of fantastic (and not in a good way) invention that will be rejected by the PTO as inoperable due to violating the laws of thermodynamics.

On my way to a conference last week, I watched a great documentary called Newman about one inventor's quest to patent a perpetual motion machine. The trailer is online, and you can stream the film pretty cheaply (I assume it will come to a streaming service at some point).
The movie is really well done, I think. The first two-thirds is a great survey of old footage, along with interviews of many people involved in the saga. The final third focuses on what became of Newman after his court case, leading to a surprising ending that colors how we should look at the first part of the movie. The two parts work really well together, and I think this movie should be of interest to anyone, not just patent geeks.

That said, I'd like to spend a bit of time on the patent aspects, namely utility doctrine. Wikipedia has a pretty detailed entry, with links to many of the relevant documents. The Federal Circuit case, Newman v. Quigg, as well as the district court case, also lay out many of the facts. The claim was extremely broad:
38. A device which increases the availability of usable electrical energy or usable motion, or both, from a given mass or masses by a device causing a controlled release of, or reaction to, the gyroscopic type energy particles making up or coming from the atoms of the mass or masses, which in turn, by any properly designed system, causes an energy output greater than the energy input.
Here are some thoughts:

First, the case continues what I believe to be a central confusion in utility. The initial rejection was not based on Section 101 ("new and useful") but on Section 112 (enablement to "make and use"). This is a problematic distinction. As the Board of Patent Appeals even noted: "We do not doubt that a worker in this art with appellant's specification before him could construct a motor ... as shown in Fig. 6 of the drawing." Well, then one could make and use it, even if it failed at its essential purpose. Now, there is an argument that the claim is so broad that Newman didn't enable every device claimed (as in the Incandescent Lamp case), but that's not what the board was describing. The Section 101 defense was not added until 1986, well into the district court proceeding. The district court later makes some actual 112 comments (that the description is metaphysical), but this is not the same as failing to achieve the claimed outcome. The Federal Circuit makes clear that 112 can support this type of rejection: "neither is the patent applicant relieved of the requirement of teaching how to achieve the claimed result, even if the theory of operation is not correctly explained or even understood." But this is not really enablement - it's operable utility! The 112 theory of utility is that you can't enable someone to use an invention if it's got no use. But just about every invention has some use. I write about this confusion in my article A Surprisingly Useful Requirement.

Second, this leads to another key point of the case. The failed claim was primarily due to the insistence on claiming perpetual motion. Had Newman claimed a novel motor, then the claim might have survived (though there was a 102/103 rejection somewhere in the history). One of the central themes of the documentary was that Newman needed this patent to commercialize his invention, so others could not steal the idea. He could not share it until it was protected. But he could have achieved this goal with a much narrower patent that did not claim perpetual motion. That he did not attempt a narrower patent is quite revealing, and foreshadows some of the interesting revelations from the end of the documentary.

Third, the special master in the case, William Schuyler, had been Commissioner of Patents. He recommended that the Court grant the patent, finding sufficient evidence to support the claims. It is surprising that he would have issued a report finding operable utility here, putting the Patent Office in the unenviable position of attacking its former chief.

Fourth, the case is an illustration of waiver. Newman claimed that the device only worked properly when ungrounded. More important, the output was measured in complicated ways (according to his own witnesses). Yet, Newman failed to indicate how measurement should be done when it counted: "Dr. Hebner [of National Bureau of Standards] then asked Newman directly where he intended that the power output be measured. His attorney advised Newman not to answer, and Newman and his coterie departed without further comment." The court finds a similar waiver with respect to whether the device should have been grounded, an apparently key requirement. These two waivers allowed the courts to credit the testing over Newman's later objections that the testing was improperly handled.

I'm sure I had briefly read Newman v. Quigg at some point in the past, and the case is cited as the seminal "no perpetual motion machine" case. Even so, I'm glad I watched the documentary to get a better picture of the times and hoopla that went with this, as well as what became of the man who claimed to defy the laws of thermodynamics.

Monday, March 5, 2018

Intellectual Property and Jobs

During the 2016 presidential race, an op-ed in the New York Times by Jacob S. Hacker, a professor of political science at Yale, and Paul Pierson, a professor of political science at the University of California, Berkeley, asserted that "blue states" that support Democratic candidates, like New York, California, and Massachusetts, are "generally doing better" in an economic sense than "red states" that support Republican candidates, like Mississippi, Kentucky, and (in some election cycles) Ohio. The gist of their argument is that conservatives cannot honestly claim that "red states dominate" on economic indicators like wealth, job growth, and education, when the research suggests the opposite. "If you compare averages," they write, "blue states are substantially richer (even adjusting for cost of living) and their residents are better educated."

I am not here to argue over whether blue states do better than red states economically. What I do want to point out is how professors Hacker and Pierson use intellectual property – and in particular patents – in making their argument. Companies in blue states, they write, "do more research and development and produce more patents[]" than red states. Indeed, "few of the cities that do the most research or advanced manufacturing or that produce the most patents are in red states." How, they ask rhetorically, can conservatives say red states are doing better when most patents are being generated in California?*

Hacker and Pierson's reasoning, which is quite common, goes like this. Patents are an indicator of innovation. Innovation is linked to economic prosperity. Therefore, patents – maybe even all forms of intellectual property – are linked to economic prosperity.

In my new paper, Technological Un/employment, I cast doubt on the connection between intellectual property and one important indicator of economic prosperity: employment.

This post is based on a talk I gave at the 2018 Works-In-Progress Intellectual Property (WIPIP) Colloquium at Case Western Reserve University School of Law on Saturday, February 17.

Saturday, March 3, 2018

PatCon8 at San Diego

Yesterday and today, the University of San Diego School of Law hosted the eighth annual Patent Conference—PatCon8—largely organized by Ted Sichelman. Schedule and participants are here. For those who missed it—or who were at different concurrent sessions—here's a recap of my live Tweets from the conference. (For those who receive Written Description posts by email: This will look much better—with pictures and parent tweets—if you visit the website version.)

Friday, March 2, 2018

Matteo Dragoni on the Effect of the European Patent Convention

Guest post by Matteo Dragoni, Stanford TTLF Fellow

Recent posts by both Michael Risch and Lisa Ouellette discussed the recent article The Impact of International Patent Systems: Evidence from Accession to the European Patent Convention, by economists Bronwyn Hall and Christian Helmers. Based on my experience with the European patent system, I have some additional thoughts on the article, which I'm grateful for the opportunity to share.

First, although Risch was surprised that residents of states joining the EPC continued to file in their home state in addition to filing in the EPO, this practice is quite common (and less unreasonable than it might seem at first glance) for at least three reasons:
  1. The national filing is often used as a priority application to file a European patent (via the PCT route or not). This gives one extra year of time (to gain new investments and to postpone expenses) and protection (to reach 21 years instead of 20) compared to merely starting with an EPO application.
  2. Some national patent offices have the same (or very similar) patenting standards as the EPO but a less strict application of those standards de facto when a patent is examined. Therefore, it is sometimes easier to obtain a national patent than a European patent.
  3. Relatedly, the different application of patentability standards means that the national patent may be broader than the eventual European patent. The validity/enforceability of these almost duplicate patents is debatable and represents a complex issue, but a broader national patent is often prima facie enforceable and a valid ground to obtain (strong) interim measures.

Wednesday, February 28, 2018

How Difficult is it to Judge Patentable Subject Matter?

I've long argued that the Supreme Court's patentable subject matter jurisprudence is inherently uncertain, and that it is therefore nearly impossible to determine what is patentable. But this is only theory (a well-grounded one, I think, but still). A clever law student has now put the question to the test. Jason Reinecke (Stanford 3L) got IRB approval and conducted a survey in which he asked patent practitioners whether patents would withstand a subject matter challenge. A draft is on SSRN, and the abstract is here:
In four cases handed down between 2010 and 2014, the Supreme Court articulated a new two-step patent eligibility test that drastically reduced the scope of patent protection for software inventions. Scholars have described the test as “impossible to administer in a coherent, consistent way,” “a foggy standard,” “too philosophical and policy based to be administrable,” a “crisis of confusion,” “rife with indeterminacy,” and one that “forces lower courts to engage in mental gymnastics.”
This Article provides the first empirical test of these assertions. In particular, 231 patent attorneys predicted how courts would rule on the subject matter eligibility of litigated software patent claims, and the results were compared with the actual district court rulings. Among other findings, the results suggest that while the test is certainly not a beacon of absolute clarity, it is also not as amorphous as many commentators have suggested.
This was an ambitious study, and getting 231 participants is commendable. As discussed below, the results are interesting, and there are a lot of great takeaways. Though I think those takeaways depend on your goals for the patent system, this is a useful survey no matter what your priors.

Tuesday, February 27, 2018

Tribal Sovereign Immunity and Patent Law, Part II: Lessons in Shoddy Reasoning from the PTAB

Guest post by Professor Greg Ablavsky, Stanford Law School

Per Lisa's request, I have returned to offer some thoughts on the PTAB's tribal sovereign immunity decision (you can find my earlier post here and some additional musings coauthored with Lisa here). I had thought I had retired my role of masquerading as an (entirely unqualified) intellectual property lawyer, but, as the PTAB judges clearly haven't relinquished their pretensions to be experts in federal Indian law, here we are.

The upshot is that I find the PTAB's decision highly unpersuasive, for the reasons that follow, and I hope to convince you that, however you feel about the result, the PTAB's purported rationales should give pause. I should stress at the outset that I have no expertise to assess the PTAB's conclusion that Allergan is the "true owner" of the patent, which may well be correct. But the fact that this conclusion could have served as an entirely independent basis for the judgment makes the slipshod reasoning in the first part of the decision on tribal immunity all the more egregious. Here are some examples—I hope you'll forgive the dive into Indian law and immunity doctrine:
1. Supreme Court Precedent: The tenor of the PTAB's decision is clear from its quotation of isolated dicta from Kiowa, where, in the process of considering off-reservation tribal sovereign immunity, the Supreme Court expressed some sympathy for the viewpoint of the dissenting Justices: "There are reasons to doubt the wisdom of perpetuating the [tribal immunity] doctrine." But the PTAB omits the key language that came at the end of the Court's discussion of this issue: "[W]e defer to the role Congress may wish to exercise in this important judgment," leaving the decision as to whether to abrogate tribal sovereign immunity—which Congress may do under its "plenary power"—to the legislature. In short, although you wouldn't know it from the PTAB's cherry-picked quotations, Kiowa actually determined that the right approach in the face of uncertainty was to uphold the doctrine of tribal sovereign immunity.
Nor was the 20-year-old Kiowa case the last word on this question. Astonishingly, the PTAB's decision never discusses the facts, holding, or reasoning of Bay Mills, even though the Court decided the case, unquestionably its most important recent statement on tribal sovereign immunity, in 2014. There, the Court rejected another effort to invalidate tribal sovereign immunity, stating that "it is fundamentally Congress's job, not ours, to determine whether or how to limit tribal immunity." This rule, the Court held, applied even more forcefully after Congress had had twenty years to revisit the holding in Kiowa and declined to eliminate tribal sovereign immunity.
Arguably, the PTAB should give at least as much deference to congressional determinations as the Supreme Court does, especially given the existence of pending legislation abrogating tribal immunity in this context. Or, setting the bar even lower, one would hope that the PTAB would at some point grapple with recent Supreme Court decisions directly on point. But it doesn't—in part because, as I'll discuss now, it mischaracterizes the question as one of first impression.

Thursday, February 22, 2018

Contigiani, Barankay & Hsu on the Innovation Costs of Inevitable Disclosure Doctrine

For those looking for more trade secret empirics after Michael's post on Tuesday: Researchers at the Wharton School—Andrea Contigiani, Iwan Barankay, and David Hsu—have an interesting empirical study of the inevitable disclosure doctrine in trade secret law: Trade Secrets and Innovation: Evidence from the 'Inevitable Disclosure' Doctrine. This controversial doctrine allows employers to prevent former employees from taking a new job that will "inevitably" require them to use trade secrets. The doctrine has been rejected in California and many other states, and the federal Defend Trade Secrets Act of 2016 allows states to make this public policy choice. This new paper from Contigiani et al. provides some support for the California approach. Here is the abstract:
Does heightened employer-friendly trade secrecy protection help or hinder innovation? By examining U.S. state-level legal adoption of a doctrine allowing employers to curtail inventor mobility if the employee would "inevitably disclose" trade secrets, we investigate the impact of a shifting trade secrecy regime on individual-level patenting outcomes. Using a difference-in-differences design taking unaffected U.S. inventors as the comparison group, we find strengthening employer-friendly trade secrecy adversely affects innovation. We then investigate why. We do not find empirical support for diminished idea recombination from suppressed inventor mobility as the operative mechanism. While shifting intellectual property protection away from patenting into trade secrecy appears to be at work, our results are consistent with reduced individual-level incentives to signaling quality to the external labor market.
By "innovation" they mean citation-weighted patent counts, and this paper should be read with all of the usual caution and caveats for causal empirical studies. But I haven't seen a paper that has attempted this particular empirical approach before, so I thought it was interesting and worth a read by trade secrets scholars.
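For readers less familiar with the method, the difference-in-differences logic behind studies like this one can be sketched with toy numbers. These figures are entirely hypothetical (not the authors' data) and collapse their design to its simplest two-group, two-period form:

```python
# Illustrative difference-in-differences estimate with made-up numbers.
# "Treated" inventors are in states adopting the inevitable disclosure
# doctrine; "control" inventors are in unaffected states.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical citation-weighted patent counts per inventor-year.
treated_pre  = [5.0, 6.0, 5.5]   # adopting states, before adoption
treated_post = [4.0, 4.5, 4.1]   # adopting states, after adoption
control_pre  = [5.2, 5.8, 5.6]   # unaffected states, same periods
control_post = [5.4, 5.9, 5.5]

# DiD: the change in the treated group minus the change in the control
# group, which nets out any time trend common to both groups.
did = (mean(treated_post) - mean(treated_pre)) - \
      (mean(control_post) - mean(control_pre))

# A negative estimate is consistent with the paper's finding that the
# doctrine is associated with reduced innovation output.
print(round(did, 2))
```

The actual paper uses regression with fixed effects and many control variables rather than this bare comparison of means, but the identifying intuition is the same: unaffected inventors stand in for what affected inventors would have done absent the legal change.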

Tuesday, February 20, 2018

Data on the first year of the Defend Trade Secrets Act

In preparing for the Evil Twin Debate on the DTSA, David Levine (Elon) and Chris Seaman (Washington & Lee) were kind enough to share a draft of their empirical study of cases arising under the first year of the Defend Trade Secrets Act. Now that the article is forthcoming in Wake Forest Law Review and on SSRN, it only makes sense to share their latest draft. Here is the abstract:
This article represents the first comprehensive empirical study of the Defend Trade Secrets Act (“DTSA”), the law enacted by Congress in 2016 that created a federal civil cause of action for trade secret misappropriation. The DTSA represents the most significant expansion of federal involvement in intellectual property law in at least 30 years. In this study, we examine publicly-available docket information and pleadings to assess how private litigants have been utilizing the DTSA. Based upon an original dataset of nearly 500 newly-filed DTSA cases in federal court, we analyze whether the law is beginning to meet its sponsors’ stated goals of creating more robust and efficient litigation vehicles for trade secret misappropriation victims, thereby helping protect valuable American intellectual property assets.
We find that, similar to state trade secrets law, the paradigm misappropriation scenario under the DTSA involves a former employee who absconds with alleged trade secrets to a competitor. Other results, however, raise questions about the new law’s ability to effectively address modern cyberespionage threats, particularly from foreign actors, as well as the purpose (or lack thereof) of trade secret law more broadly. We conclude by discussing our data’s implications for trade secret law and litigation, as well as commenting on the DTSA’s potential impact on the broader issues of cybersecurity and information flow within our innovation ecosystem.
I found this to be an interesting, thorough, and insightful article. I think that the takeaways from the data will differ based on one's views of the DTSA. I suspect my view of the data is different from the view that Levine & Seaman have. Regardless, having the data to work with is immensely useful.

That said, there's still work to be done. The next step for anyone studying this area will be the next layer - looking at the most difficult concerns. For example, one of the most concerning aspects was seizures; it would be helpful to know a) how often they are sought, b) how often they are granted, and c) what circumstances led to granting. This article gives a good template for how to proceed with followup projects, and I am hopeful that Levine & Seaman keep it going!

Wednesday, February 14, 2018

Hall & Helmers on the European Patent Convention's Impact on Patent Filings and Foreign Direct Investment

The Impact of International Patent Systems: Evidence from Accession to the European Patent Convention, which Michael Risch posted about yesterday, caught my eye as well. As Michael explained, economists Bronwyn Hall and Christian Helmers examined the impact on patent filings and foreign direct investment (FDI) for fourteen countries that joined the European Patent Convention (EPC) between 2000 and 2008. (The countries are Bulgaria, Czech Republic, Estonia, Croatia, Hungary, Iceland, Lithuania, Latvia, Norway, Poland, Romania, Slovenia, Slovakia, and Turkey.)

They find only a small change in patenting by a country's domestic entities. Foreign entities, however, rapidly switched to filing at the EPO, causing their filings in national offices to drop by over 90%. This figure nicely illustrates the effect:


There was not a similar change to FDI: "Despite the clear impact on patent filings, using firm-level data on FDI, we find only very weak evidence that non-residents changed their investment in accession countries following accession to the EPC."

Hall and Helmers argue that these results show "the differential effect of accession to a regional patent system on residents and non-residents of the mostly smaller, less developed accession countries in our sample. Non-residents certainly benefit from the expansion of the regional patent system given their strong reaction, but the net effect on residents is a lot less clear." In other words, joining the EPO creates costs for these countries (because there are more foreign patents, with the resulting deadweight loss) without a strong corresponding gain in domestic innovation or FDI.

In his post yesterday, Michael said it wasn't clear to him why we would have expected stronger IP rights to increase FDI, but this has been one of the main arguments for why developing countries might benefit from joining patent treaties such as TRIPS. For just a few of many articles laying out these arguments—and noting the weak evidence base behind them—see the seminal works by Edith Penrose in 1951 and 1973 or the 1998 Duke symposium articles by Carlos Braga & Carsten Fink and by Keith Maskus.

Studying the impact of changes in patent law through cross-country studies is incredibly difficult (as I have previously explained), but I thought this was a nice empirical design with appropriately nuanced conclusions, and it is certainly worth a download for anyone interested in the impact of the internationalization of the patent system.

Tuesday, February 13, 2018

How Does Country Consolidation Affect Patenting?

Just a short entry today about an interesting new NBER paper by Bronwyn Hall (Berkeley Economics) and Christian Helmers (Santa Clara Business) (behind a paywall, sorry, though most academics can download for free). The question is what happened to patenting activity when the ability to consolidate patenting in a single super-entity came into play. Hall and Helmers consider this question in the context of the European Patent Convention, which allowed inventors to file with a single entity (the EPO) that granted patents effective in any of several member countries.

Here's the abstract of what they found:
We analyze the impact of accession to the regional patent system established by the European Patent Convention (EPC) on 14 countries that acceded between 2000 and 2008. We look at changes in patenting behavior by domestic and foreign applicants at the national patent offices and the European Patent Office (EPO). Our findings suggest a strong change in patent filing behavior among foreigners seeking patent protection in the accession states, substituting EPO patents for domestic patents immediately. However, there is little evidence that accession increased FDI by patenting foreign companies in accession countries. Moreover, there is no discernible reaction among domestic entities in terms of domestic filings, although we do find some evidence that applicants in accession states increased their propensity to file patents with the EPO post-accession. Inventor-level information suggests that the underlying inventions originate in the accession states.
Let's unpack this a little bit. First, applicants in EPC accession countries continued to file in their home countries and at the EPO at the same rate. It's unclear why - perhaps they wanted the extra chance at protection, or perhaps it was for vanity.

Second, in EPC countries, the rate of invention (measured by patent filings) went up, but only a small amount. But because the rates were pretty low, even a small change was a real change.

Third, foreign patent filing shifted to the EPO almost wholesale. Whereas EPC filers chose both, foreign filers seemed to appreciate the ability to get one patent to cover all countries. The implication I take from this is that EPC filers had some strong reason for that national coverage rather than some worry about overlapping protection if one patent were invalidated.

Finally, the foreign filings did not lead to much increased foreign direct investment. In other words, the EPC appears to have allowed for cost savings for foreigners, cost increases for locals (by their choice, mind you), and not much else. From the discussion in the article, one takeaway from this is that strengthening of IP rights did not necessarily increase foreign investment. It's not clear to me why we would have expected this. While strengthening IP in the way that the EPC did should make it cheaper to obtain protection, it is unclear why companies would move R&D that they already have underway to take advantage of it. After all, they are already happy with the R&D they have; the continued national filings of EPC firms imply this. The cost efficiencies alone should be enough (note also that a single source may be cheaper IP, but it may not be stronger - it may be easier to invalidate a single patent in multiple countries than multiple patents in multiple countries).

That said, if formation of the EPC were grounded on claims that, if only it were easier to get broad protection, everyone would start doing more R&D in, say, Portugal, then those claims were misguided.

Thursday, February 8, 2018

Kevin Soter on Causation in Reverse Payment Antitrust Claims

Readers of this blog are likely familiar with the Supreme Court's 2013 FTC v. Actavis decision, which concluded that certain "reverse payment" pharmaceutical patent litigation settlements could violate the antitrust laws and that "it is normally not necessary to litigate patent validity to answer the antitrust question." Actavis had plenty of academic input before it was decided and has continued to spark vigorous scholarly debates, such as an article by Edlin, Hemphill, Hovenkamp & Shapiro, a response by Harris, Murphy, Willig & Wright, and a reply from the original group.

But until I read Kevin Soter's forthcoming Stanford Law Review Note, Causation in Reverse Payment Antitrust Claims, I wasn't aware of the developing circuit split over reverse-payment antitrust suits brought by private individuals rather than the government.

Unlike the government, private individuals must establish "antitrust standing," including the need to show causation of injury-in-fact, which limits enforcement to groups like drug purchasers, consumer groups, or insurers that might actually be harmed by the settlement. Under the approach to causation adopted by the Fifth and Third Circuits, "plaintiffs must prove precisely how, absent the illegal settlement agreement, generic entry would have happened earlier," which can require litigation of patent invalidity or noninfringement. Other courts—including the California Supreme Court, three district courts, and perhaps the Second Circuit—use the same inference as in Actavis, "reasoning that a plaintiff who has shown an antitrust violation based on a reverse payment settlement agreement has necessarily shown an agreement to delay generic entry beyond the otherwise expected date of generic entry."

Soter sides with the latter approach, arguing that the causation inquiry for private plaintiffs is no different from the inquiry over anticompetitive effects at issue in Actavis, for which litigation of patent validity is unnecessary. And he notes that a burden-shifting approach could address lingering concerns by allowing defendants to rebut the inference of causation.

The best part of teaching at Stanford is having extraordinarily talented students who can produce works like this, and I thought it was worth highlighting for anyone who has been following the pharmaceutical patent litigation antitrust debates.

Tuesday, February 6, 2018

Can You Copyright a Pose?

An interesting case caught my eye this week, and piqued my interest enough to explore further. In Folkens v. Wyland Worldwide the Ninth Circuit considered whether Wyland's depiction of crossing dolphins copied from Folkens's original. Below is a reproduction from the complaint, but it doesn't really do them justice. Better versions of Folkens (pen and ink) and Wyland (color) highlight the similarities and differences. [UPDATED to include the closely related Rentmeester v. Nike]


Folkens (left) v. Wyland (right)

The differences between these two are relatively clear: coloring, "lighting," background, and so forth. But there are undeniable similarities, and the primary similarity is the dolphin "pose," which is strikingly similar. It is this similarity (and the Ninth Circuit's treatment of it) that I'd like to explore. Nothing in this analysis, however, should be taken to mean that I think Folkens should necessarily win here. My concern is only with how the court got there, as I discuss below.

Friday, February 2, 2018

Beebe & Hemphill: Superstrong Trademarks Should Receive Less Protection

I have taught the multifactor test for trademark infringement four times now (using the 9th Cir. Sleekcraft test), and each time, some student has questioned which way the "strength of the mark" factor should cut. As a matter of current doctrine, stronger marks receive a broader scope of protection. But smart Stanford Law students who are not yet indoctrinated with longstanding trademark practices ask: in practice, isn't there less likely to be confusion with a strong mark?

In their new article, The Scope of Strong Marks: Should Trademark Law Protect the Strong More than the Weak?, Barton Beebe and Scott Hemphill expand on this intuition: "We argue that as a mark achieves very high levels of strength, the relation between strength and confusion turns negative. The very strength of such a superstrong mark operates to ensure that consumers will not mistake other marks for it. Thus, the scope of protection for such marks ought to be narrower compared to merely strong marks."

The doctrinal relationship between trademark strength and protection was not always as clear as it is today. For example, Beebe and Hemphill point to a 1988 decision by Judge Rich of the Federal Circuit: "The fame of a mark cuts both ways with respect to likelihood of confusion. The better known it is, the more readily the public becomes aware of even a small difference." This more nuanced approach to consumer confusion also finds support in many foreign trademark cases.

To be sure, Beebe and Hemphill are really making an empirical claim about consumer perceptions, and the evidence base is quite limited (though they cite some related studies at notes 102-03). But as they note, the current doctrine relies "on a jumble of untested empirical assertions," and their argument makes a good deal of intuitive sense. At the very least, this article should spur trademark scholars, practitioners, and judges to reexamine their understanding of the relationship between strength and protection. And the next time one of my students asks about this, I'm glad I'll be able to send Beebe and Hemphill's work their way.

Tuesday, January 30, 2018

Saying Goodbye to Chief Wahoo?

A couple years ago, my youngest son was “drafted” onto the Indians Little League team. It was cringeworthy. The name was bad enough (and I’m thankful my alma mater had the good sense to abandon it nearly 50 years ago), but right there on the hat was Chief Wahoo. Needless to say, among the many baseball caps we have in our family, that one hasn’t seen the light of day since the season ended.

Yesterday, the Cleveland team and Major League Baseball announced that they are retiring the logo from uniforms in a year (they had already removed it from in and around the stadium apparently). It’s unclear why it cannot be done sooner, but I’ll give them the benefit of the doubt that manufacturing for next season is already underway and cannot be changed. Good riddance.

Buried in this news is an interesting IP theory and policy tidbit worth discussion. The team is not abandoning the logo altogether. To maintain trademark rights, it will continue to sell Chief Wahoo merchandise in the Cleveland area. That’s right, trademark law is forcing the team to keep selling merchandise with an offensive logo that it claims to no longer be using.

As I discuss below, this is an area where I expect folks will be torn.

Tuesday, January 23, 2018

Evidence of Peer Group Influence on Patent Examiners

Michael Frakes and Melissa Wasserman have gotten a lot of mileage out of their micro data set on patent examiner behavior over time. Prior work includes examination of grant incentives, agency funding, time availability, and user fees.

Their latest paper tackles peer group influence - that is, the effect that both peers at the same level and supervisory examiners have on grant rates. The draft is on SSRN and the abstract is here:
Using application-level data from the Patent Office from 2001 to 2012, merged with personnel data on patent examiners, we explore the extent to which the key decision of examiners — whether to allow a patent — is shaped by the granting styles of her surrounding peers. Taking a number of methodological approaches to dealing with the common obstacles facing peer-effects investigations, we document strong evidence of peer influence. For instance, in the face of a one standard-deviation increase in the grant rate of her peer group, an examiner in her first two years at the Patent Office will experience a 0.15 standard-deviation increase in her own grant rate. Moreover, we document a number of markers suggesting that such influences arise, at least in part, through knowledge spillovers among examiners, as distinct from peer-pressure mechanisms. We even find evidence that some amount of these spillovers may reflect knowledge flows regarding specific pieces of prior art that bear on the patentability of the applications in question, as opposed to just knowledge flows regarding general examination styles. Finally, we find evidence suggesting that the magnitude of these peer examiner influences are just as strong, or stronger, than the influence of the examination styles of supervisors.
I'll admit that I was skeptical upon reading the abstract. After all, I would expect that grant rates would rise and fall together in any given art unit, based on either technology or the trends of the day. Indeed, the effect is not so large as to rule out some other influences.

But by the end, I was convinced. Here are a couple of the findings that were most persuasive (in addition to the fact that I think they specified fixed effects nicely):
  1. The effect is more present during the early years, and tends to get "locked in" with experience
  2. The effect is more present with peers than with supervisory examiners
  3. The effect is more present for examiners who do not telecommute - this, to me, was the best robustness check
  4. Examiners who do not telecommute tended to behave similarly on obviousness (v. novelty) and also to cite the same prior art (which was not cited as frequently by those who telecommute)
This paper's framing is interesting. I read it, of course, because it is a patent paper, but Frakes & Wasserman open with a more generalized pitch that this is about employment peer effects. I suppose it is about both, really, and it is worth taking a look at if you are interested in either area.

Monday, January 22, 2018

What happened in patent law in the past year?

Last Thursday I gave a 25-minute patent law update to judges and practitioners at the Northern District Practice Program Patent Law Symposium, and I thought blog readers might be interested in my recap of highlights from the past year:

Patent Case Filings and Procedure: Venue, PTAB, and Stays

Lex Machina reports that there were 4057 cases filed in 2017, down 10% from the 4529 in 2016. The biggest procedural change was to venue. As I have explained, in its May 2017 decision in TC Heartland, the Supreme Court held that for purposes of the patent venue statute, a corporation only "resides" in its state of incorporation. The Federal Circuit has since held that this was a change in law, so the venue defense was not "available" under FRCP 12(g)(2), allowing district courts in pending cases to consider venue arguments that were not previously raised by defendants. And the Federal Circuit has offered guidance on the other possibility for proper venue—"where the defendant has committed acts of infringement and has a regular and established place of business"—saying that this requires (1) a fixed, physical presence that (2) is regular and established (not transient) and that (3) is a place of the defendant (not merely of an employee).

TC Heartland is likely responsible for the decline in cases filed in E.D. Tex. and the uptick in districts like D. Del. and N.D. Cal., though in neither of the latter have filings reached pre-2015 levels:

Saturday, January 20, 2018

Crowdsourced Bibliography on IP and Distributive Justice

Professor Estelle Derclaye recently sparked a terrific email thread among IP professors about articles tackling IP from a distributive justice perspective. Here is a lightly edited list of the suggested works, roughly in chronological order, with links (open access, where possible) and, for somewhat arbitrarily selected works, short quotations or descriptions. If you have additions or corrections, feel free to email me or add them to the comments.

Tuesday, January 16, 2018

A New Trade Secrets Survey of In-House Counsel

It feels like all trade secrets all the time these days, but the hits keep coming. I've got some patent scholarship queued up, but this new survey caught my eye. David Almeling and Darin Snyder have provided some quality empirical analysis of trade secret cases in the past. Their two articles (written with others) cover both state and federal courts, and provide solid empirical support for the proposition that most trade secret cases involve ex-employees rather than strangers.

They have now extended this work with a new study (co-authored with Carolyn Appel) that surveys in-house counsel about trade secret usage.  The study is here, though it is behind the Law360 paywall, which is unfortunate. It is available on Lexis, I believe, or through a free preview.

The authors surveyed 81 in-house counsel from a variety of industries; however, they acknowledge that their sample is self-selected, which means that those who care most about trade secrets may have answered. The survey did draw responses beyond its target (another 27 respondents were not in-house counsel), which lends some support to the idea that answers were not driven simply by those who cared the most. On the other hand, most respondents worked for large, multi-state companies, which makes one wonder why more in-house counsel for smaller companies did not participate and whether their answers would have been any different.

In my prior post on the DTSA and in the Evil Twin debate, I asked why there is a sudden push for the DTSA. This survey gives us some answers about the political economy - 75% of respondents said that trade secrets had grown more at risk in the last ten years, and 50% said they were at much greater risk. This fear may or may not be well grounded, but if this is the perception, it will certainly drive policy. Relatedly, respondents reported that patent law changes were not driving use of trade secrets -- only 30% reported using trade secrets instead of patenting. Most, I suspect, want more of both.

A whopping 70% reported that their company had been a victim of trade secret misappropriation. Of those, employees or ex-employees were the perceived culprits 90% of the time, confirming (again) that most misappropriation is not stranger misappropriation.

The most surprising finding of the survey, in my view, came from a question about whether the DTSA should preempt the UTSA. Non-preemption allows both to stand, which not only can create conflict but also allows plaintiffs to choose the most favorable law. In my discussions with people after the debate, some thought non-preemption was the part of the DTSA that most showed a desire to expand trade secret law's reach.

So, the surprising result was a nearly even three-way split between supporting preemption, opposing preemption, and not caring one way or the other. While academics seem to think that lack of preemption is a big deal, this self-selected group of in-house counsel seem to not care one way or the other. This finding could actually drive policy choices in the future.

I'll conclude with that brief recap - while the article is short, there is more to see, about the types of secrets, the role in innovation, and the cost of misappropriation. I will end on this note, however: the costs borne by most companies from misappropriation were investigation and litigation. This is to be expected, as everyone investigates and litigation costs are high. But the other costs of misappropriation were spread out among price erosion, loss of sales, increased costs of protection (my own personal theory), and even none. I think this shows two things. First, when messaging in this area is not consistent, it may be that companies are perceiving the problem in their own ways. Second, it may be that enforcement efforts wind up dwarfing the actual harm of misappropriation in some cases.

Monday, January 15, 2018

Is the Defend Trade Secrets Act Defensible? The Movie

As noted a couple weeks ago, Orly Lobel (San Diego) and I debated the DTSA at the AALS Conference. As promised, I'm posting video of the debate here.


Wednesday, January 10, 2018

The Powerful Effects of Copyright Reversion

A common type of client I've seen in practice is the founder who sold IP (or company) to another, only to see the creation buried for one reason or another. The client usually wanted the rights back, so as to see the work grow. We invariably had to give the bad news: there was little to do but negotiate for a return (which we sometimes achieved). [Practice tip: build reversion rights into the sales contracts, though the buyer often chokes on such language].

Of course, we explored copyright reversion, which allows for reversion after 35 years for post-1978 works. But in the software area, 35 years might as well be forever. Few software products last 35 years (is Linux a work made for hire? Uh oh).

Paul Heald (Illinois) has done some really useful work in this area. His prior work shows a U-shaped curve of books available on Amazon: recent books are available, and books in the public domain (published before the 1920s) are available, but books still under copyright that are no longer recent are not available, even some published as few as 20 years ago.

One theme of this work is obviously that copyright terms should be shorter, and that may well be true. But one of my initial takes was that the publishers are to blame - they are sitting on books that authors may well want to publish. Reversion rights are a way to handle this - authors can take over those books and get them published if they want.

Paul Heald again looks at this market in a new draft article, Copyright Reversion to Authors (and the Rosetta Effect): An Empirical Study of Reappearing Books (located here on SSRN). Here is the abstract:
Copyright keeps out-of-print books unavailable to the public, and commentators speculate that statutes transferring rights back to authors would provide incentives for the republication of books from unexploited back catalogs. This study compares the availability of books whose copyrights are eligible for statutory reversion under US law with books whose copyrights are still exercised by the original publisher. It finds that 17 USC § 203, which permits reversion to authors in year 35 after publication, and 17 USC § 304, which permits reversion 56 years after publication, significantly increase in-print status for important classes of books. Several reasons are offered as to why the § 203 effect seems stronger. The 2002 decision in Random House v. Rosetta Books, which worked a one-time de facto reversion of ebook rights to authors, has an even greater effect on in-print status than the statutory schemes.
Heald gathers three different data sets: bestselling authors, bestselling books, and a general population of reviewed books. He looks at whether the books were available, who published them (a big publisher v. an independent), and in what format (paper or ebook). In the rest of the post, I'll briefly discuss the findings and some thoughts.

Thursday, January 4, 2018

Extraterritorial Reach Of The Defend Trade Secrets Act: How Far Did Congress Go?


In the aftermath of the Defend Trade Secrets Act (DTSA), a little-discussed, but potentially quite significant, issue is whether civil trade secret plaintiffs can now use federal trade secret law to reach misappropriation that occurs in other countries pursuant to DTSA Section 1837. See 18 U.S.C. § 1837. This post is a follow-up to my prior post on presentations at last spring's conference "The New Era of Trade Secret Law: The DTSA and other Developments," hosted by the IP Institute at Mitchell/Hamline School of Law. Professor Rochelle Dreyfuss spoke at the conference about her work-in-progress with Professor Linda Silberman, discussed herein.

Tuesday, January 2, 2018

Defending the DTSA

I'm excited to be a participant in the annual Evil Twin debate, coming this Friday in San Diego in connection with the AALS conference. The debate is sponsored by the University of Richmond Law School and will take place at 4:30 at the Thomas Jefferson Law School.

The topic this year is: "Is the Defend Trade Secrets Act Defensible?" I'm taking the "yes" side. My Evil Twin is Orly Lobel, the Don Weckstein Professor of Labor and Employment Law at the University of San Diego Law School.

As a prelude, and to give her a head start, I thought I would share a recent essay by Professor Lobel: The DTSA and the New Secrecy Ecology, available on SSRN. The abstract is here:
The Defend Trade Secrets Act (“DTSA”), which passed in May 2016, amends the Economic Espionage Act (“EEA”), a 1996 federal statute that criminalizes trade secret misappropriation. The EEA has been amended several times in the past five years to increase penalties for violations and expand the available causes of action, the definition of a trade secret, and the types of behaviors that are deemed illegal. The creation of a federal civil cause of action is a further expansion of the secrecy ecology, and the DTSA includes several provisions that broaden the reach of trade secrets and their protection. This article raises questions about the expansive trajectory of trade secret law and its relationship to entrepreneurship, information flow, and job mobility. Lobel argues that an ecosystem that supports innovation must balance secrecy with a culture of openness and exchanges of knowledge. This symposium article is based on Professor Orly Lobel’s keynote presentation at the March 10, 2017 symposium entitled “Implementing and Interpreting the Defend Trade Secrets Act of 2016,” hosted by the University of Missouri School of Law’s Center for Intellectual Property and Entrepreneurship and the School’s Inaugural Issue of the Business, Entrepreneurship & Tax Law Review.
The essay lays out a good background of the DTSA and points to some of its key drawbacks. It's a useful read for anyone looking for a relatively balanced synopsis of concerns about the DTSA and some experience with it.