Wednesday, February 28, 2018

How Difficult is it to Judge Patentable Subject Matter?

I've long argued that the Supreme Court's patentable subject matter jurisprudence is inherently uncertain, and that it is therefore nearly impossible to determine what is patentable. But this is only theory (a well-grounded one, I think, but still). A clever law student has now put the question to the test. Jason Reinecke (Stanford 3L) got IRB approval and conducted a survey in which he asked patent practitioners whether patents would withstand a subject matter challenge. A draft is on SSRN, and the abstract is here:
In four cases handed down between 2010 and 2014, the Supreme Court articulated a new two-step patent eligibility test that drastically reduced the scope of patent protection for software inventions. Scholars have described the test as “impossible to administer in a coherent, consistent way,” “a foggy standard,” “too philosophical and policy based to be administrable,” a “crisis of confusion,” “rife with indeterminacy,” and one that “forces lower courts to engage in mental gymnastics.”
This Article provides the first empirical test of these assertions. In particular, 231 patent attorneys predicted how courts would rule on the subject matter eligibility of litigated software patent claims, and the results were compared with the actual district court rulings. Among other findings, the results suggest that while the test is certainly not a beacon of absolute clarity, it is also not as amorphous as many commentators have suggested.
This was an ambitious study, and getting 231 participants is commendable. As discussed below, the results are interesting, and there are a lot of great takeaways from it. Though I think the takeaways depend on your goals for the system, this is a useful survey no matter what your priors.

I believe that I took a version of this survey sent to professors, but I think my results were not included, as I was not in the target group described in the draft. I can definitively say that it was hard. Some of the rulings were pretty easy to predict, but others were not. I think this is reflected a bit in the results: where the court found a claim invalid, survey takers were more likely to guess correctly. This is completely consistent with the view that Nothing is Patentable.

Instead, it was the edge cases that had the most uncertainty. This can be a problem. First, it means that it was difficult for attorneys to predict whether something would be held valid. Second, it is unclear whether district courts will always get the edge cases correct. Third, the number of cases on the edge grows as the Federal Circuit reverses some of the district court invalidity rulings (and affirms other validity rulings). This would have the effect of increasing, rather than decreasing, uncertainty.

In other words, once more than nothing is patentable, it becomes harder to sort. Indeed, this study starts with cases decided after the Federal Circuit's ruling in McRO affirming validity. Had I taken the survey before McRO, I could have gotten it 100% correct: just say everything is invalid. That's actually an overstatement. According to the Bilski Blog, just before McRO, 66% of challenged claims were found invalid. Between that time and April 2017 (when this study cuts off), 50% of claims were found invalid. As more claims survive, picking the valid ones gets harder. While the draft discusses some of these nuances, a slightly more robust discussion of the statistical issues might be useful.
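To make the base-rate point concrete, here is a minimal sketch in Python (my illustration, not anything from the paper; the 66% and 50% figures are the Bilski Blog numbers above, and everything else is just arithmetic):

# Expected accuracy of two naive strategies at a given invalidity base rate p.
def always_invalid(p):
    # Guessing "invalid" on every claim is correct exactly p of the time.
    return p

def base_rate_guessing(p):
    # Randomly guessing "invalid" with probability p (matching the base rate)
    # is correct when both guess and outcome are invalid (p * p) or both are
    # valid ((1 - p) * (1 - p)).
    return p * p + (1 - p) * (1 - p)

for p in (0.66, 0.50):  # pre-McRO rate, then the rate during the study window
    print(p, always_invalid(p), round(base_rate_guessing(p), 3))

At a 66% invalidity rate, "always invalid" already scores 66%; at a 50% rate, the same strategy is literally a coin flip. So the respondents' accuracy has to be judged against a baseline that shifted as the case law shifted.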

An interesting takeaway from this study was that prosecutors were more accurate (sometimes much more so) than patent litigators. My initial thought was that perhaps litigators were being obtuse, but it occurs to me that two other things may be going on. First, prosecutors are in the business of getting patents (a point the paper makes) and thus have a completely different perspective on validity. In other words, they may have a validity bias (that was where the primary differences were), whereas litigators (mostly defense-side, as the paper notes) might think everything is invalid. (OK, maybe they were being obtuse.) Second, prosecutors and litigators typically work under different claim construction standards. Interestingly, this cuts the other way, because the broadest reasonable construction, in my view, lends itself to more invalidity findings.

Finally, the takeaways from this paper depend on what you want from the system. Predictions were between 60% and 67% accurate overall (with much more uncertainty about the valid patents). The paper argues that this isn't so bad, especially when you consider that respondents spent little time and didn't have the priority date or other context. I'm not so sure. The best results came from picking invalid claims, but that's easy because the invalid ones pop out at you. Instead, the failure to pick valid claims means that a) folks think eligible claims are not eligible, b) they spend money trying to show it, and c) there are potential appellate issues. This isn't great. And it means that when writing patents, prosecutors know what won't fly, but will be wrong half the time or more when they think something will fly. That's not so great in my view. But that's my view. If your view is that patentable subject matter doctrine is no better than a coin flip, then attorneys were definitely able to beat that, especially in rooting out invalid claims.
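As a quick, hedged illustration of the coin-flip comparison (the counts below are hypothetical; only the 60% to 67% accuracy range comes from the draft, and I picked 63% arbitrarily within it):

# One-sided binomial test: does ~63% accuracy beat a 50% coin flip?
from scipy.stats import binomtest

correct, total = 630, 1000  # hypothetical counts implying 63% accuracy
result = binomtest(correct, total, p=0.5, alternative='greater')
print(result.pvalue)  # a tiny p-value: well above chance at this sample size

Of course, beating a coin flip is a low bar; the more telling comparison is against the "always invalid" baseline sketched above, which respondents cleared by much less.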
