Predictive or Responsive? LegalTech was all about changing the review process

Friday, February 10, 2012

In case anyone reading this was on the moon last week – LegalTech 2012 went down in NYC. Like every LegalTech, there was a key buzz topic or focus, and this year, as last year, it was predictive tagging/coding. Quite a few panels touched on the subject, directly and indirectly, and needless to say the exhibit hall overflowed with vendors explaining their pitch and view on it. For coverage of the non-predictive side of the show, see Brendan McKenna’s article on Law.com.

So what was new with the topic this year? To be honest, not much. Sure, there were some new advances in technology and some great marketing campaigns, and plenty of conversation and discussion about just what it is, how to use it, and how to defend its use to opposing counsel or a judge. This is not to suggest in any way that this year’s LegalTech was a rehash of last year’s. Rather, it demonstrates that this process and technology are still finding their way into the actual practice of e-discovery. And like many things in the law, change just takes time.

While we are all bearing witness to new technological advances in the discovery realm, along with a corresponding maturing of the processes and procedures surrounding the practice of discovery/e-discovery, there is still no panacea that delivers 100% accurate and efficient document review. Well, at least not one contained in a software box or an easy-to-deploy new best practice. Rather, the best solution remains what it has always been: review of the material by a human (lawyer or not), paired with a diligent and thorough quality-check system. Both can be assisted by a computer or technology.

One of the key studies (pdf) most cited in this arena is by Maura Grossman and Gordon Cormack. Its findings indicate that exhaustive human review can actually be less accurate than technology-assisted review, but that the best results come from a combination of the two. The technology depends on human input from the outset. In one approach, the system surfaces a number of “seed” documents that the human then reviews and refines for further “seeding” or grouping. In another, the system “learns” from a series of review calls the human makes and then extrapolates those calls across the entire review set. Either way, we are not yet at the point where this process needs no human input, and we are not just talking about someone to flip the proverbial “on/off” switch on the machine. The process still demands substantive review by human eyes.

Can technology help focus the human? Yes. Can it help streamline the process? Yes. Can it even help make some preliminary calls as to issues or relevance? Some types can, sure. Will it replace human reviewers altogether? Nope – not until the very basis of our legal system changes. The profession still treats counseling and lawyerly work as the work of a human being. It requires interpretation and judgment calls based on the particular circumstances of the matter and its known nuances, which do not have to relate directly to the review material at hand: they may be opposing counsel’s reactions to prior productions, or the magistrate or judge’s proclivities toward certain issues in the underlying matter. Humans use all of these realities to form an opinion or “gut” feeling about something. Intuition is not so easily programmed or even captured by this technology, and that will limit its use and effectiveness.
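For the technically curious, here is a minimal sketch of the second approach described above – training a model on a reviewer’s calls and extrapolating them across the rest of the set. It is purely illustrative: the sample documents are made up, and scikit-learn’s TfidfVectorizer and LogisticRegression are assumed as stand-ins for whatever the commercial tools actually use under the hood.

```python
# Illustrative sketch only: a toy "responsive technology" loop.
# Assumes scikit-learn; vendors' internals are not public, so this
# stands in for the general idea, not any particular product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Documents a human reviewer has already coded (the "seed" set).
reviewed_docs = [
    "Q3 revenue forecast attached per our discussion",
    "Lunch on Friday? The usual place works for me",
    "Draft of the licensing agreement for your review",
    "Fantasy football picks are due tonight",
]
reviewer_calls = [1, 0, 1, 0]  # 1 = responsive, 0 = not responsive

# "Learn" from the reviewer's calls.
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(reviewed_docs)
model = LogisticRegression()
model.fit(X_train, reviewer_calls)

# Extrapolate those calls across the unreviewed population.
unreviewed_docs = [
    "Please countersign the amended license terms",
    "Company picnic rescheduled to next weekend",
]
scores = model.predict_proba(vectorizer.transform(unreviewed_docs))[:, 1]
for doc, score in zip(unreviewed_docs, scores):
    print(f"{score:.2f}  {doc}")

# A human still has to review the output, refine the calls, and feed
# corrections back in -- the machine only responds to that input.
```

The point of the sketch is the workflow, not the math: the model never does anything until a person supplies the coded examples, and its output is a ranking for a person to check, not a final call.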

So while the marketers and proclaimers of this technology label it “predictive,” perhaps the better term would be “responsive” – as in, the computer is still responding to the direction of a human. Some label this technology “computer assisted,” which is more in line with what actually happens, but that sounds too generic, too obvious. Isn’t almost everything we do these days computer assisted? So for the time being maybe we just go with something like “Dynamic Review.” Or, because everyone loves a great acronym, we could use HIRT for “Human Input – Responsive Technology.” Whatever you call it, human reviewers are not going away any time soon.
