13 July 2012

The next generation of legal review

By John Hudson, electronic consultant at Kroll Ontrack Legal Technologies


Today’s surge of electronically stored information (ESI) has forced companies involved in document-intensive litigation to adopt a different approach to document review. Lawyers need to search and review enormous volumes of ESI in short time periods to assess the merits of their own case and to produce relevant data to opposing parties.

The most expensive part of many cases has become the disclosure exercise, where costs continually skyrocket, even with the assistance of electronic filters to reduce volume. Cost-effective and reliable document review requires lawyers to find key data quickly and accurately, and to protect privileged and confidential data, without cutting corners.

Where data volumes have outpaced the feasibility of manual review and the capabilities of keyword searching, a new generation of intelligent review technologies (IRT) is now available to help reduce the expense and burden of document review. These technologies automatically identify, prioritise and tag the documents that are most likely to be relevant to a case, helping lawyers to identify the important documents faster and reducing human effort and cost.

Used properly, IRT offers many benefits, and now that it has piqued the interest of the courts and the public at large, guidance is beginning to emerge about how to use it defensibly.

How does IRT work?

Some of the new technologies that have emerged intelligently prioritise documents as they pass through a workflow. Those documents most likely to be responsive are moved to the front of the queue in the course of a linear review. Other technologies go a step further, and recommend how documents should be categorised, indicating the degree of confidence for these recommendations (intelligent categorisation).

What these technologies have in common is that they learn from a sample set of documents reviewed by competent reviewers. The observed logic is then applied to the remainder, which not only saves time and cost but can also increase the accuracy of the review. There is a growing acknowledgement that human review is flawed by inaccuracies and inconsistencies in decision-making, and IRT helps to address this.

The distinguishing characteristics of IRT are:

  • Workflow automation, which minimises human work and inconsistencies in the distribution and routing of documents to members of a review team at different stages in the review process.
  • Supervised learning, which learns from a sample set of manually reviewed documents and automatically produces statistical models for the prioritisation and categorisation of the remaining documents in a large document collection.
  • Statistical quality control, particularly the use of sampling, which is used to monitor the progress and effectiveness of the prioritisation, categorisation and review decisions. This can also be used to support and defend decisions to stop reviewing.
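To make the supervised learning step more concrete, the following is a minimal, hypothetical sketch in Python using the open-source scikit-learn library. It is not Kroll Ontrack's implementation: the documents, labels and confidence threshold are invented purely to illustrate how a model trained on a reviewed sample can prioritise, and suggest categories for, the rest of a collection.

# A minimal sketch of supervised learning for review prioritisation and
# categorisation. Hypothetical data and thresholds, for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# A small manually reviewed sample (1 = relevant, 0 = not relevant)
reviewed_docs = ["board minutes discussing the merger terms",
                 "canteen lunch menu for next week"]
reviewed_labels = [1, 0]

# The rest of the collection, not yet reviewed
unreviewed_docs = ["email chain about merger due diligence",
                   "invitation to the office party"]

# Learn a statistical model from the reviewed sample
vectoriser = TfidfVectorizer()
model = LogisticRegression().fit(vectoriser.fit_transform(reviewed_docs),
                                 reviewed_labels)

# Score the remaining documents: probability of relevance
scores = model.predict_proba(vectoriser.transform(unreviewed_docs))[:, 1]

# Intelligent prioritisation: push likely relevant documents to the front
priority_order = sorted(zip(unreviewed_docs, scores),
                        key=lambda pair: pair[1], reverse=True)

# Intelligent categorisation: auto-tag only where the model is confident
CONFIDENCE = 0.8
for doc, score in priority_order:
    if score >= CONFIDENCE:
        tag = "relevant"
    elif score <= 1 - CONFIDENCE:
        tag = "not relevant"
    else:
        tag = "needs human review"
    print(f"{score:.2f}  {tag}:  {doc}")

In a real review the training sample is far larger, the features richer and the thresholds validated by sampling, but the underlying idea of train, score, prioritise and categorise is the same.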

Past v present

The huge amount of data now encountered in disclosure, even after traditional filtering technologies such as date and keyword filters have been applied, makes a linear document-by-document review economically untenable. Keyword searching is recognised as something of a blunt instrument: it can produce false hits and yield either too many or too few documents, and it is time consuming to conduct the iterations needed to get accurate results. By bridging the gap between keyword searching and full human review, IRT helps lawyers and companies to work productively while still reaching the same goals.

What do the courts think?

In the high-profile US case of Da Silva Moore v Publicis Groupe & MSL Group 11 Civ 1279, Judge Peck approved the use of predictive coding technology in e-discovery for the first time. The plaintiffs objected and appealed, stating that it lacked generally accepted reliability standards.

The appeal judge held that it would be extremely difficult to definitively ascertain whether predictive coding was less reliable than traditional keyword searching and stated that “there is simply no review tool that guarantees perfection”. Discovery is currently stayed pending the outcome of various other decisions which may impact on it.

In April, in Global Aerospace Inc v Landow Aviation, a court in Virginia approved the defendants' use of predictive coding, subject to objections by the plaintiffs. After traditional data filtering methods such as de-duplication and keyword searches, the defendants were left with two million documents, which they estimated would take approximately 20,000 hours and $2m to review manually. In a detailed protocol, which no doubt aided their cause, they outlined how they would train the system and use statistical sampling to measure the technology's recall.
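As an aside on what measuring recall by sampling can look like in practice, here is a minimal sketch with invented numbers, not anything drawn from the Global Aerospace protocol: reviewers label a random sample of the collection by hand, and recall is the fraction of the relevant documents in that sample that the system had retrieved.

import random

def estimate_recall(sample):
    # sample: list of (human_says_relevant, system_retrieved) pairs for a
    # random sample of the whole collection (hypothetical data)
    relevant = [pair for pair in sample if pair[0]]
    if not relevant:
        return None  # no relevant documents in the sample
    found = sum(1 for human, system in relevant if system)
    return found / len(relevant)

# e.g. a 500-document sample containing 60 relevant documents,
# 54 of which the system had retrieved -> estimated recall of 0.90
sample = [(True, True)] * 54 + [(True, False)] * 6 + [(False, False)] * 440
random.shuffle(sample)
print(f"Estimated recall: {estimate_recall(sample):.2f}")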

In both of these cases, the courts placed the emphasis on methodology rather than on the inner workings of the technology, unlike in Kleen Products. The judge in that anti-trust case, which is ongoing in the Northern District of Illinois, has requested formal expert reports and evidence on the adequacy and sufficiency of keyword searching and predictive coding.

The Civil Procedure Rules in England and Wales encourage the use of technology to ensure efficient document management and to help reduce the burden of going through large quantities of data. In Goodale v Ministry of Justice [2009] EWHC B41 (QB), Senior Master Whitaker said he was aware of prioritisation technology. It appears from recent public comments that the judiciary in the UK are likely to endorse the use of technology like this, including predictive coding, because it offers a pragmatic and proportionate approach to e-disclosure.

Promising results

In a recent project using Kroll Ontrack’s Ontrack Inview technology, two lawyers ‘trained’ the system to intelligently prioritise and then categorise a set of 92,000 documents. They reviewed 33,000 documents using intelligent prioritisation and then intelligent categorisation until they were satisfied that the system was producing accurate results.

When it came to reviewing the remaining 59,000 documents, the lawyers sampled the categorisations determined by Ontrack Inview, expecting an error rate comparable to human review (8%). The results were very positive: using this method (which has arguable parallels to managing a team of temporary reviewers), the error rate was shown to be 0.8%, suggesting accuracy ten times greater than that of a standard team of human reviewers.
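For readers curious how a sampled error rate such as the 0.8% above can be checked, here is a minimal sketch, assuming reviewers re-examine a random sample of machine-categorised documents and record any disagreements; the figures and function name are illustrative and are not taken from the project.

import math

def error_rate_with_ci(sample_pairs):
    # sample_pairs: (machine_tag, human_tag) for a random sample of
    # machine-categorised documents re-checked by hand; returns the observed
    # error rate and a rough 95% normal-approximation confidence interval
    n = len(sample_pairs)
    errors = sum(1 for machine, human in sample_pairs if machine != human)
    p = errors / n
    margin = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - margin), min(1.0, p + margin))

# Hypothetical sample: 1,000 re-checked documents with 8 disagreements
sample = [("relevant", "relevant")] * 992 + [("relevant", "not relevant")] * 8
rate, (low, high) = error_rate_with_ci(sample)
print(f"Observed error rate: {rate:.1%} (95% CI roughly {low:.1%} to {high:.1%})")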

Clearly the technology must not be judged by the results of one project (whether positive or negative), but cases like this one show that IRT offers much to be excited about.

Future best practice

IRT helps litigants to contain the cost of disclosure, strengthen the defensibility of their approach to document review and assess the merits of a case early on, all of which helps keep the overall cost of litigation in check. This is the future, and it is clearly an area where lawyers and technical experts will need to work together to develop an approach that is both effective and defensible, along with the scientific evidence to justify it.
