Legal AI Misses the Bullseye

Posted by Tom O'Connor | Mon, Aug 16, 2021

Digital WarRoom created and updates GIST, a TAR 2.0 coding tool, often characterized as an AI tool. In the last five years, Digital WarRoom customers have primarily used GIST as a document review prioritization tool.

In this article, Tom O'Connor, our favorite critic, debunks heavy reliance on tools that claim to have AI, noting that legal strategy must necessarily rely on attorney intelligence (AI).



Last week I wrote an article comparing understanding AI in the legal world to making sausages and, in that discussion, compared the accompanying vendor hype around AI to the posterior of equine mammals. I received several comments, most of them positive. Several took exception to my analysis, and as always when I receive thoughtful criticism, I took another look at my post with one question in mind: was I exaggerating? 

The question resolved itself to a certain degree because, in the short time since I wrote that article, two separate new pieces on the subject were published. They seem to make the same case I did, although not quite as stridently as my post. 

First was an Aug. 4 report in the PinHawk Technology Digest (a source I use daily and used for a different cite in my first post) about an article by WSJ technology journalist Christopher Mims entitled Why Artificial Intelligence Isn’t Intelligent. 


The article is behind a paywall, so you may not be able to read it, but here are a few relevant quotes. 

“A funny thing happens among engineers and researchers who build artificial intelligence once they attain a deep level of expertise in their field. Some of them—especially those who understand what actual, biological intelligences are capable of—conclude that there’s nothing “intelligent” about AI at all.” 


“In a certain sense, I think that artificial intelligence is a bad name for what it is we’re doing here,” says Kevin Scott, chief technology officer of Microsoft. “As soon as you utter the words ‘artificial intelligence’ to an intelligent human being, they start making associations about their own intelligence, about what’s easy and hard for them, and they superimpose those expectations onto these software systems.” 


“But the muddle that the term AI creates fuels a tech-industry drive to claim that every system involving the least bit of machine learning qualifies as AI and is therefore potentially revolutionary. Calling these piles of complicated math with a narrow and limited utility “intelligent” also contributes to wild claims that our “AI” will soon reach human-level intelligence. These claims can spur big rounds of investment and mislead the public and policymakers who must decide how to prepare national economies for new innovations.” 


“When AI researchers say that their algorithms are good at “narrow” tasks, what they mean is that, with enough data, it’s possible to “train” their algorithms to, say, identify a cat. But unlike a human toddler, these algorithms tend not to be very adaptable. For example, if they haven’t seen cats in unusual circumstances—say, swimming—they might not be able to identify them in that context. And training an algorithm to identify cats generally doesn’t also increase its ability to identify any other kind of animal or object. Identifying dogs means more or less starting from scratch.” 


“Mr. Scott describes AI in similarly mundane terms. Whenever computers accomplish things that are hard for humans—like being the best chess or Go player in the world—it’s easy to get the impression that we’ve “solved” intelligence, he says. But all we’ve demonstrated is that in general, things that are hard for humans are easy for computers, and vice versa.” 


“AI algorithms, he (Mr. Scott) points out, are just math. And one of math’s functions is to simplify the world so our brains can tackle its otherwise dizzying complexity. The software we call AI, he continues, is just another way to arrive at complicated mathematical functions that help us do that.” 


“Some experts in AI think its name fuels confusion and hype of the sort that led to past ‘AI winters’ of disappointment. Once we liberate ourselves from the mental cage of thinking of AI as akin to ourselves, we can recognize that it’s just another pile of math that can transform one kind of input into another—that is, software.” 


“In its earliest days, in the mid-1950s, there was a friendly debate about what to call the field of AI. And while pioneering computer scientist John McCarthy proposed the winning name—artificial intelligence—another founder of the discipline suggested a more prosaic one.” 

“Herbert Simon said we should call it ‘complex information processing,’ ” says Dr. Melanie Mitchell (an AI researcher and professor at the Santa Fe Institute with more than a quarter-century of experience in the field). “What would the world be like if it was called that instead?” 


The second article, published this morning, the 6th of August, is entitled Analytics and Predictive Coding Technology for Corporate Attorneys: Demystifying The Jargon. 


It is also behind a paywall, so again here are a few salient quotes from the three authors: Jennifer Swanton, Legal Director and Discovery Counsel for Medtronic Inc., a medical device company; John Del Piero, Vice President, Global eDiscovery Solutions at eDiscovery vendor Lighthouse; and Shannon Capone Kirk, E-Discovery Counsel at Boston law firm Ropes & Gray and, parenthetically, one of my favorite writers given her stylistic clarity and lack of jargon. 


“ … tech folks tend to forget that the majority of their clients don’t live in the world of developing and evaluating new technology, day in and day out. Thus, they may use terms that are often confusing to their legal counterparts (and sometimes use terms that don’t match what the technology is capable of in the legal world).” 


This was a major point in the article I released earlier this month, where I decried the pontification by techies who “…spoke math as a first language.”


“It is important to remember though that the term AI can refer to a broad range of technology with very different capabilities. “AI” in the legal world is currently being used as a generalized term, and legal consumers of such technologies should press for specifics—not all “AI” is the same, or, in several cases, even AI at all.” 


Suggested coding by complex algorithms is not a legal strategy. It may be a tactic to meet disclosure deadlines and obligations, but it falls well short of predicting the best strategy to allow your client to prevail.


“Also, predictive coding is a bit of a misnomer, as the tools don’t predict or code anything. A human reviewer is very much involved. Simply put, it refers to a form of machine learning, wherein humans review documents and make binary coding calls: what is responsive and what is non-responsive. 

“The key takeaway from these definitions is that even though all the technology described above may technically fall into the “AI” bucket, there is an important distinction between predictive coding/TAR technology and advanced analytics technology that uses AI and NLP. The distinction is that predictive coding/TAR is a much more technologically limited method of ranking documents based on binary human decisions, while advanced analytics technology can analyze the context of human language used within documents to accurately identify a wide variety of concepts and sentiment within a dataset. Both tools still require a good amount of interaction with human reviewers, and both are not mutually exclusive. In fact, in many investigations, it is often very efficient to employ both conceptual analytics and TAR simultaneously in a review.”
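The binary-coding loop the authors describe is easy to sketch. What follows is a deliberately simple illustrative toy, not GIST or any vendor's actual algorithm: human reviewers make binary responsive/non-responsive calls on a seed set, a naive word-frequency model is trained on those calls, and the unreviewed pile is then ranked for prioritized review. All function and variable names are hypothetical.

```python
from collections import Counter
import math

def train(reviewed):
    """reviewed: list of (text, is_responsive) human coding calls."""
    pos, neg = Counter(), Counter()
    n_pos = n_neg = 0
    for text, responsive in reviewed:
        tokens = text.lower().split()
        if responsive:
            pos.update(tokens)
            n_pos += 1
        else:
            neg.update(tokens)
            n_neg += 1
    return pos, neg, n_pos, n_neg

def score(model, text):
    """Naive-Bayes-style log-odds that a document is responsive."""
    pos, neg, n_pos, n_neg = model
    s = math.log((n_pos + 1) / (n_neg + 1))
    for tok in text.lower().split():
        # Add-one smoothing so unseen words do not zero out the score.
        s += math.log((pos[tok] + 1) / (neg[tok] + 1))
    return s

# Human reviewers make the binary calls on a seed set...
reviewed = [
    ("crop loss damages contract", True),
    ("planting contract farmers dispute", True),
    ("office party invitation", False),
    ("cafeteria menu update", False),
]
model = train(reviewed)

# ...and the model merely ranks the unreviewed pile for prioritized review.
unreviewed = ["contract damages claim", "holiday party menu"]
ranked = sorted(unreviewed, key=lambda d: score(model, d), reverse=True)
```

Note what this toy makes plain: nothing here "predicts" or "codes" anything on its own. The output is a ranking of documents by similarity to prior human decisions, which is the authors' distinction between TAR-style ranking and richer analytics in a nutshell.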


After reading these two articles, I called several people in the vendor community to get their opinions. Most were not willing to speak on the record, but the comment below, from a source who asked to remain anonymous, was typical: 


“‘Artificial Intelligence’ is a trigger word. What exactly does each vendor mean when they say ‘AI’? Is it as simple as ‘deduplication’, ‘threading’, ‘clustering’, ‘predictive coding’, ‘natural language processing’, ‘sentiment analysis’? Each case can use any number or all of these things depending on the goals of the project.” 
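The anonymous source's point is easy to demonstrate: some items on that list involve no learning at all. Exact deduplication, for instance, is typically just content hashing. A minimal standard-library sketch (the function name and document IDs are hypothetical):

```python
import hashlib

def dedupe(docs):
    """Exact-duplicate detection: no learning involved, just content hashing.
    docs: dict of doc_id -> text. Returns one representative ID per unique text."""
    seen = {}
    for doc_id, text in docs.items():
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        seen.setdefault(digest, doc_id)  # keep the first occurrence only
    return sorted(seen.values())

docs = {
    "DOC-001": "Quarterly report attached.",
    "DOC-002": "Quarterly report attached.",   # exact duplicate of DOC-001
    "DOC-003": "Please see the revised draft.",
}
```

Whether a pipeline like this deserves the "AI" label is exactly the question the source is raising; real review platforms layer fuzzier techniques (near-duplicate detection, threading) on top, but the core operation shown here is deterministic.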



One person who was willing to speak on the record was Bill Gallivan, COO, CFO and co-founder of Digital WarRoom. Bill believes that “Legal AI requires specificity of scope, and the scope of current applications is NOT very wide.” 

What does he mean by that? He expounded by saying: “AI in Discovery may have value in a precise application or scenario, e.g., disclosure obligations when budget and time are restricted.  (At DWR we use what we call marks to handle this function.) And AI has no equal when interpreting documents to mold and shape case strategy in areas such as IP claim construction.” 


“But AI cannot analyze the impact of disclosed documents on a point of law that has not yet been contemplated for your client's matter. (Here DWR, like many programs, uses issue codes for this function.) And AI has not proven very useful in a core attorney service, that of case strategy formulation and refinement. 

“For example, AI can find documents on root vegetables planted last fall. It may find not just names of root vegetables, but also farmers and planting methods not always known, yet still returned as highly responsive.” 


“But the real intelligence in such a matter comes from attorney-learned intelligence. Were the carrots incorrectly planted before the sweet potatoes by contract farmers claiming to be experts in untreated acidic soil, thereby causing significant loss of crops and resulting in damages in excess of $10 million?” 


That’s the intelligence that only a human can bring to bear. As I said in my column on this subject in 2019, AI should be an acronym for attorney intelligence. 

Otherwise, as Kevin Scott of Microsoft pointed out above, we’re just left with “piles of math.” 

Topics: Trends, Features