
Should Artificial Intelligence Practice Law?


Simon Krauss
Deputy General Counsel

Jun 27, 2018

As in many other professions, artificial intelligence (AI) has been making inroads into the legal profession. A service called DoNotPay uses AI to defeat parking tickets and arrange flight refunds. JPMorgan Chase reduced its legal staff and now uses AI to perform 360,000 hours’ worth of contract review in seconds. And a number of legal services can conduct legal research (e.g., ROSS Intelligence), perform contract analysis (e.g., Kira Systems and LawGeex), and help develop legal arguments in litigation (e.g., Casetext).

Many of these legal AI companies are only a few years old; clearly, there are more AI legal services to come. Current law allows only humans who have passed a bar exam to practice law. But if non-humans could practice law, should we want AI lawyers? The answer may depend on how we want our legal analysis performed.

AI Thinking

Today, when people talk about AI, they are usually referring to machine learning. Machine learning has been around for many years, but because it is computationally intensive, it was not widely adopted until recently. In years past, if you wanted a computer to perform an operation, you had to write code that told the computer what to do step by step. If you wanted a computer to identify cat pictures, you had to code into the computer the visual elements that make up a cat, and the computer would match what it “saw” against those elements to identify a cat.

With machine learning, you instead provide the computer with a model that can learn what a cat looks like and then let the computer review millions of cat (and non-cat) pictures, reinforcing the model when it correctly discerns a cat and correcting it when it does not. Note that we have no idea how the computer structured the data it used to identify a cat; we see only the results of the identification. The upshot is that the computer develops a probabilistic model of what a cat looks like, such as “if it has pointy ears, is furry, and has eyes that can penetrate your soul, there is a 95 percent chance that it is a cat.” And there is room for error. I’ve known people who fit that cat description. We all have.
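To make the contrast concrete, here is a minimal sketch of the machine-learning approach in Python, using scikit-learn as an example library. The three hand-labeled features stand in for what a real model would extract from millions of pixels; the data and feature names are invented for illustration.

```python
# A minimal sketch of the probabilistic approach described above.
# Each row is one picture: [pointy_ears, furry, soul_piercing_eyes];
# label 1 means "cat". All data here is invented for illustration.
from sklearn.linear_model import LogisticRegression

X = [
    [1, 1, 1],  # cat
    [1, 1, 0],  # cat
    [0, 1, 1],  # cat
    [0, 0, 0],  # not a cat
    [1, 0, 0],  # not a cat
    [0, 1, 0],  # not a cat
]
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X, y)

# The model never states a rule; it only reports a probability.
p_cat = model.predict_proba([[1, 1, 1]])[0][1]
print(f"Probability this is a cat: {p_cat:.0%}")
```

Whatever “rule” the model learned is buried in its fitted weights; all we ever see is the probability it reports.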

Lawyer Thinking

A lawyer applying legal reasoning to identifying cat pictures will first become well versed in the legal requirements for what pictorial elements, taken together, make up a cat picture. The lawyer will then look at a proposed cat picture, review each element of the picture against each legally required element of a cat, and arrive at a statement like, “Because the picture shows an entity with pointy ears, fur, and soul-penetrating eyes, this leads to the conclusion that this is a picture of a cat.”
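In code terms, the lawyer’s analysis looks less like a fitted model and more like a rule that either applies or does not. Here is a minimal sketch reusing the same illustrative elements (the function and element names are mine, not any legal standard):

```python
# A sketch of element-by-element legal analysis: every required
# element must be present, and the conclusion follows from the rule
# itself rather than from a probability.
REQUIRED_CAT_ELEMENTS = ("pointy ears", "fur", "soul-penetrating eyes")

def is_cat_picture(observed_elements):
    """The picture is a cat picture if and only if every legally
    required element is present in it."""
    return all(e in observed_elements for e in REQUIRED_CAT_ELEMENTS)

picture = {"pointy ears", "fur", "soul-penetrating eyes"}
print(is_cat_picture(picture))  # True: the elements compel the conclusion
```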

Unlike in machine learning, the room for error here does not lie in the probability that the legally required cat elements match the proposed cat picture. The room for error lies in the lawyer’s interpretation of the cat elements as they relate to the proposed cat picture. That is because the lawyer uses a causal analysis to reach his or her conclusion, whereas AI uses probability. Law is causal. To win a personal injury or contracts case, the plaintiff needs to show that a breach of duty or of contractual performance caused damages.

In a criminal case, the prosecutor needs to demonstrate that a person with a certain mental intent took physical actions that caused a violation of law. Probability appears in the law only when it comes to picking the winner of a court case. In civil cases, the plaintiff wins with “a preponderance of the evidence” (51 percent or better). In criminal cases, the prosecution wins if the judge or jury is convinced “beyond a reasonable doubt” (roughly 98 percent or better). Unlike in machine learning, probability is used to judge the success of the causal reasoning; it is not used in place of causal reasoning.
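Put in code terms, probability enters only as a threshold applied to the fact-finder’s confidence in the causal story. Here is a sketch that uses the 51 percent and 98 percent figures above as illustrative thresholds, not doctrinal constants:

```python
# Probability judges the causal reasoning; it does not replace it.
# Thresholds are the rough figures quoted above, for illustration only.
STANDARDS = {
    "civil": 0.51,     # preponderance of the evidence
    "criminal": 0.98,  # beyond a reasonable doubt (approximate)
}

def verdict(confidence_in_causal_story, case_type):
    threshold = STANDARDS[case_type]
    if confidence_in_causal_story >= threshold:
        return "plaintiff/prosecution wins"
    return "defense wins"

print(verdict(0.60, "civil"))     # plaintiff/prosecution wins
print(verdict(0.60, "criminal"))  # defense wins
```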

Lawyer or Machine?

Whether a trial hinges on a causal or probabilistic analysis may seem like a philosophical exercise devoid of any practical impact. It’s not. A causal analysis looks at causation. A probabilistic analysis looks at correlation. Correlation does not equal causation. For example, just because there is a strong correlation between an increase in ice cream sales and an increase in murders doesn’t mean you should start cleaning out your freezer.
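A toy example makes the trap visible: two series driven by a common cause (summer heat) correlate almost perfectly, even though neither causes the other. All numbers below are invented for illustration.

```python
# Correlation without causation: ice cream sales and murders both
# track temperature, so they correlate strongly with each other.
# (statistics.correlation requires Python 3.10+.)
from statistics import correlation

monthly_temp    = [30, 35, 45, 55, 65, 75, 85, 84, 70, 58, 44, 33]
ice_cream_sales = [20, 22, 30, 45, 60, 80, 95, 93, 70, 48, 30, 21]
murders         = [10, 11, 13, 16, 20, 25, 30, 29, 22, 17, 13, 10]

# All three correlations come out close to 1.0.
print(f"sales vs murders: {correlation(ice_cream_sales, murders):.2f}")
print(f"temp  vs sales:   {correlation(monthly_temp, ice_cream_sales):.2f}")
print(f"temp  vs murders: {correlation(monthly_temp, murders):.2f}")
```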

I don’t think we want legal analysis to change from causation to correlation, so until machine learning can manage a true causal analysis, I don’t think we want AI acting as lawyers. However, AI is still good at a lot of other things, and we are putting it to work at CableLabs and Kyrio. Subscribe to our blog to learn more about what we are working on in the field of AI.

