Why Big Data & How Does It Work?

In the art and science of trying cases, for too long we’ve focused almost entirely on the art. We are committed to building the science.

There is an art to trying cases.

Knowing when to show jurors a critical piece of evidence, or whether to show it at all, requires intuition and instinct. Cross-examination can be planned, but ultimately putting someone through the crucible of cross-examination requires a nimble mind. An opening statement, and even more so a closing argument, is an act closer to theater than science.

Law is a human endeavor, and so lawyers have appropriately spent time on the art of trying a case. But for a variety of reasons, relatively little has been done to focus on the science of trying a case. And so, while Google, DeepMind, Facebook, Apple, and others began to reimagine the internet, language recognition, how cars are driven, how CTs are read, and much more, the law continued to be a field filled with lessons from sages, common truths spread at conferences, and instincts honed from, if a lawyer is very lucky, two or three trials a year.

Campbell Law is based on a claim that a few years ago was unthinkable, and even now is controversial. Cases should not be decided on gut instinct or even experience alone. Cases require scientific investigation, big data, and actual studies using mock jurors. Cases require the best information possible, developed rigorously from large samples and analyzed with reliable statistical methods. Because when talented lawyers have better information, they get even better results.

If we are going to put more science into the law, how do we do it?

It starts with a simple but, for many, new truth: all the questions in your case are empirical.

An empirical question is one that can be answered with data. “Is lemon pie better than coconut pie?” is not an empirical question. But “Do more people prefer lemon pie or coconut pie?” is. We can gather 100 people, ask them, total the results, and report them. Recognizing the difference between the two questions matters.
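Here is what “total the results and report them” looks like in practice, as a minimal Python sketch; the responses below are made up purely for illustration.

```python
from collections import Counter

# Hypothetical answers from 100 people to "Do you prefer lemon pie or coconut pie?"
answers = ["lemon"] * 58 + ["coconut"] * 42

counts = Counter(answers)
for pie, n in counts.most_common():
    print(f"{pie}: {n} of {len(answers)} ({n / len(answers):.0%})")
```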

Is my client a good witness? That’s an empirical question. Present your case to a group of mock jurors. As part of that case, play 10 minutes of your client’s video deposition (including some good and bad answers). Ask whether your client is likable. Ask whether your client is credible. You’ll have specific percentages. If 80% say your client is credible, then you know that, playing the odds, your client is doing ok.
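As a rough sketch of that tally, assuming each mock juror answers yes or no on likability and credibility (the responses below are hypothetical):

```python
# Hypothetical mock-juror responses: did each juror find the client likable / credible?
responses = [
    {"likable": True,  "credible": True},
    {"likable": False, "credible": True},
    {"likable": True,  "credible": False},
    # ... one entry per mock juror
]

def pct(flags):
    """Percentage of yes answers in a list of True/False responses."""
    return 100 * sum(flags) / len(flags)

print(f"Likable:  {pct([r['likable'] for r in responses]):.0f}%")
print(f"Credible: {pct([r['credible'] for r in responses]):.0f}%")
```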

And don’t stop there. Have the jurors vote on liability after hearing the case and seeing your client. Look at the win rate (the percentage of jurors who voted for liability). If 75% of the jurors who find your client credible vote for liability, and 75% of those who don’t find her credible also vote for liability, what did you learn? You learned your client’s testimony isn’t that important. Whether jurors believe her or not, they vote the same on liability. So quit worrying about your client. Her testimony isn’t driving results. If, on the other hand, you win 80% of the time among jurors who find your client credible but only 30% of the time among those who don’t, then your client’s credibility matters a great deal.
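Splitting the win rate by credibility judgment is the same kind of arithmetic. A minimal sketch, assuming each mock juror’s record holds a yes/no on credibility and a liability vote (the records and field names are hypothetical):

```python
# Hypothetical juror records: did the juror find the client credible,
# and did the juror vote for liability (a "win")?
jurors = [
    {"credible": True,  "liability": True},
    {"credible": True,  "liability": True},
    {"credible": False, "liability": False},
    {"credible": False, "liability": True},
    # ... one record per mock juror
]

def win_rate(records):
    """Share of jurors in this group who voted for liability."""
    return 100 * sum(r["liability"] for r in records) / len(records)

found_credible = [r for r in jurors if r["credible"]]
not_credible = [r for r in jurors if not r["credible"]]

print(f"Win rate when the client is seen as credible:     {win_rate(found_credible):.0f}%")
print(f"Win rate when the client is NOT seen as credible: {win_rate(not_credible):.0f}%")

# A wide gap between the two numbers means credibility is driving verdicts;
# near-identical numbers mean it isn't.
```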

How about this question:

Does the defense animation hurt my case? We could study this in lots of ways. One might be an A/B test. An A/B test just means some jurors see Version A and some see Version B. So we present the case to two sets of jurors. Show group A the case with the defense animation. Show group B the exact same case, but without the animation. Measure the win rate, damages, and fault in each group. Which version is better? If Version A (with the defense animation) produces a lower win rate, the animation hurts. If it produces a higher win rate, the animation actually helps you, in which case you thank the defense in your head and lick your lips at the idea of them playing it in trial.
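A sketch of that comparison in Python, with made-up vote counts and a standard two-proportion z-test added to check whether the gap is bigger than sampling noise (one common choice of test, not necessarily the only one you would use):

```python
from math import sqrt, erf

# Hypothetical results: liability votes out of the jurors shown each version.
wins_a, n_a = 54, 100   # Version A: the case WITH the defense animation
wins_b, n_b = 67, 100   # Version B: the exact same case WITHOUT the animation

rate_a, rate_b = wins_a / n_a, wins_b / n_b
print(f"Win rate with the animation:    {rate_a:.0%}")
print(f"Win rate without the animation: {rate_b:.0%}")

# Two-proportion z-test: how likely is a gap this large if the animation made no difference?
pooled = (wins_a + wins_b) / (n_a + n_b)
se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
z = (rate_b - rate_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided normal approximation
print(f"z = {z:.2f}, p = {p_value:.3f}")
```

Damages and fault can be compared the same way, using each group’s averages instead of win rates.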

These questions, and many, many more, are empirical questions. Take a look at the list below. All are empirical questions. Do you understand why? Have you ever actually tested any of them before?

  • Should I dismiss the individual defendant?
  • How much fault will my client receive?
  • How much should I ask for in damages?
  • If the court excludes my OSIs, will I still win my case?
  • Should I begin with the conduct of the defendant or the severity of my client’s injury?
  • Does the fact my client needs an interpreter alter how jurors see her?
  • What perspective/angle of my animation is most effective?
  • Should I waive the medical bills or some other economic claim?
  • Do my “rules of the road” gain agreement with the vast majority of jurors?
  • Are my experts better than the defense experts?
  • Does the “smoking gun document” make sense to jurors?
  • Would I do better to cut two of my theories and submit only on one?

If you are looking at this list and thinking you could try a better case if you had concrete answers to the pressing questions in your own case, get in touch with us. We’ve spent the last decade figuring out how to get the best information so we can all work cases smarter and try them better.

Contact us for more information.
