In the midst of intense Cold War hostilities, the Soviet early-warning system monitoring US activity sounded an alert. Five nuclear missiles were on their way; the onset of World War III was in the hands of Soviet Air Defence Forces Lieutenant-Colonel Stanislav Petrov. Petrov's judgment of the situation – based on the nature of the detected threat and the severity of the consequences of retaliating – held him back. The alert turned out to be a false positive. Disaster averted.
As we head towards 2020, it is both anachronistic and telling that this story – from 1983 – so effectively demonstrates both the potential of and the limitations to automating human judgment.
Every day, we look for new ways to take humans out of the loop.
Call centres - once a stalwart of the modern age - may soon become relics of the past, with their crackly phone lines, disjointed sentences and interminable wait times.
Google’s recent announcement of Duplex, its natural-language assistant, demonstrated an astonishing ability to make phone calls, schedule appointments, and converse naturally with humans on the other end of the line. And when they say natural, they mean it: from parsing broken, ungrammatical sentences to using pauses, ums and ahs.
The potential for applications, both domestic and professional, is wide-ranging. Helpdesks, sales calls, tutoring - if these can be automated away, what place do humans have in an electronic future? What value do we actually bring to the table?
As Petrov's experience shows, systems are only as good as the humans who build them.
It's important to remember that even the most sophisticated AI assistants - like Duplex - are not able to employ emotional or moral discretion.
Despite incredible advances in machine learning, human judgment and discretion remain complex and poorly understood.
They involve subconscious, interdependent layers of assessing other humans. We constantly balance risks and benefits in some unknown way, employ empathy while maintaining a sense of broader context. We are susceptible to bias, vested interests, and error; at the same time, we employ emotional intelligence, nonverbal communication and cultural mores. All of these things make our capacity for judgment and discretion uniquely human. Or so we tend to think.
Increasingly, we can simulate certain aspects of what we would consider a human's emotional capacity. With access to social media profiles, we can automate personality judgments better than a person’s friends. “Artificial emotion” is being actively developed and researched to process and interpret facial expression and tone. Researchers are also starting to explore the nuanced relationship between prediction (in the conventional sense of statistical decision theory) and judgment (in the sense of making decisions when the payoff is uncertain).
While the call centre might be an endangered species, certain realities of professional life will stay constant for the foreseeable future.
Legal disagreements aren't going away any time soon. Counterparties will continue to endlessly change their minds about transaction structures. Commercial considerations will ebb and flow day-to-day, and no system is even close to being able to respond intelligently to that.
That's not to say there won't be improvements. You'll find yourself able to respond to these changes faster than ever - in fact, that's what we do at Lexico, by helping you generate, review and draft documentation as fast as possible.
But ultimately, it's you who's responding, not the computer. You're the one considering the consequences of those changes. You're the one assessing your client's strategic position and appetite for risk. You weigh the relevant short-, medium- and long-term goals, and determine whether the counterparty’s motives can be trusted. That, ultimately, is why you will remain truly valuable to your clients.