The last one to the party: why AI has left legal departments behind

Artificial intelligence is having a profound impact on the tech world, across a range of disciplines. While in-house IP departments have been slow on the uptake, the potential of this new technology is astounding.

In March 2016 a computer program called AlphaGo defeated Lee Sedol, one of the world’s strongest Go players, in a five-game match. For decades, many experts argued that computers would never be able to master anything as complex as Go – an ancient board game, exponentially more complicated than chess, in which two players compete to capture territory – because doing so was believed to require too much non-programmable human intuition. However, AlphaGo exceeded all expectations and defeated the most advanced human Go players by relying primarily on a field of machine learning called deep learning, which draws inspiration from the natural functions of the human brain. Beyond the game of Go, deep learning and other advanced analytical frameworks have dramatically improved performance in a number of industries, including search engines, self-driving cars, shopping recommendations, fraud detection, online advertisements and medical diagnostics.

While many industries have already figured out ways to harness the power of sophisticated analytical tools, the legal industry is lagging behind. Despite access to large amounts of clean data and impressive computing power, there are few examples of law firms or corporate legal departments that have meaningfully leveraged data. If you survey the IP departments of Fortune 500 companies and ask them about the return on investment (ROI) they have generated from data analytics, you are likely to get puzzled looks. Perhaps this is why there has been a lack of venture capital investment or highly valued start-ups in the analytics space for intellectual property, even though the IP industry generates billions of dollars every year.

By now, thousands of articles have been written about innovative approaches to leveraging data and computation, but the terminology has become increasingly blurred. Read enough articles and blogs and you will likely hear a handful of terms which are rarely defined and often used synonymously: ‘artificial intelligence’ (AI), ‘big data’, ‘statistics’, ‘data analytics’, ‘data science’, ‘deep learning’, ‘machine learning’ and ‘predictive analytics’. At the most general level, these all attempt to convey the concept of leveraging data and computation to perform a task better, where ‘better’ means faster, cheaper, more accurate or any combination thereof. For the sake of consistency, this article uses the term ‘data science’ to refer to any technique which applies mathematical operations to electronic data, ranging from the most basic (eg, charts of revenue over time) to the most advanced (eg, automated image classification using convolutional neural networks distributed across thousands of processors).

Why are legal departments failing to leverage data science?

Many legal practitioners believe that they could improve efficiency and deliver better outcomes if they only had more engineering support or expensive analytical tools. They are wrong. Just talk to legal departments with large IT teams and big budgets for tool subscriptions. They (and the authors) will tell you that the biggest obstacles have nothing to do with a lack of technical proficiency, algorithmic sophistication or computational resources. The widespread availability of cloud-based data science services means that for less than $1,000, anyone can spin up a distributed deep learning algorithm. The question is whether such tools will actually influence behaviour and improve outcomes. Most of the time, this does not happen for four main reasons.

Efficiency – public enemy number one

In many instances, data science can help departments to streamline processes and reduce costs by surfacing inefficiency. However, this benefit is lost on an industry where law firms continue to bill clients by the hour. The billable hour model teaches lawyers to measure the value of their work by the amount of time spent rather than the impact delivered to the client; reducing the amount of billable time per matter is thus counterproductive. That is why law firms are often the last to purchase subscriptions to productivity tools or software that automatically flags defects in work product (eg, contracts or patent applications), and why they are hesitant to invest in modern information technology. This culture is deeply ingrained in many lawyers at law firms and corporate legal departments. Lawyers have never had a reason to learn Six Sigma techniques or Agile principles, and books such as The Lean Start-up were not required reading in law school. As long as profits per partner reign supreme, lawyers will continue to resist data science and carry that mindset into their corporate roles.

Nobody is keeping score

In The Signal and the Noise, Nate Silver discusses how political pundits usually perform no better than random chance when it comes to predicting outcomes of elections or other political events, yet they continue to be regarded as experts. Why? Because nobody is actually tracking how often the pundits are correct or incorrect. In the same way, measuring performance beyond hours billed is rare in the legal sector. Few legal departments systematically measure the performance of their law firms and even fewer measure the performance of their internal teams. It is true that some departments track and report certain metrics to upper management, but these are usually so-called ‘vanity metrics’ which are easy to manipulate and often bear little relationship to the outcomes that really matter. For example, most IP departments regularly track the number of filed and issued patents and the overall size of their portfolios compared to other companies; but those numbers are neither important nor actionable. How many departments measure the ROI for their patent portfolio? Why is it that ROI is routinely evaluated for every corporate asset except the company’s patent portfolio? Without the measurement of key metrics, there can be no accountability or improvement over time.

Figure 1. Establish a baseline

Getting the right people in the right seats

Law firms and corporate legal departments have not changed for a long time. They largely have the same structures, the same experience and educational backgrounds, and the same approaches to planning and execution. This may have worked under the old paradigm, but it is not suitable for organisations that want a data-driven culture. Author Jim Collins described the first step in building a great organisation as “getting the right people on the bus, the wrong people off the bus and the right people in the right seats”. Most legal departments have lawyers and paralegals, but you would be hard pressed to find any with even one full-time data scientist. Data scientists typically come from non-legal industries and possess a unique combination of technical skills (statistics, maths and programming), often with a strong background in academic research. They understand data (eg, database design, pipelines and extract, transform, load), are well versed in the scientific method and can apply it to most business problems. Because the core concepts and lingo of data science are foreign to legal professionals, implementing them requires a significant amount of education and buy-in from stakeholders. This means that it is insufficient merely to hire a data scientist – he or she must also be empowered to change existing processes and challenge the status quo. This is easier said than done in traditional legal organisational structures, where the most senior lawyers have the ultimate say. It requires a commitment from leadership to be open to new ideas and to test longstanding assumptions and paradigms. Senior lawyers must be willing to defer to the data scientist in areas where they are not domain experts. Even the most talented data scientists will fail in an environment where lawyers are permitted to circumvent the process or dictate technical solutions.

Living in a world of ambiguity

In all fairness to the legal profession, certain inherent characteristics make it difficult to leverage data science. These include:

  • a lack of objective evaluation criteria;
  • unstructured data;
  • a preponderance of confidential information; and
  • subjective and inconsistent judges and juries.

Any experienced trial lawyer will tell you that courtroom success is a function of both your story and how you tell it. You need to craft your narrative, fine-tune your arguments, develop themes and ultimately persuade a judge or jury. This is the antithesis of applications with clean, objective outcomes, such as labelling an image or declaring the victor in a game of chess (or indeed Go). The challenge is exacerbated by the unpredictability of the judicial system. In 2016, 80% of district court decisions in patent cases were appealed, while 53% of appealed cases were modified in some way (PwC’s 2016 patent litigation survey). The fact that a matter goes to trial at all means that uncertainty exists, and it usually takes several years to reach a full resolution. This is not to say that there is no uncertainty in other domains. For example, ImageNet – the primary dataset used to compare machine vision algorithms – must also deal with ambiguity. Three experts might disagree on the type of river cat shown in an image, but these tend to be borderline cases that are dropped, leaving millions of examples with objective labels. We do not have that luxury in legal practice for many desired use cases, such as optimising patent quality or deploying the best litigation strategy.

Core principles

While we are firm believers in the power of data science to transform legal departments, generating value in the legal industry is challenging for all the reasons stated above. How can legal departments overcome these significant challenges and begin to realise the efficiency gains and improved outcomes that data science can deliver? The first step is to hire (or contract) the right people: people who understand data. Once data scientists have been hired and placed in roles in which they are empowered to make change, the next step is to establish a playbook for implementation. From numerous discussions with law firms, corporate legal departments and diverse non-legal verticals, as well as our own experience, we have identified a set of core principles which every department should follow before developing or purchasing any data science tool. These principles do not cover actual tool building (ie, how to choose optimal databases, algorithms and designs). Rather, they provide a framework within which tools should be developed. We use the generic term ‘tool’ to refer to any deliverable intended for repeated use that leverages data, statistics or machine learning. When followed, these five principles will dramatically increase the likelihood that a tool will add value to your organisation and generate a positive ROI:

  • Define your outcome of interest – what metric are you trying to improve?
  • Establish a baseline – how well has your team been performing relative to the outcome of interest?
  • Test your assumptions – what are the easiest, fastest or cheapest ways to improve performance?
  • Set falsifiable goals – what is your clearly defined performance goal?
  • Track performance – how effective was the tool once it was deployed?

To illustrate the application of these core principles, we will walk through an example that is relevant to every IP department that manages a patent portfolio: the process of collecting and evaluating an invention disclosure form (IDF).

Most companies with IP departments follow a similar process for converting ideas into intellectual property. An inventor conceives a novel idea which he or she believes should be patented and submits that idea to the IP team by filling out an IDF. A member of the IP team (usually an in-house patent attorney) reviews this and decides whether to draft and file an application based on the idea described in the IDF. The sooner this decision is made, the sooner the application can be filed with the relevant patent office. In highly competitive domains, the IP team has a strong incentive to dispose of IDFs as soon as possible in order to obtain the earliest priority date for the company.

Outcome of interest

In theory: Before initiating any data science tool development, it is critical to specify at least one ‘outcome of interest’ – the concept or metric which you believe the tool can improve. In other domains, this is known by different names: ‘target’ in the machine learning community, ‘dependent variable’ in the sciences and ‘performance metrics’ in many business contexts. Some examples of common outcomes of interest in other domains include monthly revenue for e-commerce companies, click-through rate for online advertising and waiting times for call centres. When defining your outcome of interest, keep in mind three useful tips:

  • Ambiguous concepts need extra attention – certain outcomes of interest, such as revenue or billable hours, are fairly straightforward, with longstanding measurement rules and little debate around core definitions. Other concepts are more challenging. Consider, for example, an attempt to measure the quality of patent claims during prosecution. As a rule of thumb, the more subjective the outcome of interest, the more time you should take to clearly define it.
  • Measurement rules should be as objective as possible – in a perfect world, your team should be able to articulate objective rules (ie, rules with no ambiguity in how to interpret or apply them) which explain how to measure the outcome of interest. This is important not only to reduce the likelihood of disagreement between stakeholders later in the process, but also to facilitate conversion of the rules into machine-readable code.
  • All stakeholders must sign off – before proceeding to the second core principle, it is critical that all stakeholders (ie, generally one representative from each group of people whose support is needed for the success of the tool) agree to the measurement rules. This is analogous to agreeing to the rules of the game before starting to play.

In practice: In the running example, we defined our outcome of interest as IDF disposition timeliness. Even though timeliness may seem like a straightforward outcome of interest, it actually requires considerable deliberation. For example, what actions count as ‘disposition’? What is the definition of ‘timeliness’? Should the goal be binary (ie, all IDFs disposed of after 30 days are late) or continuous (ie, in-house counsel lose a point for every day that an IDF persists without being disposed of)? Are we interested in percentages (ie, dispose of 80% of IDFs in a timely manner) or raw counts (ie, no more than 10 late IDFs per month per counsel)? After many hours of debate with stakeholders, we settled on the following definition: the number of IDFs which are late at any point in a given month, with ‘late’ defined as not disposed of within 30 days of the inventor’s submission date. For example, an IDF submitted on March 10 and disposed of on July 2 in the same year would count as late in April, May, June and July. In addition to writing the logic in prose, we expressed it in structured query language (SQL) to provide full measurement objectivity and transparency.
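
As a rough illustration of how such a rule can be made fully objective and executable, the following minimal sketch applies the ‘late at any point in a given month’ logic to a few hypothetical IDF records. The records, column names and 30-day grace period are illustrative assumptions; the authors’ production query ran against their own IP management system.

```python
import pandas as pd

# Hypothetical IDF records; real data would come from the IP management system.
idfs = pd.DataFrame({
    "idf_id": [101, 102, 103],
    "submitted": pd.to_datetime(["2017-03-10", "2017-03-25", "2017-04-02"]),
    "disposed": pd.to_datetime(["2017-07-02", "2017-04-10", None]),  # None = still pending
})

def late_idfs_in_month(df, month_end, grace_days=30):
    """Count IDFs that are late at any point in the month ending on month_end.

    An IDF becomes late once grace_days have passed since submission without a
    disposition, and it still counts as late in the month it is finally disposed of.
    """
    deadline = df["submitted"] + pd.Timedelta(days=grace_days)
    month_start = month_end.replace(day=1)
    ever_late = df["disposed"].isna() | (df["disposed"] > deadline)  # missed the deadline at all
    late_within_month = deadline < month_end                          # lateness starts in or before this month
    open_during_month = df["disposed"].isna() | (df["disposed"] >= month_start)
    return int((ever_late & late_within_month & open_during_month).sum())

for month_end in pd.to_datetime(["2017-04-30", "2017-05-31", "2017-06-30", "2017-07-31"]):
    print(month_end.strftime("%Y-%m"), late_idfs_in_month(idfs, month_end))
# IDF 101 (submitted 10 March, disposed of 2 July) counts as late in April to July,
# matching the worked example above.
```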

Figure 2. Set falsifiable goals

Establish a baseline

In theory: After defining your outcome of interest (including clear measurement rules and stakeholder sign-off), the next step is to establish a baseline which will reflect historic team performance with respect to the specific outcome of interest. In other industries, important outcomes of interest, such as quarterly revenue or number of subscribers, are rigorously tracked from a company’s inception. However, this is seldom the case with IP-related outcomes of interest. Establishing a baseline reflecting historic performance is critical for three main reasons:

  • Tests data and definitions – to establish a baseline, you must possess important data in a machine-readable format and be able to apply your measurement rules to the outcome of interest. This will expose any shortcomings in your data, pipelines or measurement rules, and will enable you to address these sooner rather than later.
  • Guides prioritisation – often, teams do not actually know how they are performing relative to an outcome of interest. Without a clear understanding of the team’s actual performance in different functional areas, it is extremely difficult to prioritise which areas to focus on. If it turns out that historic performance regarding a particular outcome of interest is already in line with desired targets, it may not need further attention. Establishing a baseline helps teams to focus on the outcomes of interest which need improvement.
  • Provides framework for ROI – most importantly, establishing a baseline provides a clean framework for estimating the ROI of the project after it is completed because it enables a straightforward before and after picture.

In practice: To establish a baseline under the measurement rules defined in the first core principle, we executed the SQL logic against our IP management system (IPMS) database, with additional code to account for historic time segments. This gave us month-on-month IDF disposition timeliness scores for individual team members and for the team as a whole. The numbers confirmed our expectation that the team was not meeting performance targets, which meant that IDF disposition timeliness warranted attention, including potential tool development.
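
A simplified sketch of what such a baseline query might look like follows, using an in-memory SQLite table in place of the IPMS database. The table name, columns, sample records and query are hypothetical; the production schema and SQL are not reproduced here.

```python
import sqlite3
import pandas as pd

# Hypothetical IPMS extract: dates are stored as ISO-format text strings.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE idf (idf_id INTEGER, counsel TEXT, submitted TEXT, disposed TEXT);
INSERT INTO idf VALUES
  (101, 'alice', '2017-03-10', '2017-07-02'),
  (102, 'alice', '2017-04-02', NULL),
  (103, 'bob',   '2017-03-20', '2017-04-05');
""")

# Count IDFs that were late at any point in a given month, per counsel:
# not disposed of within 30 days of submission, lateness beginning in or before
# the month, and not already disposed of before the month began.
baseline_sql = """
SELECT counsel, COUNT(*) AS late_idfs
FROM idf
WHERE DATE(submitted, '+30 days') < DATE(:month_end)
  AND (disposed IS NULL OR disposed > DATE(submitted, '+30 days'))
  AND (disposed IS NULL OR disposed >= DATE(:month_start))
GROUP BY counsel
"""
print(pd.read_sql(baseline_sql, con,
                  params={"month_start": "2017-04-01", "month_end": "2017-04-30"}))
```

Running the same query over each historic month yields the month-on-month baseline per team member described above.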

Test your assumptions

In theory: At this point in the process, we have defined an outcome of interest and established a baseline. In some instances, teams may decide that the potential tool is no longer needed. When this occurs, it is generally for one of three reasons:

  • The stakeholders could not agree on key definitions and measurement rules for the outcome of interest;
  • Insufficient data was available to establish a baseline; or
  • Baseline performance was actually better than expected.

In other instances, the team may decide that the proposed tool is worth pursuing. If this is the case, the next step is to test your assumptions. An assumption is anything that must hold true in order for the proposed tool to serve as the optimal way to improve the outcome of interest. Here, it is important to address two core questions which borrow heavily from the principle of a minimum viable product (MVP) outlined in The Lean Start-up by Eric Ries:

  • Cheapest, fastest, easiest – given what we now know about the outcome of interest and the historic baseline performance, the primary assumption to test is whether the proposed tool is actually the optimal (ie, cheapest, fastest and easiest) solution for the problem. A useful exercise is to gather stakeholders and encourage them to brainstorm lighter-weight alternatives. The assumption that the proposed tool is the optimal solution can be validated only if stakeholders cannot poke holes in the proposed tool or find lighter-weight alternatives.
  • Prototype before building – almost all data science tools for legal departments must be integrated into existing workflows in order to add value. You do not need to wait until the tool is fully built to determine whether it will integrate cleanly into an existing workflow. Rather, explore mock-ups with fake data, or wireframe or whiteboard with end users to determine whether they would use the tool as intended. If you build out the tool with real data at this stage and it turns out that the tool does not deliver the intended result, you will have wasted time and effort.

Figure 3. Tracking progress

In practice: Our core assumption was that if team members had access to IDF timeliness data, they would be able to easily monitor their performance and improve IDF disposition time. To test this assumption, we first conducted a series of meetings with stakeholders and brainstormed alternative ways to drive down the number of late IDFs. Throughout these sessions, the team repeatedly cited the lack of access to information. Historically, patent counsel simply did not have a straightforward way to quickly determine the number of pending IDFs without writing an SQL query. The team hypothesised that a dashboard with basic information about each patent counsel’s IDF docket would help to reduce lateness. At this point, rather than building a dashboard, we first drew a basic mock-up on a whiteboard and met with counsel. We asked them whether the information could be easily integrated into their existing docket management workflow and whether it would provide sufficient information to enable them to decrease their rate of late IDF dispositions. All of the feedback was positive and we left these sessions with a high degree of confidence that building an intuitive dashboard would be the cheapest, fastest and easiest way to improve IDF timeliness.
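
For illustration only, the kind of per-counsel docket view such a dashboard might surface could look something like the sketch below. The data, names and 30-day threshold are invented; the real tool read live data from the IPMS.

```python
import pandas as pd

# Hypothetical open-IDF docket as of a given review date.
today = pd.Timestamp("2017-05-15")
docket = pd.DataFrame({
    "counsel":   ["alice", "alice", "bob"],
    "idf_id":    [104, 105, 106],
    "submitted": pd.to_datetime(["2017-04-01", "2017-05-01", "2017-03-28"]),
})
docket["days_open"] = (today - docket["submitted"]).dt.days
docket["status"] = docket["days_open"].apply(lambda d: "LATE" if d > 30 else "on time")
print(docket.sort_values(["counsel", "days_open"], ascending=[True, False]))
```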

Set falsifiable goals

In theory: The final step before commencing tool development is to set falsifiable goals regarding the extent to which the outcome of interest should improve after the release of the tool. It is imperative that the conditions for achieving or falling short of the goal be unequivocal so that there is no room for debate regarding the success or failure of the tool. This step is important for three primary reasons:

  • Addresses potential ROI – by setting a realistic goal in terms of improvements in the outcome of interest over the historic baseline, leadership can make a more informed decision about whether there is sufficient ROI to build (or purchase) the tool.
  • Improves contingency planning – when teams declare ahead of time what success and failure look like, they are more likely to surface potential pitfalls that threaten success. This level of preparation and planning facilitates contingency plans and minimises the likelihood that something will go wrong.
  • Helps team morale – outside of licensing negotiations or litigation, there are few opportunities for IP teams to declare unarguable team wins. Setting falsifiable goals with respect to an important outcome of interest provides an opportunity for the team to rally around well-defined objectives and celebrate victories.

In practice: Before commencing work on the tool, we spent time articulating realistic goals for the proposed IDF dashboard to help reduce late IDF disposition. We set monthly targets for the number of late IDF dispositions, which amounted to a 60% reduction within one quarter, and we also brainstormed the potential pitfalls that threatened to derail the project. We decided that team buy-in, prioritisation and training were the most critical components. As a result, we spent considerable time working with cross-functional teams to address these issues. We found that this helped us to execute a smooth deployment with clean integration into existing workflows.

Track performance

In theory: By this point, you have not only completed the first four core principles, but also built and deployed the tool to your team. This likely required many days of work, overcoming technical challenges and maintaining close communication between data scientists and end users. It is critical at this point to resist the urge to deem the mission accomplished. Remember: the goal was not to simply build a tool, but rather to improve performance with respect to the outcome of interest over the baseline. Thus, you must continue to track performance of your outcome of interest after you release the tool. If the tool is having the intended effect, you should see improvements in your outcome of interest in line with your falsifiable goals. If not, something unforeseen occurred that should be immediately diagnosed and rectified.

In practice: After deploying our interactive dashboard, which enabled patent counsel to quickly visualise IDF dockets, we continued to track the number of late IDF dispositions. Over the course of three months, we saw a dramatic decrease in late IDF dispositions which exceeded our team’s goals.
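
In concrete terms, the post-deployment check is simple arithmetic against the goal set earlier. The sketch below uses placeholder numbers rather than the team’s actual figures, assuming a hypothetical baseline of 50 late IDFs per month and the 60% reduction target described above.

```python
# Placeholder figures for illustration; not the authors' actual data.
baseline_late_per_month = 50             # hypothetical pre-deployment average
goal = baseline_late_per_month * 0.4     # falsifiable goal: a 60% reduction

post_deployment = {"month 1": 38, "month 2": 24, "month 3": 17}  # illustrative counts
for month, late in post_deployment.items():
    status = "goal met" if late <= goal else "goal not yet met"
    print(f"{month}: {late} late IDFs ({status}, target <= {goal:.0f})")
```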

Potential applications of data science in intellectual property

In our running example, the optimal tool to improve IDF disposition timeliness turned out to be a dashboard built on a relational database and SQL. This approach is not fancy. It did not require cutting-edge tools, advanced engineering or machine learning; but it was the most efficient way to improve performance for an important task common to most IP departments. However, this is not to say that there are no opportunities for more advanced data science applications.

By following the five core principles outlined above, departments can establish a solid foundation on which data science can consistently add measurable value. Staying with the IDF example, patent counsel spend considerable time deciding whether to approve or reject IDFs. By leveraging natural language processing frameworks – which have already proven revolutionary in domains such as language translation – much of the work involved in evaluating IDFs could be automated. Imagine a tool which takes the text of an IDF as input and compares it to a reference set comprising all previous IDFs submitted within your organisation, all published scholarly articles and all published patent applications across all major global jurisdictions. The tool outputs a score reflecting the IDF’s degree of novelty relative to all documents in the reference set and provides links to the most relevant reference documents. This could dramatically reduce the time that patent counsel spend reviewing IDFs and also improve the quality of their decisions.
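
As a loose sketch of the underlying idea – not the authors’ tool, and far simpler than a production prior-art search – a novelty score could be approximated by comparing an IDF’s text against a reference corpus using TF-IDF vectors and cosine similarity. The documents and wording below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy reference set standing in for prior IDFs, articles and published applications.
reference_docs = [
    "method for caching search results on a distributed key-value store",
    "apparatus for wireless charging of mobile devices using resonant coils",
    "system for ranking advertisements based on predicted click-through rate",
]
new_idf = "technique for caching query results across a cluster of cache servers"

vectorizer = TfidfVectorizer(stop_words="english")
ref_matrix = vectorizer.fit_transform(reference_docs)   # vectorise the reference set
idf_vector = vectorizer.transform([new_idf])            # vectorise the incoming IDF

similarities = cosine_similarity(idf_vector, ref_matrix)[0]
nearest = similarities.argmax()
print(f"novelty score: {1 - similarities[nearest]:.2f}")
print(f"closest reference: {reference_docs[nearest]!r} "
      f"(similarity {similarities[nearest]:.2f})")
```

A real system would use far richer text representations and a vastly larger corpus, but the structure – score the new disclosure against everything that came before and surface the nearest neighbours – is the same.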

After an IDF is approved, converted into a patent application and filed with a patent office, the application is assigned to an examiner and prosecution of the application commences. ‘Prosecution’ is the correspondence between an applicant and an examiner in the course of procuring a patent grant. Consider the difficulty in evaluating the quality of patent prosecution at scale (ie, maintaining the quality of patent claims while achieving high grant rates and accounting for time and cost). Advanced data science applications could be built using existing technology to analyse changes made to claim language throughout prosecution and cross-reference these findings with all of the relevant data – correspondence between the applicant and the examiner, and metadata on the examiner, jurisdiction and art unit – to help measure and track prosecution performance. Over time, these data science applications could identify sufficiently strong patterns in the data to suggest optimal claim language for a grant or the arguments most likely to overcome examiner rejections. There are already commercially available tools that utilise neural networks to recommend words that will maximise the probability of a patent application being assigned to a particular art unit with a higher grant rate. Given all of the available patent and litigation data, the potential for data science applications in intellectual property seems limitless.
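
One small building block of such prosecution analytics might look like the following sketch, which simply quantifies how much a claim changed between filing and grant using a word-level diff. The claim text is invented; a real system would also join in office-action correspondence and examiner metadata.

```python
import difflib

# Hypothetical claim text as filed and as granted, for illustration only.
claim_as_filed = ("A method comprising receiving a query and "
                  "returning results from a cache.")
claim_as_granted = ("A method comprising receiving a query over an encrypted channel, "
                    "validating the query, and returning results from a distributed cache.")

filed_words = claim_as_filed.split()
granted_words = claim_as_granted.split()

matcher = difflib.SequenceMatcher(None, filed_words, granted_words)
print(f"claim similarity: {matcher.ratio():.2f}")  # 1.0 would mean the claim was granted as filed
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag != "equal":
        print(tag, filed_words[i1:i2], "->", granted_words[j1:j2])
```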

Realise the transformative power of data science

Five months before AlphaGo’s historic victory over Lee Sedol, the program took down another opponent, Fan Hui, a three-time European Go champion. Like Lee, Fan was confident that he would easily defeat AlphaGo … until he did not. After Fan lost his match, the DeepMind team asked him to join the project as an adviser and he agreed to become AlphaGo’s practice partner. As Fan continued to play AlphaGo and watch its performance improve, he also noticed dramatic improvements in his own play. When he started training with AlphaGo, Fan was ranked 633rd in the world; but after only a few months, he climbed into the 300s. This kind of improvement would normally take years for a professional Go player. In the same way that AlphaGo helped Fan become a better player, data science is poised to help legal departments make better decisions and deliver better outcomes.

Before any change can occur, legal departments first need to overcome a number of organisational and cultural barriers which stand in the way of success. They need to allocate headcount for data scientists and empower them to educate team members and challenge the old ways of doing things. Legal departments must also establish and adhere to a set of core principles which will provide the foundation for successful implementation. Many other industries have already realised the transformative power of data science. It is about time the legal industry joined the party.

Action plan

Legal departments have had a difficult time leveraging data science, for a number of cultural and organisational reasons. They need to undergo a mental shift and consider the following points in order to reap the benefits of a data-centric culture:

  • Get the right people – invest in hiring people who have a deep understanding of data and how to apply it in different contexts. Such people often come from non-legal industries and are not the traditional legal hire.
  • Get the right people in the right seats – hire data scientists into the right roles so that they feel empowered to educate the team and challenge the status quo.
  • Keep an open mind – many elements of a data-driven culture contravene the traditional approaches which have existed in the legal industry for a long time. The team (especially leadership) must commit to learning and adapting to a new way of planning and execution.
  • Define your outcome of interest – figure out what metric you are trying to improve.
  • Establish a baseline – understand how well (or poorly) the team has been performing relative to the outcome of interest.
  • Test your assumptions – determine the best (easiest, fastest, cheapest) ways to improve performance.
  • Set falsifiable goals – articulate clear goals for performance so that there is no room for disagreement as to when these goals are met or missed.
  • Track performance – after deploying a tool, monitor changes in your outcome of interest to determine whether the tool has generated meaningful ROI.

Jay Yonamine is head of data science and Jeremiah Chan is legal director in the global patents department of Google Inc, Mountain View, United States

The views expressed in this article are those of the authors alone
