Choosing the right tools for better IP decisions

There are many tools designed to help IP owners in the value creation process, but selecting the right ones can be a tough task. Considering the key issues can help to simplify the process.

IP professionals can adopt various strategies when pursuing their business goals; however, these are often subject to various constraints, including:

  • the effort of managing a strong portfolio that is representative of their products and those of their competitors;
  • the cost of operations, including prosecution, maintenance, programme preparation and negotiations; and
  • the speed and quality of these operations necessary to ensure that opportunities for monetisation are realised.

At the heart of an IP strategy is the need to secure access to resources that help to achieve the rights holder’s goals. Patent analysis requires subject-matter experts and is therefore fairly subjective. To work within the constraints of an IP strategy, a balance needs to be struck between the skills of the experts engaged, the data that is available and the use of increasingly powerful analytical tools.

IP value defined

IP value is realised when intellectual property is used to achieve business goals. To enable such use, the intellectual property must first be created or acquired. Methods for developing IP value include:

  • applying R&D innovations for patents, trademarks and copyrights, or retaining trade secrets;
  • creating a well-maintained patent portfolio with strong litigation potential through inventions, transactions and culling;
  • understanding the economic impact of the patent portfolio, including:
    • its landscape of technologies, white space and over-patenting;
    • its potential use and ease of detection; and
    • its position in the market; and
  • generating evidence of use for monetisation discussions.

It can take years of investment to create and nurture intellectual property that is valuable to a company. At the highest level, an IP team should aim to:

  • generate income;
  • maintain competitiveness;
  • improve portfolio quality;
  • reduce expenses; and
  • manage risk.

It is in pursuit of these goals that owners realise the value of their patents.

Traditional analysis with subject-matter experts

Focusing on patents, the most comprehensive method of evaluation is by way of a subject-matter expert, who is not only knowledgeable in the relevant technology but is also familiar with patent practice. There are many benefits to investing in such analysis, and some aspects of a patent owner’s goals may be achievable only through an experienced and imaginative subject-matter expert. Applying expert knowledge to a broad licensing programme can lead to a larger return on investment. In addition, if an expert reads the portfolio in full, he or she will be able to identify all high-value assets through which the patent owner can strengthen its portfolio.

Despite these benefits, investing in a subject-matter expert can be costly. Therefore, a combination of analytical tools and expert knowledge is usually the most efficient and effective approach.

Table 1. Example use cases for IP analytics tools

  • Portfolio ranking – better than counting the number of patents
  • Portfolio landscape – clusters of technologies, features and application areas
  • Portfolio sort – automate a predefined taxonomy using classifiers
  • Portfolio model – extract a taxonomy from the patents themselves
  • Market assessment – overlap all companies in a technology space
  • Competitive assessment – overlap all competitors in a market
  • Acquisition analysis – overlap of buyer and acquisition portfolio
  • Crowded space or white space – identify high and low-density patented areas within a portfolio
  • Find patents to buy and sell – prioritise patents that are similar to models of patents you need
  • Save money – identify patents with concerns
  • Evidence of use candidates – prioritise which patents to read first
  • Document analysis – analyse patents and other documents together

Efficient goal-driven analysis

Many analytical tools focus on patents and can help to create IP value by providing a wide variety of data in relation to use cases (see Table 1). Patent owners should select an IP tool that accounts for and uses all of their defined parameters and methodologies, and which ensures that the results provided correlate to real-world use cases. The onus is on the end user to determine how to achieve these outcomes.

Patent owners developing solutions for use cases should employ a combination of subject-matter experts, data, tools and methodology, and should consider the following:

  • Subject-matter experts have various degrees of skill and creativity; however, they may be limited by their data, tools and experience.
  • Data includes:
    • public metadata about the patent owner’s IP assets;
    • technical data about the products of interest; and
    • aggregate proprietary data about the patent owner’s previous success in managing a portfolio, finding evidence of use or conducting other business.
  • Companies need tools to process this data. In the English-language IP tools market, over 50 commercial IP tools are actively marketed, as well as a handful of generic big data analysis tools through which analysts can quickly create custom workflows and visualisations.
  • For each use case, effective methodologies differ according to the combination of data and tools. Each methodology produces an outcome, which is interpreted by the subject-matter expert and is subject to a particular quality level, depending on the likelihood of mistakes and omissions. As such, the cost of an outcome ranges according to its speed and quality.

IP tool requirements

IP tools vary according to their capability, data, search engine and programmability.

Capabilities

IP tools typically offer a broad range of functions, including the ability to:

  • normalise and refresh datasets;
  • identify facets within the datasets;
  • record ownership;
  • sort (cluster assets with equal weight);
  • classify multiple models by score;
  • prioritise patents with regard to known information (ie, what to read first);
  • analyse the text of a patent and its claims;
  • source similar patents;
  • match similar product information;
  • identify patents that are:
    • declared standard essential;
    • involved in litigation; or
    • available for transactions; and
  • produce visualisations.

Most IP teams require a commercial IP tool that includes, at minimum:

  • industry agnosticism – the information and analytics apply across all markets and technologies;
  • global patent metadata – a database of patent and trademark office data from various jurisdictions;
  • semantic search – a search function that differentiates word meaning;
  • visualisations – either an assortment or a key set of visualised data and analytics that enables quick interpretation and understanding; and
  • projects – a method of storing user datasets and the parameters of the analytics workflow.

Beyond these features, IP tools can differ greatly. Most will specialise in one or two aspects that address a particular set of use cases.

Data

In addition to global patent metadata, an IP tool may cover various other datasets, including:

  • business metadata;
  • company associations, including:
    • mergers and acquisitions;
    • renamings; and
    • parent companies;
  • global patent reassignment metadata;
  • global prosecution metadata;
  • global litigation metadata;
  • public or licensed literature;
  • product information (eg, trademarks); and
  • royalty trends.

Tools can vary further according to their normalisation and refresh rate of these datasets.

Search engine

An IP tool’s search engine is crucial. Patents use jargon and are written in a variety of styles and languages, spanning many industries, fashions and best practices in IP matters over several decades. In addition, when a patent is written, the industry terms that will come to be associated with the innovation may not yet be established. The search capability of an IP tool can range in sophistication, which affects the likelihood of incorrect or missed results.

Search methodologies can be roughly arranged in the following order of increasing sophistication:

  • keywords;
  • phrases;
  • n-grams;
  • Booleans;
  • taxonomies;
  • synonyms;
  • semantics;
  • lexicon (grammar);
  • lemmatisation (word inflection);
  • ontology;
  • jargon; and
  • language (eg, Russian, English or Chinese).

In addition, many IP tools now advertise their use of artificial intelligence (AI), including machine learning, neural networks and other big data algorithms. Datasets that use AI may include taxonomies, classifiers, natural language databases, concept databases and jargon databases.

Programmability

Programmability enables IP tools to adjust their search parameters and algorithms, often with the aim of creating push-button solutions. Some use cases can be realised with such solutions, such as “find more patents like this patent”. However, to validate an IP tool, patent owners may need to experiment with each capability regarding the data, search engine and adjustments of weighting, algorithms, sensitivities and assumptions. Although some IP tools have adaptable analytics, users should consider whether an analytic view provided by a tool actually addresses their use case. Perhaps no metadata or algorithm can be correlated to their examples of successful outcomes. In any case, it is up to the end user of the IP tool to determine what patent metadata is correlated to its business goals with an acceptable level of accuracy.

Available tools

A recent inventory recorded over 50 commercial IP tools in the English-language market. These compete for different areas of IP practice, with two or three tools likely to dominate the market share in each area. The market is also divided by jurisdiction, particularly US and global analytics. New entrants continue to appear and either compete in existing areas or create new ones. Examples of these areas and the tools available in each are:

  • global patent database (IFI CLAIMS);
  • prosecution (LexisNexis, Anaqua and Wisdomain);
  • taxonomy (WAND, Derwent DWPI, Relecura and Cipher);
  • marketplace (Derwent, Innography, Questel, PatSnap, AcclaimIP, FreePatentsOnline and Google);
  • literature (InnovationQ and Innography NPL);
  • litigation (Lex Machina, Darts-IP and Docket Navigator);
  • discovery (Perceptiv, among many others);
  • royalties (ktMINE and RoyaltyRange); and
  • maintenance (QuantifyIP).

Valuable tools

A valuable IP tool is one that is validated to produce analytics that correlate to previous business success.

Keeping track of the variations in IP tools is considerable work, with existing tools continuing to improve as new tools enter the market. The amount of time that a company spends keeping track of this depends on its needs. For most common use cases, a company will find a solution that it trusts and continue to use it until it falters or the company’s needs change.

Validation is key to selecting an IP tool. While some tools may look snappy, the IP team must be convinced that the results are correct and meaningful. A starting point is to read the tool manufacturer’s product literature and watch its YouTube marketing content. Next, consider a demo. Prepare a test bench of data that is known to correlate to the company’s recent business success. Then consider a trial. Correlate the IP tool outcome based on the test bench and rate its price, accuracy, usability and performance. Ask the following questions:

  • Would the IP tool have accelerated your result?
  • Would it have been more economical than using only subject-matter experts?
  • Would the outcome have been as good as the one that you recently achieved using a different solution?

If the result is positive, license the tool for a while and run it simultaneously with existing operations for a variety of project corner cases in order to develop trust in the combination of data, tool and methodology.

Portfolio ranking or apportionment example

In some use cases patent owners may need to know how their portfolios are ranked in various areas of the market, in order to determine exposure, transaction opportunities or apportionment for licensing negotiations. Alternatively, the aim may be to obtain a broader understanding of the market in preparation for a marketing or financial report.

Traditionally, portfolio ranking involves:

  • creating taxonomies of market areas;
  • employing a subject-matter expert to brainstorm phrases in these areas;
  • running an algorithm to search for n-grams of these phrases in patent metadata; and
  • counting the number of patents from each portfolio found.

Once the dataset has been prepared, the process effectively creates a push-button solution.
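As a rough illustration, the counting step of this workflow can be sketched in a few lines of Python. The taxonomy, phrases and patent records below are entirely hypothetical:

```python
from collections import Counter

# Hypothetical taxonomy: market area -> phrases brainstormed by an expert
taxonomy = {
    "wireless charging": ["inductive coupling", "resonant charging"],
    "battery management": ["cell balancing", "state of charge"],
}

# Hypothetical patent metadata: (patent id, owner, title + abstract text)
patents = [
    ("US1", "Acme", "A resonant charging coil for inductive coupling"),
    ("US2", "Acme", "Cell balancing circuit for battery packs"),
    ("US3", "Rival", "Estimating state of charge in lithium cells"),
]

def rank_portfolios(taxonomy, patents):
    """Count, per market area, how many patents from each owner match a phrase."""
    counts = {area: Counter() for area in taxonomy}
    for area, phrases in taxonomy.items():
        for pid, owner, text in patents:
            if any(phrase in text.lower() for phrase in phrases):
                counts[area][owner] += 1
    return counts

print(rank_portfolios(taxonomy, patents))
```

Real workflows would match n-grams against full patent text rather than simple substrings, but the push-button character of the approach is the same: once the phrases are set, the counts come out automatically.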

However, this approach faces challenges in terms of quality and maintenance. As patents use jargon, brainstorming keywords may limit the patents found to the jargon of the subject-matter expert. The phrases must be regularly updated for the models to remain current. Phrase searches have low accuracy, as many false positives occur; this is addressed only by narrowing the phrases further and removing false positives using, for example, Cooperative Patent Classification codes – thus adding to the workflow. In terms of breadth, machine learning can be applied to find candidate patents that match the phrases. However, to ensure accuracy, these patents must be validated by the subject-matter expert before they can be added to the taxonomy models. Therefore, subject-matter experts are continually needed to maintain such an approach.

Another approach to portfolio ranking involves weighting many factors. Some commercial tools support this through the programming of a formula based on patent metadata. This can improve the ranking, provided that the methodology also overcomes the challenges outlined above.

Let us consider an example formula that combines popular metrics with others that are not typically available in IP tools, such as:

  • patent age;
  • forward references;
  • fundamental patents;
  • previous licensing programmes;
  • global coverage; and
  • detectability.

These metrics can be combined with the phrase approach to create less of a push-button solution. However, the key to determining which metric to use depends on previous business success.
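A minimal sketch of such a weighted formula, assuming each metric has already been normalised to a 0–1 scale. The metric names, weights and example values here are hypothetical and would need to be calibrated against examples of previous business success:

```python
# Hypothetical weights over normalised per-patent metrics (sum to 1.0)
weights = {
    "age_fit": 0.25,         # closeness to the technology's adoption sweet spot
    "fwd_refs": 0.25,        # non-self-citing forward references (scaled)
    "fundamental": 0.20,     # early position in its technology thicket
    "licensed_before": 0.10, # similarity to previously licensed patents
    "global_coverage": 0.10, # family members in major jurisdictions
    "detectability": 0.10,   # classifier score for ease of detection
}

def score(patent_metrics, weights):
    """Weighted sum of normalised metrics; a patent missing a metric scores 0 on it."""
    return sum(weights[k] * patent_metrics.get(k, 0.0) for k in weights)

example = {"age_fit": 0.8, "fwd_refs": 0.6, "fundamental": 1.0,
           "licensed_before": 0.0, "global_coverage": 0.5, "detectability": 0.7}
print(round(score(example, weights), 3))  # prints 0.67
```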

Patent age

The potential value that is associated with patent age depends on the speed of market adoption for the relevant technology area. For example, if the innovation contributes to the personalisation of emojis on a smartphone, the timeframe for market adoption will be less than one or two years, as this can be implemented in a neural network’s core using software. If the innovation is an automotive improvement, market adoption can take years, as the automobile model cycle is typically seven to 10 years. In addition, the peak value of the innovation is realised when market adoption hits a high volume.

Considering evidence of use documents at TechInsights, the average age of a patent when evidence of use is created is 12 years (Figure 1). This represents thousands of patents that read on semiconductor, electronic and software implementations.

Figure 1. Age (years) of patent when evidence of use is documented

An IP tool can provide the priority year of a patent. To use this metric more accurately, the curve may be created for a range of technologies. The sweet spot for the age of a patent in each technology would be used for the formula.
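One simple way to model such a sweet spot is a triangular score that peaks at the target age. In this hypothetical sketch, the 12-year peak echoes the evidence-of-use average above, while the spread is an assumption that would be fitted per technology:

```python
def age_score(age_years, sweet_spot=12.0, spread=8.0):
    """Score in [0, 1] peaking at the technology's sweet-spot age.
    sweet_spot and spread are illustrative defaults; in practice they
    would be fitted per technology from historical evidence-of-use data."""
    return max(0.0, 1.0 - abs(age_years - sweet_spot) / spread)

assert age_score(12) == 1.0   # at the sweet spot
assert age_score(16) == 0.5   # halfway down the curve
assert age_score(40) == 0.0   # long past peak value
```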

Forward references

Forward references are commonly used to measure a patent’s importance. Patents that are created early in a technology thicket are often referenced by patents that participate in the ongoing smaller refinements of the thicket. This is reflected in the patent value, as early patents in a technology thicket are apportioned more value in licensing negotiations. The rule of thumb is that roughly the first 10% of patents in a thicket comprise 90% of the thicket value.

Forward references should be reduced to non-self-citing references, in order to exclude results where a company references its own technology. In this way, the forward references are limited to the frequency of market references to the company’s innovations.
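The filtering step can be sketched as follows; the citation records and owner names are hypothetical:

```python
# Hypothetical citation records: (citing patent, citing owner, cited patent, cited owner)
citations = [
    ("US9", "Rival", "US1", "Acme"),
    ("US8", "Acme",  "US1", "Acme"),   # self-citation: excluded from the count
    ("US7", "Other", "US1", "Acme"),
]

def non_self_forward_refs(patent_id, citations):
    """Count forward references whose citing owner differs from the cited owner."""
    return sum(1 for citing, citing_owner, cited, cited_owner in citations
               if cited == patent_id and citing_owner != cited_owner)

print(non_self_forward_refs("US1", citations))  # prints 2
```

In practice, the owner comparison would run on normalised assignee names (after mergers, renamings and subsidiaries are resolved), which is exactly the kind of company-association data discussed above.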

To determine whether forward references are a good measure, consider the data. In one study, more than 3,700 US assets involved in litigation over a five-year period were inspected (Figure 2). Only 8% of these had no forward references, and two-thirds of the patents asserted had 11 or more forward references.

Figure 2. Litigated US assets forward citations

Figure 3. Company portfolios’ forward citations

As such, forward references – ideally, non-self-citing forward references – can be used as an indication of potential patent value.

Fundamental patents

As an extension of forward references, it would be more accurate to consider individual thickets when assigning potential value. For example, how would you know what number of forward references represents the threshold for the earliest 10% of patents in a thicket?

One approach is to use an IP tool to sort candidate patents into thickets, then score the patents found in each thicket by their priority date. For example, Figure 4 represents a landscape of 18 technology thickets that form the basis of cloud technologies for six leading companies.

Figure 4. Technology peaks

By allocating higher scores to the patents with early priority dates in each thicket, the patent owners are ranked more appropriately than they would be by counting patents or generic forward references.
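A hypothetical sketch of this scoring, where the earliest patent in a thicket receives the highest score:

```python
# Hypothetical: thicket id -> list of (patent id, owner, priority year)
thickets = {
    "containers": [("US1", "A", 2008), ("US2", "B", 2012), ("US3", "A", 2015)],
}

def fundamental_scores(thicket):
    """Rank patents within a thicket by priority year; earliest scores highest.
    Linear scoring is an assumption - a steeper curve would better reflect the
    rule of thumb that the first 10% of patents carry 90% of the value."""
    ordered = sorted(thicket, key=lambda p: p[2])
    n = len(ordered)
    return {pid: (n - i) / n for i, (pid, owner, year) in enumerate(ordered)}

print(fundamental_scores(thickets["containers"]))
```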

Part of a previous licensing programme

Companies typically know their best patents as these will have been part of a successful licensing negotiation or litigation and will have survived the usual pushback of a validity challenge.

The list of such patents tends to be short. Although the validity of a patent is subjective, other patents that are in similar art to the best patents may have higher potential value.

IP tools typically cannot access a list of these patents, as most patent licensing is conducted privately. Those that are publicly available are often involved in litigation and IP rights challenges. As such, this metric may be most effective through the private use of a company’s confidential information.

Global coverage

Companies should aim to negotiate a licence in one country before settling for a global licence. To do this, the licensed patents should be in families with patents in many jurisdictions.

These jurisdictions depend on the relevant market. For example, the major markets for electronics are typically considered to be the European Union, the United States, Japan, China, South Korea and Taiwan. For tobacco vaporisers, every jurisdiction is likely to be included.

IP tools can provide this data from simple families and their members. What constitutes global coverage may be based on the applicable market and what the company considers essential to negotiate a global licence. Table 2 demonstrates this in relation to the technology thickets highlighted in Figure 4.
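A sketch of how such a coverage percentage might be computed from family membership data; the family records and the set of major jurisdictions are assumptions standing in for a real market definition:

```python
# Hypothetical: family id -> jurisdictions with granted or pending family members
families = {
    "F1": {"US", "EP", "JP", "CN"},
    "F2": {"US"},
    "F3": {"US", "KR", "TW"},
}

MAJOR_NON_US = {"EP", "JP", "CN", "KR", "TW"}  # assumed electronics market definition

def pct_with_global_coverage(families, threshold=2):
    """Share of families with members in at least `threshold` major non-US jurisdictions."""
    covered = sum(1 for juris in families.values()
                  if len(juris & MAJOR_NON_US) >= threshold)
    return 100 * covered / len(families)

print(pct_with_global_coverage(families))  # F1 and F3 qualify
```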

Table 2. Percentage of cloud patents held by leading technology corporations in at least two major jurisdictions outside the United States (portfolio subset among the 18 cloud technology thickets)

  • Google – 32%
  • Oracle – 23%
  • Microsoft – 22%
  • Amazon – 21%
  • IBM – 19%
  • VMware – 3%

Detectability

When a subject-matter expert rates a patent, the aspects rated often include:

  • potential use in industry;
  • ease of detection;
  • ease of workaround;
  • validity concerns; and
  • economic value of the enabled feature.

Companies using IP tools can greatly benefit from historic scores of all of these aspects. However, those that can be most successfully modelled are detectability and validity concerns.

Detectability generally correlates to the art group. This contrasts with potential use in industry, which requires deeper analysis of the specific elements of each claim.

Using IP tools to score detectability requires methodology, data and an IP tool with a trainable classifier for the type of data in hand. The methodology must recognise that a different model will be required for each art group considered. The training data must be created manually, either from historic ratings and phrases (which is problematic, as this will result in many false positives) or from rated patents (which is preferable, as these provide rich text and class codes).
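To make the idea concrete, here is a toy naive Bayes-style classifier in pure Python – not any particular commercial tool's method – trained on hypothetical rated claim texts for a single art group:

```python
import math
from collections import Counter

# Hypothetical training data for one art group: (claim text, detectable? 1/0)
training = [
    ("output voltage measured at the external pin", 1),
    ("signal visible on the device display", 1),
    ("internal cache eviction policy", 0),
    ("scheduler heuristic hidden in firmware", 0),
]

def train(examples):
    """Per-class word counts; one model would be trained per art group."""
    counts = {0: Counter(), 1: Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(counts, text):
    """Pick the class whose word distribution best explains the text
    (add-one smoothing to handle unseen words)."""
    def loglik(label):
        total = sum(counts[label].values()) + len(counts[label])
        return sum(math.log((counts[label][w] + 1) / total)
                   for w in text.lower().split())
    return 1 if loglik(1) > loglik(0) else 0

model = train(training)
print(predict(model, "voltage measured at the pin"))  # prints 1 (detectable)
```

A commercial tool's trainable classifier would use far richer features (full text plus class codes, as noted above), but the workflow is the same: curate rated examples per art group, train, then score the remaining portfolio.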

There are existing tools on the market to assist with validity concerns, which can be used for prosecution, negotiation preparation and defence. These tools may deal with claim words unsupported by the specification, claim breadth, literature searches or other aspects.

Onus on the user

IP teams are required to deliver on a variety of business goals that are pursued through the development and use of potentially valuable intellectual property. Subject-matter experts are key to developing and assessing patent portfolios, which are subjective in nature, and IP tools can help them to increase efficiency and improve on decisions. Although some IP tools offer push-button solutions, most must be combined with data and methodologies to address the use cases that further the owner’s aims. When selecting an IP tool, the onus is on the user to ensure that the data, analytics and visualisations convey understanding that correlates to its recent business success and advances its ability to make better IP decisions.

Action plan

It falls to the user of an IP tool to ensure that the data and analytics presented by the tool correlate to examples of previous success of the company’s use case and further advance its business goals. When selecting an IP tool, companies should consider the following steps:

  • Prepare a test bench of data known to correlate to recent business success.
  • For each IP tool candidate, read the white paper and watch its YouTube marketing content.
  • Consider a demo, then a trial. Correlate the trial outcome to the test bench and rate its price, accuracy and performance.
  • If the result is positive, license the tool for a while and run it simultaneously with existing operations for a variety of project corner cases in order to build trust in the combination of data, tool and methodology.
Martin Bijman is director, IP products, at TechInsights, Ottawa, Canada
