Dissecting TCL v Ericsson – what went wrong?

Ericsson has a world-class portfolio of cellular SEPs which it is keen to license on FRAND terms, but the long-awaited decision of a California court in a case involving the Swedish company and TCL may not have correctly valued the portfolio

In December 2017 the District Court for the Central District of California issued its long-awaited judgment in the TCL v Ericsson case. Now that the dust has settled, this article examines the decision in more detail.

Ericsson’s SEPs

Ericsson’s portfolio of cellular SEPs is undoubtedly one of the strongest in the industry. The industry analyst Article One placed Ericsson fourth in its survey of 4G “Highly essential patents ranked based on ratio of high novelty patents”, a result that probably surprised no one. Article One found that Ericsson held 12% of highly essential and novel patents.

Article One also looked into what it regarded as key technologies. For LTE it identified advanced carrier aggregation as a particularly valuable contributor to the system and ranked Ericsson as a top patent owner in that area, finding that it held a 16% share of patents essential to that technology.

Fairfield Resources has looked at all cellular declared essential patents to determine which are in fact essential. It found that Ericsson held nearly 20% of patents that its reviewers judged essential to WCDMA and nearly 15% of those judged essential to LTE.

A more up-to-date report by PA Consulting into the essentiality of cellular declared patents is not publicly available, so its findings cannot be quoted, but the results also support the view that Ericsson, in terms of SEP ownership, is in the top tier.

Some critics allege that these studies were commissioned to enhance the standing of parties in negotiations and are thus biased. Article One’s study was funded by one of the major users of Ericsson’s patents, so any bias it contains is unlikely to have been in Ericsson’s favour. Fairfield’s study was also not funded by Ericsson. PA Consulting funds its studies by selling copies of the report to the industry after the research has been done; they are not commissioned by any party, and their impartiality is therefore their key selling point.

In light of the above, the decision in Unwired Planet ([2017] EWHC 2988) is surprising. It concluded that a blended global rate for Ericsson’s patents in multimode products was 0.8%. This seems, if anything, a bit on the low side: if it is assumed that the total aggregate royalty rate for a multi-mode product is around 10% of the average selling price – which is a conservative estimate – then holding between 12% and 20% of the judged essential SEPs should secure Ericsson somewhere between a 1.2% and 2% royalty. If it only gets 0.8%, then something does not add up.
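
The arithmetic behind that sanity check is simple enough to script. The sketch below uses only the assumed figures from the paragraph above (a 10% aggregate rate and a 12% to 20% judged-essential share); the numbers are illustrative, not market data.

```python
# Back-of-envelope: implied royalty = assumed aggregate rate x SEP share.
# Figures are the illustrative ones from the text, not market data.
aggregate_rate = 0.10              # assumed total SEP burden on a multimode handset
for share in (0.12, 0.20):         # Ericsson's judged-essential share per the studies
    print(f"share {share:.0%} -> implied royalty {aggregate_rate * share:.1%}")
# share 12% -> implied royalty 1.2%
# share 20% -> implied royalty 2.0%
```

On these assumptions, the 0.8% blended rate found in Unwired Planet falls below even the bottom of the implied range.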

In Unwired Planet, the explanation may be that the primary agreement that Mr Justice Birss used to extract the 0.8% figure was Ericsson’s licence with Samsung. Companies as large as Samsung have tremendous bargaining power in patent licensing negotiations. Patent licensing is subject to the same economic forces as any purchasing activity: anyone who is buying in large quantities and offers to pay upfront can drive a better bargain than the regular market participant. Questions can be asked as to whether bargaining power should exist in a world restrained by FRAND terms, but the FRAND obligation has so far only been applied by courts and regulators to restrain the upwards bargaining power of the licensor. It has not been used to constrain the downwards bargaining power of the implementer.

Samsung has also shown that it will litigate, asserting its own patents and attacking its opponents’ patents. When the sums of money at stake are in the billions, spending tens or even hundreds of millions on litigation becomes a worthwhile investment. Ericsson was the smaller company, and its share price was struggling at the time. Against that sort of opponent a carrot-and-stick approach, with an upfront cash offer backed by the threat of years of expensive litigation, is a particularly effective tactic for driving down prices. When trying to derive a regular market price for Ericsson’s patents, therefore, a licence agreement with Samsung may be a low starting point.

The TCL v Ericsson decision was released at the end of 2017. Judge Selna awarded rates from 0.45% (4G in the United States) down to 0.09% (2G in the rest of the world). These were significantly lower than Birss’s 0.8% in Unwired Planet.

The two decisions are enlightening to compare because the evidence put before the court in both cases was the same or very similar. The methods that each judge adopted were similar. So why was the TCL v Ericsson decision so different? Some of the reasons include the following:

  • Selna applied a top-down analysis as his primary valuation method. Birss rejected this approach as unreliable and instead relegated it to a cross-check.
  • In his top-down analysis, Selna used patent data which had been generated in a way that tended to reduce the Ericsson share and made some further assumptions which exacerbated that effect. Although Birss had the same data, he recognised that it was inherently biased and significantly adjusted it before using it in his cross-check.
  • Selna’s analysis excluded licences with TCL’s closest competitors, ZTE and Coolpad. These licences included a running royalty structure, so were similar in structure to the licence awarded to TCL. Instead Selna used lump-sum cross-licence agreements between Ericsson and Apple and Samsung as a basis from which to reconstruct a running royalty agreement for TCL. Birss included ZTE and Coolpad in his comparison, but found that Apple was not a good comparable for lower-end manufacturers.
  • Selna unpacked licences by determining what rate each licensee was paying as a percentage of retail price. Selna then applied that same percentage royalty to TCL, but based on the wholesale price of TCL’s handsets, not the retail price.

Top-down analysis

In TCL v Ericsson Selna began with a top-down analysis. In Unwired Planet Birss used a lot of paper in a similar exercise, but in the end found that a top-down approach was unreliable. He did use it, but only as a sanity check for his FRAND determination based on comparable licences.

A top-down analysis starts by determining what the total royalty rate for all SEPs on a mobile phone should be. That total is then divided between patent owners according to their respective shares.

This does not seem unreasonable. The problem is that it is a bit like valuing real estate by starting with the total value of all the land in the country and then dividing that value between property owners according to the size of their plot. The source of the error is obvious: a plot of land in the middle of a city is much more valuable than the same plot of land in the countryside. Applying a top-down method would significantly undervalue property in Kensington and significantly overvalue property in Preston.

One could try to correct for value, but differences in land value are not easy to explain, let alone calculate or correct for. That is why, in real estate, valuation is carried out by assessing the closest comparables (ie, the prices achieved in recent sales in the immediate area) and not by using a top-down approach.

The same is true in the IP market. We accept that patents, like land, can have very different values. Unfortunately, the top-down analysis adopted in these cases took no account of this. Although Ericsson’s patents were considered in some detail, Selna’s approach took no account of whether other patent owners held incremental patents around low-value technologies or whether they held independent, seminal patents covering key technologies. If Article One’s study, discussed at the start of this article, is correct, Ericsson might have done better in an analysis which took account of patent value.

Aggregate royalty burden

Selna started his top-down analysis with two statements made by Ericsson. In 2002 Ericsson had been part of a consortium that supported a “modest single digit percentage royalty rate” for 3G. The court took this to be 5%. It relied on a similar statement from 2008, in which Ericsson suggested an aggregate rate for LTE patents of between 6% and 8%. Applying what appears to be an estoppel approach, Selna held Ericsson to those figures.

He also decided that these rates were not aggregate rates for the technology in question, but aggregate rates for all cellular technologies in the device. However, each of Ericsson’s statements was quite clear about the technology to which it relates:

  • “the cumulative royalty rate for W-CDMA to be…”
  • “a reasonable maximum aggregate royalty level for LTE essential IPR in handsets is…”

Figure 1. Essential patents ranked by ratio of high novelty patents

The statements appear to be about the aggregate royalty for the standard that they name. They do not state a rate for all standards in a product.

Selna appears to have assumed that a typical LTE handset would always include UMTS and GSM, and that those standards can therefore be read into the LTE rate. Ignoring, for a minute, the express wording of the statements, this might be a possible interpretation if all LTE handsets automatically included UMTS and GSM and only those cellular standards. Unfortunately it is not that simple. LTE handsets typically include UMTS and GSM, but some also include CDMA2000 and/or IS95. Other LTE handsets do not include those technologies. The aggregate rate for a handset that includes CDMA technologies as well as UMTS/GSM must be different to one that does not; it would be unfair to the CDMA technology holders if CDMA were thrown in for free. Given this complexity, it is difficult to see how Ericsson’s LTE aggregate statement could have been understood by anyone in the industry to include other cellular standards as well as LTE.

The UMTS statements relied on by Selna were made 16 years ago, and much has changed since then. It also seems arbitrary to use a company’s predictions for what it hoped would be cumulative rates for all patent owners, while ignoring the announcements that each company made about its own actual rates: at the same time as the aggregate statement, Ericsson announced that on the basis of its share of the industry its LTE rate would be 1.5%. If Fairfield is correct and Ericsson holds 15% of all essential LTE patents, and the aggregate rate for LTE is 10%, Ericsson’s 1.5% statement does not seem unreasonable in retrospect. It is not clear why this 1.5% statement was not treated as better evidence of what Ericsson had represented to the world that its royalty rate would be and why the court instead set out to derive (with the benefit of hindsight) a figure for Ericsson’s rate from a prediction about industry aggregate rates.

Figure 2. Fairfield Resources UMTS and LTE judged essential findings

Once the various patent owners’ individual and aggregate statements were published, it became immediately apparent that there was a problem. If one added up the individual rates sought by each patent owner, they substantially exceeded the predicted aggregate rate. This is not surprising: each patent owner knew how many patents it had filed, but none yet knew what its competitors had filed. Each underestimated how active its competitors had been.

It was surprising that Ericsson’s aggregate royalty prediction could create what is equivalent to an estoppel. Ordinarily, the promisee must demonstrate reliance on a statement for it to have that effect. Selna seemed to agree; he referred to “TCL’s reliance on statements Ericsson made”. However, it was the carriers, not TCL, that chose which standard to adopt. At the time when Ericsson made its 3G statement, UMTS had already been selected by the carriers. LTE was selected for technical reasons after WiMax had been rolled out and failed to gain traction. Contemporary statements by the carriers about the decision to adopt LTE make no mention of royalty costs as a factor in their decision. See, for example, Verizon’s paper explaining why it chose LTE over WiMax. Verizon’s reasoning is a good benchmark because, as a CDMA2000 operator, it was the one major network that did not require backwards compatibility with UMTS, a feature which LTE had but WiMax did not. Unfettered by that constraint, Verizon was the carrier whose technology decision was most likely to be influenced by cost.

An absence of reliance on aggregate royalty predictions is not surprising; those who were in the industry at the time were well aware of the mismatch between the aggregate predictions and the individually sought rates. No one made a major choice about network technology based on aggregate predictions that did not add up.

Birss did not accept the companies’ announcements as a starting point. He stated: “In my judgment these statements have little value in arriving at the benchmark rate,” before noting the inconsistency between the proposed aggregate rate and the sum of the proposed individual rates (Paragraphs 269 and 270). This appears to be a key reason why Birss relegated the top-down approach to the status of a ‘sanity check’.

Birss also rightly recognised that the world has moved on considerably since these statements were made, explaining that “these statements do not take into account what implementers and SEP holders have actually been content to agree in the intervening years” and “compared to public statements, comparable licences are real data points”.

Selna’s reliance on these statements, compared to Birss’s dismissal of them, is the first major cause of the difference between the two decisions.

Confusing global and local rates

There is a second issue in Selna’s adoption of the aggregate statements. The companies who made the press releases were based all over the world, not just in the United States. They were discussing a global royalty rate, not a US rate. The aggregate rate being predicted is a weighted average of royalty rates across all countries.

Why is a global rate different to a single country rate and why does it matter? Assume for a moment that the world has only two countries: Country A and Country B. The implementer sells $50,000 of handsets in Country A and another $50,000 of handsets in Country B. Country A has strong patent protection. Country B has no effective patent system. If under a global licence agreement the implementer pays $1,000 in royalties on sales of $100,000, that is a global rate of 1%.

None of those royalties can be attributed to Country B, because it has no effective patent system. They must all have been paid in respect of Country A. So although the global rate is 1%, the local royalty rate paid in Country A was actually 2% ($1,000 on $50,000 of sales). In other words, local rates in countries with strong patent protection are higher than global blended rates.

Many developing countries have weak or no patent protection, but high mobile phone sales. Selna treated the aggregate rate statements above as if they were statements of the aggregate rate in the United States, rather than the blended global rate for all sales across the globe. Because the United States has a relatively strong and effective patent system, US rates should be higher than global rates. Adopting a global aggregate rate as a US aggregate rate has the effect of further lowering Selna’s calculated rates in the top-down analysis. Selna compounded that error by going on to apply a discount for countries that have less effective patent protection than the United States, which becomes a double discount.
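
Both effects can be made concrete in a short sketch. The first part reproduces the two-country example above; the second applies a hypothetical 50% rest-of-world discount (an invented figure, not one from the decision) to show how misreading the blended rate as a US rate and then discounting again compounds into a double discount.

```python
# Two-country example: equal sales, but royalties are only collectable in
# Country A, because Country B has no effective patent system.
sales_a, sales_b = 50_000, 50_000
royalty_paid = 1_000
global_rate = royalty_paid / (sales_a + sales_b)   # 1.0% blended global rate
local_rate_a = royalty_paid / sales_a              # 2.0% actually paid in Country A

# The double discount: treat the blended global rate as if it were the
# strong-protection (US) rate, then discount it again for weaker countries.
misread_us_rate = global_rate                      # should have been local_rate_a
row_rate = misread_us_rate * 0.5                   # hypothetical 50% RoW discount
print(f"global {global_rate:.1%}, local A {local_rate_a:.1%}, "
      f"misread US {misread_us_rate:.1%}, RoW {row_rate:.2%}")
```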

Assessing shares of aggregate royalty burden

Both Selna and Birss had before them the evidence of Dr Kakaes. Kakaes had arranged for a number of engineers to conduct 20-minute reviews of patents in a sample of all possible SEPs in the industry. He sought to derive from this exercise the total number of essential patent families that an implementer needed to license. This involved removing expired patents, patents which were not actually essential and other irrelevant patents.

Birss noted a particular flaw with Kakaes’s analysis. He described the exercise as “nothing more than a coarse filter” and noted “a tendency built into it in favour of increasing the number of patents in the pool deemed essential” (Paragraph 344). He found that it “errs on the side of including a patent in the deemed essential pool” (Paragraph 355).

Anyone who has tried to review an SEP will know that 20 minutes is not long enough. Patents are long, and they are not written to make for easy or enjoyable reading. To read a patent and compare it to a standard in 20 minutes with any degree of accuracy is not practicable. Kakaes accepted this; in a footnote to Paragraph 41 of his expert testimony to Birss he said that all his reviewers could do in this time was check “that the declared standard specification(s) did not provide a clear reason to rule the patent out as being essential”. In other words, this exercise might catch the clearly non-essential patents, but anything that looked like it might be related to the standard was allowed through.

However, the same rather generous standard of review was not applied to Ericsson’s patents. Those were subjected to a highly rigorous review and as a result a number of them were found not to be essential. That is to be expected – the more time one has to pick a patent apart, the more likely one is to succeed.

Selna did accept that the work done by Kakaes overestimated the total number of SEPs. While Birss cut the total number by more than half, Selna made a modest reduction of 11.4%. Given that difference, the end result is unsurprising: for each standard, Selna found nearly double the number of essential SEP families in the industry compared to the number used by Birss.

Selna also did not exclude expiring patents from the total (although he did exclude them from Ericsson’s share). Consequently, Selna found that Ericsson’s share of SEPs in each standard was much lower than the share determined by Birss. Selna’s estimate of Ericsson’s share was also significantly lower than that found by the third-party studies.
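
A toy calculation illustrates how sensitive the computed share is to the size of that denominator. Apart from the 11.4% reduction and the ‘more than half’ cut noted above, every number below is invented for illustration.

```python
# How the size of the 'total essential families' denominator drives the
# computed share. Family counts are invented; only the cuts echo the text.
raw_total = 2_000                  # families passing the coarse 20-minute filter
ericsson_families = 150            # hypothetical rigorously-reviewed numerator

for label, cut in (("Birss-style cut (>50%)", 0.55),
                   ("Selna-style cut (11.4%)", 0.114)):
    total = raw_total * (1 - cut)
    print(f"{label}: total {total:.0f} -> share {ericsson_families / total:.1%}")
# A denominator roughly twice as large halves the computed share, and with
# it the royalty allocated under a top-down analysis.
```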

In summary, Selna’s top-down approach compounded a series of errors:

  • it wrongly assumes that all patents have equal value;
  • it relies on an aggregate royalty derived from an incorrect reading of Ericsson’s aggregate statements; and
  • it derives shares of that aggregate royalty by using different filters for the things being compared.

It is unsurprising that it produced an unreliable result.

Figure 3. Rates for Ericsson’s portfolio as found by Birss and Selna

Comparables

In assessing comparable licences, more data points give a more accurate result, but some data points are more useful than others.

Birss chose Ericsson’s agreements with Samsung, Huawei, Coolpad, ZTE and RIM as reference points. As explained above, Samsung can be considered to be an outlier in this group as it is such a large licensee.

Selna chose differently and excluded ZTE. His reasons for doing so are not easy to understand. Selna accepted that ZTE, as a licensee, is comparable to TCL. Indeed, ZTE and TCL are arguably the most comparable of all of the Ericsson licensees in terms of product mix, average sales price and geographic footprint. Selna also noted that the ZTE licence included a running royalty, with regional breakdowns for China, for some countries with high gross domestic product and for the rest of the world. This was the royalty structure that Selna ultimately awarded to TCL – a percentage royalty with geographic breakdowns. On the face of it, ZTE appears to be an excellent comparable.

So why was the ZTE licence excluded? According to Selna, the trouble lay in the unpacking. Cross-licences which give a net rate need to be unpacked in order to get a one-way running royalty. The problems were:

  • Ericsson’s projected sales for ZTE were by regions which did not match the regions in the ZTE licence, which made it difficult to unpack the net rates in the licence into two one-way rates; and
  • the one-way rate that ZTE appeared to be paying for UMTS was higher than Ericsson’s offered rate for UMTS.

So Selna rejected Ericsson’s unpacking of the licence into one-way rates. TCL had not provided any unpacking at all. Even if it was difficult to unpack the ZTE net rates into two one-way rates, the net rates gave a floor for what TCL should pay. Instead of using them as such, Selna dismissed the ZTE licence from his analysis entirely.

Figure 4. Total number of SEPs in each technology as found by Birss and Selna

Selna also excluded Coolpad because almost all of its sales are in China – he called it a “local king”. The concern with a Chinese local king is that it may produce an artificially low rate when compared to a global licensee (eg, see the finding of Birss at Paragraph 583: “rates are often lower in China than the rest of the world”). The rates ultimately awarded by Selna for global player TCL were even lower than those paid by Coolpad, which suggests that either something in the analysis was wrong or that the decision to exclude Coolpad was based on a false assumption. Once again, rather than exclude Chinese local kings altogether from the comparison, it might have made more sense to regard their rates as a floor for TCL.

The problem of ad valorem rates with high and low average sales prices

Selna included Ericsson’s agreement with Apple as a comparable; Birss did not. If one is using percentage (ad valorem) licence rates, then a licence with Apple presents difficulties, because there is an enormous difference between the average selling price (ASP) of Apple devices and that of others. Apple devices sell for many hundreds of dollars; the iPhone X costs over $1,000. TCL’s ASP is low, at less than $50.

As an example of the difficulty of using Apple as a comparable for ad valorem rates, imagine that Apple were to pay a $1 royalty for each iPhone X. As a percentage rate that would be 0.1%. If TCL were to pay a $1 royalty on one of its $20 devices, that would be 5%. TCL would complain that paying a 5% royalty is not fair if Apple only pays 0.1%.

If TCL were instead to pay the same percentage royalty as Apple, it would pay only $0.02 per device, to Apple’s $1 per device. In that situation Apple would complain.

This is a well-known problem in patent licensing and is nothing new. Take Vertu – the Nokia subsidiary that made phones using precious metals and gemstones. These phones sold for tens or hundreds of thousands of dollars. Measured as a percentage of average selling price, the cellular SEP royalties that Vertu paid were rather low. At the other end of the scale, patent owners must accommodate companies like Sierra Wireless, which seek SEP licences for cellular modules that sell for only a few dollars. The patent owners try to allow for this problem by applying caps or floors to their ad valorem rates or using per-unit rates which vary for different product categories.
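
A minimal sketch of that cap-and-floor mechanism is set out below. The 1% headline rate and the $0.20 floor and $4.00 cap are invented for illustration; real licensing programmes publish their own figures.

```python
# An ad valorem rate bounded by a per-unit floor and cap (illustrative values).
def royalty_per_unit(asp, rate=0.01, floor=0.20, cap=4.00):
    """Percentage royalty on the selling price, clamped to [floor, cap] dollars."""
    return min(max(asp * rate, floor), cap)

for asp in (5, 50, 400, 1_000, 20_000):   # $5 module ... Vertu territory
    print(f"ASP ${asp:>6}: royalty ${royalty_per_unit(asp):.2f} per unit")
# The cap stops a Vertu paying thousands of dollars per phone; the floor
# stops a $5 module paying only a few cents for the same technology.
```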

Some competition lawyers argue that per-unit rates, caps and floors are per se anti-competitive. Selna agreed, arguing that “there is no basis for essentially discriminating on the basis of the average selling price” (page 113). Given how widespread per-unit licensing of intellectual property is across all industries (eg, music, brands and pharmaceuticals) this finding may have wider implications than the court considered at the time.

In the context of Qualcomm and Apple the opposite argument is being used. The Federal Trade Commission argues against Qualcomm that it is the use of percentage royalty rates that is per se anti-competitive. Sometimes you just cannot win.

Birss excluded Apple as a comparable, recognising that its uniquely high ASP made it an unsuitable comparator. Selna recognised that there was a problem with Apple’s ASP, but surprisingly went on to find that Apple was similarly situated to TCL. That, perhaps less surprisingly, led him to award a low percentage rate.

Forecast or actual licensee revenues?

Many of these licences were cross-licences, some with lump-sum payments. The unpacking process requires, as a first step, a determination of forecast licensee revenues. This allows an assessment of the effective ad valorem or per-unit rate that the parties had in mind for sales over the period of the licence.

As the lump-sum royalty is agreed at the start of the licence, no one can predict with any certainty at that point in time what those licensee revenues are going to be over the next five years. The licensee might be tremendously successful and sell many more units than expected or it might have reached its peak and start to decline. It is not just volume that is hard to predict: the licensee’s ASPs or its 2G/3G/4G penetration levels might rise or fall out of line with expectations.

No party that entered into early fixed-sum 4G licences correctly predicted, for example, that 4G would penetrate the market as quickly as it did. Indeed, third-party market analysts significantly revised their LTE penetration forecasts each year from 2012 to 2014 as LTE penetration levels again and again exceeded previous forecasts. As a result, deals during that period (like several of the deals in this case) almost certainly underestimated LTE penetration levels and thus underestimated LTE sales revenues.

Industry negotiators calculate the rate of a prior licence as the rate unpacked in light of the most reasonable forecast assumptions available at the time of that licence. If courts with the benefit of hindsight use actual sales data when unpacking these licences – rather than contemporaneous forecast sales data – net rate or lump-sum agreements become too risky to enter into. Should sales beat expectations, the licence will turn out to be a low benchmark to which the licensor will be held in future. This would not just affect the smartphone industry: net rate or lump-sum deals are common across many industries.
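
A short sketch with invented figures shows why the choice of sales data matters when unpacking: the same lump sum produces a very different effective rate depending on whether contemporaneous forecasts or hindsight actuals are used as the divisor.

```python
# Unpacking a lump sum into an effective ad valorem rate. All figures invented.
lump_sum = 100_000_000                 # fixed payment agreed for the licence term
forecast_revenue = 5_000_000_000       # the parties' estimate at signing
actual_revenue = 12_000_000_000        # hindsight sales after the 4G boom

print(f"rate on contemporaneous forecasts: {lump_sum / forecast_revenue:.2%}")  # 2.00%
print(f"rate with hindsight actuals:       {lump_sum / actual_revenue:.2%}")    # 0.83%
```

The licensor priced a 2% deal; unpacked against hindsight sales it looks like a 0.83% deal, and it is the lower figure that then becomes the benchmark.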

Selna had access to Ericsson’s internal estimates. He noted that Samsung’s actual 4G sales far exceeded Ericsson’s estimate (page 73). This is not surprising – as explained above Ericsson was not the only company to underestimate the rate of 4G market penetration or Samsung’s success. Selna appears not to have recognised this and chose to use actual sales data (taken from the International Data Corporation (IDC)). As Samsung’s actual 4G sales significantly exceeded Ericsson’s expectations, that gave rise to a low rate.

There is a further factor. Sales figures from analysts such as IDC are retail sales revenue. Retail revenues are the amounts that the final customers pay for handsets in a store. However, royalties are typically not calculated on retail sales revenue – they are calculated on wholesale revenue. Wholesale revenues are the revenues that a handset vendor receives from a carrier. A retail mark-up can be 50% of the wholesale price. Although in his final order Selna allowed TCL to pay royalties on its wholesale price, he used retail sales as the royalty base when unpacking the other licences to determine a percentage rate. That led to a further significant reduction in calculated rates.
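
The effect of mixing royalty bases is easy to quantify. Assuming the 50% retail mark-up mentioned above (the actual mark-up varies), deriving the percentage from retail and applying it to wholesale quietly shrinks the per-unit royalty by a third:

```python
# Derive a percentage rate from retail prices, then apply it to wholesale.
wholesale = 100.0
retail = wholesale * 1.5               # assumed 50% retail mark-up
royalty_paid = 1.0                     # per-unit royalty under the prior licence

rate_from_retail = royalty_paid / retail          # 0.67% of retail price
applied_to_wholesale = rate_from_retail * wholesale
print(f"rate {rate_from_retail:.2%}: per-unit royalty falls "
      f"from ${royalty_paid:.2f} to ${applied_to_wholesale:.2f}")
```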

Confirmation bias

Selna found confirmation for his conclusions in the fact that the rate derived from the top-down analysis sits at the bottom end of the range derived from the comparable licences analysis. In principle it is good to check results. However, if both results are low for the reasons described above, their agreement does not confirm that either approach is correct.

Unfortunate effect

The court’s application of these methods to TCL has had an unfortunate effect on other licensees. By delaying taking a licence, TCL secured one of the best deals without assuming any of the risks that its competitors took in agreeing early terms or lump sums.

The step that neither court took was to look at its end result – that is, to go back to the full range of rates paid and risks adopted by other licensees and ask: is the latecomer getting an unfair advantage over its competitors? Are those licensees who negotiated early when there were no benchmarks, and who agreed to pay when they had no certainty as to the market success of the technology, being discriminated against?

The court’s approach will not help the uptake of licensing in the industry today. Implementers who do not take a licence gain the advantage of being able to leverage, with perfect hindsight, the most favourable licensing terms obtained by those who have gone before. If holding out gives latecomers an equally good or better deal, then holding out is incentivised and those who take a licence early, without litigating, risk discrimination.

At the time of writing, the Unwired Planet appeal has not been handed down. However, the issues being addressed on appeal are not expected to affect the portfolio valuation approaches discussed above.

Action plan

The cases of TCL v Ericsson and Unwired Planet provide an ideal illustration of how different courts faced with similar evidence can reach very different conclusions on FRAND issues. Some lessons that we can learn from these examples are:

  • FRAND determination through a comparable licence analysis is highly sensitive to variations in the inputs. It is possible to bias the result by rejecting certain data points as possible outliers before conducting the analysis.
  • The best comparator for a running royalty licence is another running royalty licence. Unpacking a lump-sum licence agreement to derive a running royalty is less reliable.
  • When unpacking a lump sum, it is important to use contemporaneous expected sales data, rather than actual sales with the benefit of hindsight.
  • Top-down analyses are less robust than real market data. If using a top-down analysis it is important to treat the patentee’s portfolio and the industry as a whole in the same way.
  • It is wrong to derive a percentage royalty from retail price data and then apply that same percentage to an implementer’s wholesale price.
  • Choosing the right court (or avoiding the wrong court) remains a critical part of litigation strategy.

Richard Vary is a partner at Bird & Bird, London, United Kingdom
