How do you pit a small regulator against big tech?
Professor Jason Furman’s Digital Competition Expert Panel has rightly put regulatory competence first
The Report of the UK’s Digital Competition Expert Panel was published last week, recommending market-based policies such as data portability that are worth getting excited about. Done right, they could create new firms and digital services that are hard to imagine now but might better help us plan our lives, connect with friends, and move around. But the thrust of the report is about positioning Britain’s competition authorities for the fight that’s coming with Amazon, Facebook, Google and others. It’s going to be spectacular, and the report’s authors are well aware that regulators will need to pick their battles if they’re going to win any of them.
The European Union is looking to fine Google again for what it sees as anti-competitive behaviour, and its Competition Commissioner, Margrethe Vestager, said recently that the United States' economic model meant 'more concentrated markets'. That hurts, but the US could make things worse by giving up decades of antitrust work focused on consumer welfare in exchange for simply attacking companies that are big. Professor Tim Wu at Columbia Law School recently wrote a popular book, The Curse of Bigness, arguing that we should scrap recent antitrust practice and return to the apparently more aggressive approach advocated by Louis Brandeis, an American lawyer of the first half of the twentieth century. Elizabeth Warren, a leading candidate to be the next US President, has made the same case more coherently.
That way lies confusion, because it abandons the current focus of policy on consumers. Scrapping the consumer welfare test quickly gets one into the position of arguing that optimisation (firms learning how to do things better than others and growing as a result) is bad. Good luck becoming richer if you believe that's wrong.
Whether enormous firms sit within a social contract that binds them as it binds the rest of us is one of the arguments of our age, and Professor Furman and his co-authors know that competition and antitrust regulators need to stay out of it. Keeping regulatory authorities to the technical judgements they can actually make is the only way to maintain their legitimacy.
And that's before the difficulty of the technical questions starts to bite. Defining a digital market, identifying which firms dominate it, and deciding which jurisdiction should apply are flagged as problems throughout the review. The call for Britain to share its approach with others is really a way of saying that it can't do this alone, and neither can anyone else.
To deal with all this, the report suggests a sequencing of regulatory interventions driven by one fear and one threat. The fear is that the Competition and Markets Authority will produce too many 'false positives' or 'false negatives' as it considers digital mergers where it is hard to judge whether a firm is justifiably consolidating a market. Getting that wrong too often would undermine the authority of the regulatory regime and invite rolling attacks from Facebook lobbyists. The threat is that some firms will be designated as having 'strategic market status' and hence need to be watched closely or broken up, a power that would be held by a new digital markets unit.
The review's suggested code of conduct is the first line of defence in this sequencing. It signals new conditions for old firms, encouraging them to behave less anti-competitively before they are told to. It's a deterrent that Professor Furman and his co-authors hope will leave regulators with fewer cases to deal with, lessening the risk that they will be swamped or make crippling errors.
The suggestion that the Competition and Markets Authority retrospectively review mergers it allowed in the past is the next signal. That's not a 'let's learn from failure' recommendation but a recognition that some mergers, such as Facebook's purchase of Instagram, were a mistake, and that the firms involved should be much more cautious about making similar moves in the future. Again, that's a cheap and efficient way for a regulator to signal change without risking the cost and error of being aggressive before it needs to be.
The report also starts to divide digital markets by their features. It suggests, for example, that some industries contain 'key datasets', which may mean intervening in future according to the data held by several firms rather than the market position held by one of them. The release of public datasets is discussed as a way of setting pro-competition standards early, while a discussion of standards for new industries is about setting expectations of technological and data openness from the start, making dominance problems of the current kind less likely. Data trusts are considered as a mechanism for mature markets.
All of the above is necessary before we can think about the cool policies like data portability, which could bring more dynamism to digital services such as email and social networking than we have at the moment. But we only get there if we protect the regulator and pick the problems that economics can address.