The EU’s Android Mistake

The European Commission announced this morning that its preliminary finding in its investigation of alleged anti-competitive practices by Google in relation to its Android operating system is that Google is indeed breaching EU rules. The action from the EU is misguided and unnecessary, but it will likely be disruptive to Google and have several unintended consequences anyway.

A quick primer

Note: I’m including links to three relevant Commission documents at the end of this piece, in case you want to read the sources.

As a brief primer on the basis for the Commission’s action: having a dominant market share is not itself grounds for intervention, but abusing that dominant position is. The argument is that Google abuses its dominance by leveraging its high market share in mobile operating systems to force OEMs to pre-install Google services on their devices in return for being able to use the package of Google mobile services including the Play Store, search, and so on. Specifically, the Commission has three objections here:

  • That Google forces OEMs who wish to license Android with the standard Google apps to pre-install the Google Search and Chrome apps and set Google search as the default
  • That Google won’t allow OEMs to sell phones using this flavor of Android as well as flavors based on AOSP
  • That Google pays OEMs to exclusively pre-install the Google Search app.

The key to the EU’s finding that Google has dominant market share is a narrow definition of the relevant market here. Instead of treating mobile operating systems as a whole (or even smartphone operating systems) as the relevant market, the Commission has chosen to use “licensable operating systems” as the basis for its determination that Google has dominant market share. In other words, this isn’t about Google’s dominance of consumer mobile operating systems, but about its dominance of the market for operating systems that can be licensed by OEMs. That’s a really important distinction, because it leads to a finding of much higher market share than were the Commission to consider this from a consumer perspective. Specifically, the Commission says that Google has over 90% share on this basis, whereas its consumer market share in the EU is well under that threshold in most markets, including the five largest markets.

This narrow definition also means that the main class of companies the Commission is seeking to protect here is not consumers but OEMs and alternative providers of search and browsers for mobile devices. Clearly, the Commission has some belief that it would be protecting consumers indirectly as well through such action, but it’s important to note the Commission’s primary focus here.

OEM choice

If the Commission’s main focus is on OEMs rather than consumers, it’s worth evaluating that a little. The reality is that OEMs clearly want to license the GMS version of Android, because that’s the version consumers want to buy. As Amazon has demonstrated, versions of Android without Google apps have some appeal, but far less than those versions that enable Google search, Gmail, Google Maps, and so on. Vestager’s statement alludes to a desire by at least some OEMs to use an alternative version of Android based on AOSP (presumably Cyanogen), but doesn’t go into specifics. Are there really OEMs who would like to ship both forms of Android in significant numbers, or is their complaining to the Commission just a way to push back on other aspects of Android licensing they don’t like?

It’s certainly the case that OEMs and Google have a somewhat contentious relationship around Android, and Google has exerted more power in those relationships over recent years, but the main reason for the change in leverage is that Android OEMs have been so unsuccessful in differentiating their devices and hence making money from Android. Inviting the Commission to take action may be a roundabout way to change the balance of power in that relationship, but it’s not the solution to OEMs’ real problems.

Consumer choice

Here’s the critical point: this initial finding is just the first step – the ultimate outcome (assuming the Commission doesn’t materially change its findings) is that the EU will impose fines and/or force changes to the way Google licenses Android. Specifically, it would likely require the unbundling of the GMS package and the forced pre-installation of Chrome and search, much as Microsoft was forced in the past to provide a version of Windows without Windows Media Player bundled in and later to provide a “ballot box” option for consumers to install the browser of their choice.

The big question here is, of course, whether this would make much of a difference in a world where consumers are already free to install alternatives and set them as defaults if they choose to. There are several competing search engines and browsers in the Play Store today. Of these, the three most prominent search engines (Bing, Yahoo, and DuckDuckGo) each have 1-5 million downloads, while alternative browsers have been more popular:

  • Firefox – 100-500 million installs
  • CM Browser – 10-50 million
  • Opera – 100-500 million
  • Opera Mini – 50-100 million
  • Dolphin – 50-100 million.

On the one hand, then, there seems to be very little demand for alternative search engines on Android smartphones downloaded through the Play Store. And that shouldn’t surprise us, given that the Commission documents also tell us Google search has over 90% market share in the EU. As is the case elsewhere in the world, Google search is the gold standard, and there’s very little reason for most consumers to install an alternative. However, the small number of consumers who want to can do so today.

When it comes to browsers, it’s clear that there is more interest in alternatives, although in fairness many of those downloads of alternative browsers likely happened before Google introduced Chrome to Android and made it broadly available through OEMs. But, again, it’s clear that many consumers have already taken the opportunity to download alternative browsers under the current system. Would materially more consumers install these alternatives under a forced unbundling arrangement, and would the benefits to consumers and/or the browser makers outweigh the damage done to Google’s business through such action?

The Microsoft history

It’s inevitable that there will be comparisons between this case and the EU’s earlier cases against Microsoft. The Windows Media Player case took three years from the formal start to its preliminary decision (although the investigation started well before that), and by the time the process worked itself all the way through, the outcome was essentially irrelevant. The market had moved on in such a way that the focus of the case was entirely misguided, and the effect on the market minimal. There’s a danger that the same thing happens here – the case takes years to complete, and by the time it’s completed the competitive dynamics have changed to an extent that things have either sorted themselves out or competitive worries have moved to an entirely different sphere.

One of the reasons for this is that competition and market forces in general had largely taken care of the issue in the interim, and that’s the key here – these markets are so fast moving that any regulatory intervention is likely to take longer and be less effective than simply allowing market forces to take their course. It’s hard to avoid the sense that the EU case is an outgrowth of European antipathy towards big American tech companies rather than a measured response to real abuses of dominance.

The irony of AOSP

One last quick point before I close. The great irony in all this is that Google is being hammered in part because it has always open-sourced Android. The existence of AOSP is the crux of the Commission’s second objection to Google’s behavior with regard to Android. Google’s claim to openness is being used as a stick to beat it with. In this sense, being less open would have exposed Google to less criticism from the EU for being anticompetitive. That irony can’t be lost on Google, which could potentially resolve this concern by simply discontinuing AOSP. That certainly isn’t the outcome the Commission wants (indeed, it seems to smile on the AOSP project in some of its comments today), but it’s an example of the kind of unintended consequences such action can have.

Links to relevant documents

The Commission has this morning published three separate documents in relation to this proceeding – here are links if you want to read the source material:

Facebook’s Sharing Problem

Last week, Amir Efrati at The Information wrote about the decline in “original sharing” at Facebook. That’s a reference to the more personal type of posts that people might share using the status box on the service, as opposed to sharing links or other less personal information or status messages. The data shared in the article suggests that this original sharing was down 21% year over year in mid-2015, and down around 15% year on year more recently.

The interesting question here is whether this matters, and why. The article suggests that this is the most important type of sharing on Facebook, because these personal posts bring the most engagement. That’s likely true, and I think there’s also an element of FOMO (fear of missing out) associated with knowing what friends are really up to which drives Facebook usage. To the extent that friends are no longer sharing this personal information on Facebook, that reason for using its apps starts to go away.

However, what’s increasingly clear is that Facebook has evolved from a social network to a content hub over the last several years. Yes, it’s a content hub where the content you see is largely driven by what those you’ve deemed “friends” (whether they really are such or not) choose to share. But increasingly the driver of which specific things you actually see is Facebook’s algorithm, which is driven far more by your interests than by your friends per se. And much of the content you’re consuming is likely not those personal posts but articles (perhaps increasingly hosted on Facebook itself), videos (including the recently introduced live videos), and other forms of content which aren’t personal in nature.

That evolution from a social network to a content hub has coincided with a growth in many other forms of more personal communication, most notably messaging. Facebook clearly saw this trend coming several years ago, and acquired Instagram and WhatsApp while also turning Messenger into a standalone product. But it also failed to acquire Snapchat, and faces competition from a number of other products in this area. To the extent that more personal communication is happening outside of the News Feed, Facebook remains a participant in several ways.

But I also wonder if we’re seeing something of a maturing of the Facebook experience. I always come back to a really insightful post written by venture capitalist Fred Wilson back in 2011. In the context of Twitter, he wrote:

“Let’s remember one of the cardinal rules of social media. Out of 100 people, 1% will create the content, 10% will curate the content, and the other 90% will simply consume it.”

In some ways, what’s important about Facebook isn’t that it is seeing lower sharing, but that it ever had such high sharing in the first place. Even at the lower rates of sharing Facebook is seeing today, the Information article says “57% of Facebook users who used the app every week posted something in a given week, the confidential data show. But only 39% of weekly active users posted original content in a given week”. That’s much higher than the numbers cited by Wilson, even if it’s come down a bit recently. I’d also argue that, to the extent that users are sharing URLs or videos rather than personal content, they’re simply shifting into that curation category, and that still benefits Facebook.

The article talks about live video as one of Facebook’s responses to the sharing problem, but I’d argue that Facebook is working on lots of other stuff that can be seen as a response too. I’ve written elsewhere that the Notify app Facebook launched a while back is an example of this. The new Videos tab Facebook is introducing is another example. It’s clear that Facebook has been planning to deal with this issue for some time now.

This gets back to another thing I’ve written about previously, which is the role Facebook plays in our daily lives. From a jobs-to-be-done perspective, I’ve argued that the problem Facebook really solves is killing time. At some point, I suspect Facebook will truly embrace its new status as a content hub and start serving up content that wasn’t even shared by your friends. At that point, the personal sharing will matter less, and what will matter the most is that you care about the content being shared and engage with it.

Tesla’s Dodgy Claim

Tesla is now claiming that its Model 3 preorder process is breaking records – here’s the claim as it appears on Tesla’s site:

“In the first 24 hours Model 3 received over 180,000 reservations, setting the record for the highest single-day sales of any product of any kind ever in world history.”

That is, of course, pure hyperbole, and there are two specific reasons why. The first is that the preorder process doesn’t represent sales at all. On Tesla’s own site, the process is referred to as reservations and not sales, and that’s all the $1,000 deposit represents – a place in a long line to have the right to buy a car 18 months or longer from now. Those who made a reservation in this way have no specific timeframe for delivery of their purchase, haven’t committed to any specific purchase, and have the right to a refund of their money at any time between now and whenever their car might finally be available. There is no sense in which this is a sale in any sort of traditional sense.

But even if you concede that the $1,000 represents a sale of some kind, the total revenue implied by that still falls far short of single-day sales for the most recent iPhone, for example, which likely does hold the record for largest single-day sales of any product. At just $180 million, Tesla’s Model 3 revenue is around 6% of single-day iPhone 6s sales. The only way the claim makes any sense at all is if you do what Elon Musk did in a tweet at the end of the first day of preorders, and apply some sort of anticipated average selling price to the 180,000 preorders. That’s even more disingenuous than the claim that these are sales at all, but it does lead to a far higher number. The comparison between these two different Tesla Model 3 numbers and assumed single-day sales for iPhone 6s is shown in the chart below:

[Chart: Tesla Model 3 preorder figures vs. assumed single-day iPhone 6s sales]
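The arithmetic behind that comparison can be sketched quickly. Note that the iPhone 6s single-day revenue and the Model 3 average selling price below are assumptions chosen to be consistent with the percentages cited in the text, not reported figures:

```python
# Back-of-the-envelope check on Tesla's "record single-day sales" claim.
# Figures from the piece: ~180,000 reservations at a $1,000 refundable deposit.

reservations = 180_000
deposit = 1_000  # USD, fully refundable

# Actual cash collected on day one
deposit_revenue = reservations * deposit
print(f"Deposit revenue: ${deposit_revenue / 1e6:.0f}M")  # → $180M

# Assumed iPhone 6s single-day revenue, consistent with the ~6% figure cited
iphone_single_day = 3_000_000_000  # USD (assumption)
print(f"Share of assumed iPhone day: {deposit_revenue / iphone_single_day:.0%}")  # → 6%

# Musk's approach: apply an anticipated average selling price to reservations
assumed_asp = 42_000  # USD (assumption: $35k base price plus options)
implied_value = reservations * assumed_asp
print(f"Implied order value: ${implied_value / 1e9:.2f}B")  # → $7.56B
```

The gap between the $180 million actually collected and the multi-billion-dollar implied order value is the whole disagreement: only the former is cash in hand, and even that is refundable.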

Again, though, no one has committed to actually buy a car from Tesla at this point, the process is entirely refundable, and Tesla won’t see even the majority of that revenue for a couple of years at least. This isn’t, in reality, any kind of sales record at all.

The stupid thing here is that the Model 3 preorder process is still a phenomenally impressive achievement. That so many of those placing reservations had never even seen the car is a testament to the power of Tesla’s brand and what it has achieved, and this likely is a record in the auto industry. But the hyperbole attached to the claim on Tesla’s site just detracts from all of that without having any real basis in fact. Tesla already has the world’s admiration and respect – engaging in this kind of behavior detracts from rather than adds to that mystique.

The NFL’s Twitter Gamble

Earlier today, I published a post titled “Twitter’s NFL Gamble”. The post illustrates perfectly the danger of jumping on breaking news too quickly, in that a major piece of information emerged after I hit “Publish” on the post, which totally changed the dynamic of the story. So here I am with a second post in the same day on the same topic, from quite a different perspective. A good deal of the material in the initial piece still holds, but the key point from the title no longer makes as much sense.

The key piece of information was reported by Recode, and concerns two important elements – the price Twitter paid, and the nature of the content it will carry, specifically as it relates to ads. Here are the two key paragraphs from that piece:

“While the NFL and Twitter haven’t disclosed the price for the package, people familiar with the bidding said Twitter paid less than $10 million for the entire 10-game package, while rival bids topped $15 million. Those numbers are a fraction of the $450 million CBS and NBC collectively paid for the rights to broadcast the Thursday games. (A note from Twitter’s Investor Relations Twitter account notes that the company had already baked the cost of the deal into their 2016 guidance.)

One big reason for the disparity is that CBS and NBC have their own digital rights, and they will own most of the digital ad inventory in their games, people familiar with the deal say. So Twitter will be rebroadcasting the CBS and NBC feeds of the games, and will have the rights to sell a small portion of the ads associated with each game.”

With this as context, it becomes clear that this is far less of a gamble for Twitter than I originally understood, and actually far more of a gamble for the NFL. Splitting the broadcast and digital rights for the Thursday night games was a great innovation, and one I actually wrote up pretty positively in a post for Techpinions. But it now appears that the NFL has chosen not to be as disruptive as it might have been. Rather than license these rights to a new online video player, with all the advertising rights packaged in, the NFL has chosen to forego a big new revenue opportunity from the digital world and instead hand the ad revenue opportunity mostly to CBS and NBC, while Twitter merely gets the benefit of increased traffic from broadcasting games almost entirely packaged up by others.

That represents a big gamble on the NFL’s part, that it’s better off giving most of the rights to traditional players rather than opening up a new opportunity with a major video player from the online world. The Recode reporting certainly suggests that the NFL even chose to go with Twitter despite the fact that its offer was lower than others. The NFL may appear to be doing the opposite of gambling here, but the risk is that it’s setting up these online rights as something much less than what they could be. Over the next few years, these online rights could be really lucrative, and this Thursday night package was a great way to really test that market, but the NFL is putting all its eggs in the broadcast basket instead.

Twitter’s NFL Gamble

Bloomberg broke the news this morning that Twitter is the winner of the digital rights package of Thursday night games the NFL has been auctioning off recently. Twitter came out of left field (if that’s not the wrong metaphor for this particular sport), and it’s worth thinking about both why Twitter would want this deal, and what the implications might be.

Update: some significant new details have emerged since I wrote the first version of this post, notably that Twitter has likely paid far less for these rights than previous rights owners, in part because it will sell very few ads itself and will largely carry the broadcast and ads provided by the network broadcasters. As such, the size of the gamble is significantly smaller, and the comments about guidance also make more sense. I subsequently wrote a second piece which covers the later news.

Firstly, we know now that Jack Dorsey really is serious about making live – and live video specifically – a focus in 2016! So far, Twitter has been used almost entirely for people to talk about live events being broadcast on other platforms, which has meant it hasn’t been able to benefit as directly as some other players from those live events, even if massive numbers of tweets were sent and even shown on television. Last night’s NCAA Championship basketball game is a great example of this. This deal suddenly gets Twitter directly into the business of showing these games and tapping into some of the additional associated revenue opportunities. It also significantly ups Twitter’s live video game from short, grainy videos to professionally produced content.

One of the most interesting things is going to be seeing how this fits into the Twitter product – with all the other bidders, there were obvious existing platforms for broadcasting NFL games, but with Twitter they’ll have to create a completely new home for this kind of thing. It’s possible they might use Periscope, but given the poor quality of most Periscope videos until now, I would think the NFL might have qualms about having their high-quality content appear there. Now that the news is out from the NFL, with comment from Twitter, we know that Twitter is describing the experience as being “right on Twitter,” but I’m curious to see the exact implementation.

The other big question is how Twitter will fare selling ads against this content – it’s obviously a very different type of advertising from what the company has sold before, but it provides its first real opportunity to cross-sell these different types of ads and break into television advertising. It may also be a first real opportunity to make good money from the “logged-out users” Twitter has been talking up for so long, but who are so hard to advertise to effectively.

And then there’s the question of how much Twitter paid for the rights here. It’s hard to guess at because this package of rights is very different from any other similar package sold before – non-exclusive in the US, but exclusive internationally. But almost no matter what the exact number, it’s likely to be a meaningful fraction of Twitter’s overall revenue. That’s one of the reasons Twitter is such a surprising bidder (and winner) – it’s a much smaller company than most of the other names that were bidding, with just over $2 billion in revenue last year. If the rights cost in the hundreds of millions of dollars, which seems likely, then they may well amount to 10-20% of revenue. That’s a huge gamble, and we all know the gamble didn’t pay off for Yahoo. The strangest thing is that the Twitter Investor Relations account tweeted this morning that all expenses associated with the rights are already baked into its guidance for the year. That seems particularly odd given that Twitter likely didn’t know whether it had won the rights when it announced that guidance, and it’s a material amount of money.

Hopefully we’ll get more detail on all of this either later today or over the coming weeks, but it’s a fascinating illustration of the sheer breadth of the companies getting involved in the live video business at this point, coming from a diverse set of starting points within the broader industry.

BlackBerry Moves the Goalposts and Still Misses on Software

Note: for previous posts on BlackBerry, click here. I’ve specifically addressed some of the same topics in this earlier post.

A little over a year ago, BlackBerry CEO John Chen said his goal was to double software revenue at the company from $250 million to $500 million in the company’s 2016 fiscal year, which ended in February. Today, the company reported results for that period, and even though the company moved the goalposts on that goal, it still missed its target. That’s important, because this software line is basically the future of the company, as hardware sales continue to tank, along with service access fees, the other historical mainstay of the company’s business.

Just to recap, the company first set its target for doubling software revenue back in late 2014, and at that point the goal was very much to double classic enterprise software revenue. In a meeting I attended with BlackBerry’s senior management in November 2014, we were told that each of BlackBerry’s roughly 250 sales reps was carrying a quota of $2 million for the year, which of course would add up to $500 million if they all hit their quotas. So it was very clear that this target was for the enterprise software business BlackBerry then had.

However, a few months later, BlackBerry announced its first patent license sales, outside the scope of those enterprise software quotas, but nonetheless reported in a new Software and Technology Licensing segment in its financial results from that point onward (the name has since changed to Software & Services). Ever since then, BlackBerry has referred to this combined number and not the pure enterprise software number in measuring progress on hitting that $500 million goal in FY2016. Hence my comments about moving the goalposts. In addition, the company has made several acquisitions, including a major one in the form of Good Technology, which have also contributed to the revenues reported in that segment.

Even with all that, the company just reported GAAP Software and Services revenue for the fiscal year of $494 million, $6 million shy of the $500 million target. In its press release, the only financial document available to analysts before today’s call, it listed only non-GAAP revenue for this segment, which brought the annual total to a little over $500 million and allowed the company to congratulate itself on meeting the goal. (The difference between the two is a fairly small amount of deferred revenue.)

However, if we break down the revenues actually generated over the last five quarters in this segment into the part that represents the business that was originally supposed to hit that $500 million, and separate out the contribution from Good Technology and from patent and other licensing deals, we get a very different picture:

[Chart: BlackBerry Software and Licensing breakdown]

As you can see, the portion of revenue that comes from recurring sources has remained roughly flat over that entire period, far from doubling. The company touted 106% growth in Software and Services revenue in Q4 and 113% for the entire fiscal year, but as the chart shows that growth was entirely made up of a combination of non-recurring revenue and revenue from the Good business, while the underlying business grew very little.

The good news here, if there is any, is that for three of the last four quarters, BlackBerry has been able to generate very meaningful non-recurring revenue from licensing and other sales on top of enterprise software sales, which suggests that even if this business isn’t as predictable as recurring revenue, it’s still coming in fairly regularly. But only 70% of the segment’s revenues were recurring in the quarter, which makes future software revenues much less predictable. BlackBerry’s goal for FY2017 is much more modest than the doubling in revenue it aimed for in FY2016: it wants only to offset the decline in Service Access Fees, likely to be around $120-150 million for the year, and says nothing about offsetting the decline in Hardware, which fell by $90 million in FY2016 and is likely to continue falling in FY2017.

Thoughts on Alphabet’s CEO struggles

We’ve had roughly two weeks now of fascinating insights into Nest specifically and Alphabet’s Other Bets in general, and I wanted to chime in on all this and revisit some of what I’ve written previously on the topic. For a quick primer on all that’s been going on, I suggest this brief reading list:

  • Google puts Boston Dynamics up for sale – Bloomberg article which suggests poor cultural fit, a lack of obvious routes to making money, and a general clampdown on financial responsibility at Alphabet companies
  • The Information’s lengthy article about Nest and Tony Fadell’s struggles there, which also mentioned the financial clampdown
  • Recode’s Mark Bergen writing about Alphabet’s broader CEO troubles
  • Another Mark Bergen piece about revenues at Nest and the two other moneymaking Other Bets (which happens to track pretty well with my estimates of these companies’ revenues here).

In three previous pieces, I wrote about Larry Page’s vision for Google as an emulator of the Berkshire Hathaway model, and then about the decision to turn Google into Alphabet, and subsequently Alphabet’s first set of financial results.

In the first of those pieces, I wrote these thoughts about why the Berkshire Hathaway model wasn’t appropriate for Google:

…the pieces of Google aren’t and can’t be independent in the way BH’s various businesses are, because many of them (including some of the largest, such as Android and YouTube) simply aren’t profitable in their own rights. Though the management of some of these bigger parts can be given a measure of autonomy, they can’t run anything like BH’s various subsidiaries can because they rely on the other parts of Google to stay afloat.

… if Page really is planning to build a conglomerate, that’s even worse news. For one thing, he’s absolutely the wrong guy to run it if he’s using Warren Buffett’s model as his ideal. Warren Buffett is, above all, a very shrewd investor, and Page’s major acquisitions have been anything but shrewd from a financial perspective.

Given what’s happened over the last few weeks, it’s becoming clear that even though Page really isn’t the right person to be running all this, Ruth Porat was brought in to offer exactly this kind of stricter oversight of the Other Bets. The challenge is that Google was never run this way, and so there’s a cultural clash there, which is only exacerbated in those parts where businesses were acquired and therefore brought their own distinct cultures. In some of the Other Bets, you now have a three-way culture clash, between the old Google culture, the new Porat-driven culture of financial discipline, and the mishmash of other cultures in acquisitions like Boston Dynamics, Nest, and Dropcam.

On balance, this tighter financial scrutiny is a good thing, and addresses some of those criticisms in my earlier pieces. In fact, in our 2016 predictions podcast, I made a somewhat out-there prediction that Alphabet would end up selling or spinning off at least one of its businesses in 2016 as a result of either poor fit or financial performance. Even I didn’t have that much confidence in that prediction, but it’s turned out to be accurate with the planned sale of Boston Dynamics. But none of that is to say that this is going to be painless for Alphabet or the individual Other Bets. It’s going to be a tough couple of years as this cultural clash works its way through.

Quick thoughts on Tony Fadell

Before I close, I wanted to just touch quickly on something I’ve hinted at on Twitter but haven’t really written about properly anywhere, and that’s Tony Fadell’s management style.

Much has been made of how Fadell’s style seems to emulate Steve Jobs’ style in many respects. The big difference, though, is that Steve Jobs always owned his brusque, rude style and never apologized for it (for better or worse). He recognized that his style wasn’t going to be popular, but believed it was still the right way to go even if people hated him for it. The difference with Fadell is that he seems to want to have his cake and eat it too – he wants to behave the way Jobs did but be loved as well. One of the strongest indicators of this is the way he seemed to recruit people to come to his defense following earlier critical articles (here and here) on his management style, and then retweeted their positive comments:

[Image: Fadell tweets]

There’s nothing wrong with defending your management style if you believe it to be right, but there’s something disingenuous about embracing Steve Jobs’ abrasive style while also wanting to avoid the consequences. This was Steve Jobs’ approach to the same problem, as articulated by Jony Ive:

“I remember having a conversation with [Steve] and I was asking why it could have been perceived that in his critique of a piece of work he was a little harsh. We’d been working on this [project] and we’d put our heart and soul into this, and I was saying, ‘Couldn’t we … moderate the things we said?’

And he said, ‘Why?’ and I said, ‘Because I care about the team.’ And he said this brutally, brilliantly insightful thing, which was, ‘No, Jony, you’re just really vain.’ He said, ‘You just want people to like you, and I’m surprised at you because I thought you really held the work up as the most important, not how you believed you were perceived by other people.’

I was terribly cross, because I knew he was right.”

Twitter and Instagram’s Communication Screwups

Note: both the Beyond Devices blog and podcast are now available as a channel on Apple News, which is available if you’re using a device running iOS 9, where you can read this article and follow the Beyond Devices channel.


Today, hundreds of Instagram accounts were suddenly filled with panicked posts about a change to the way the Instagram feed worked, which filled certain account holders with dread that their followers would no longer see their posts. In the previous few weeks, there were similar panics among Twitter users about two purported changes to that product: removal of the 140-character limit for tweets, and an algorithmic timeline similar to the one being contemplated at Instagram. What’s striking about all three of these examples is that the companies arguably only have themselves to blame for the negative reaction, which could have been avoided if only they had communicated properly with users of their respective services.

In the case of Twitter’s supposed removal of the 140-character limit, reports started to surface over the first weekend in January that Twitter was considering a change, and it took a Twitter post (one that ironically embedded a sizable text document as an image) from Jack Dorsey to address the situation. The problem was that Twitter had allowed a rumor about a possible change to get legs for several days before the company officially addressed it. As it happens, this month Dorsey finally announced that the company wouldn’t be raising the limit after all, which just goes to show how powerful the user backlash was.

Twitter’s actual change – to an algorithmic timeline – also met with a significant user backlash, primarily from the sort of power users likely to pick up on news reports about the service and also the users most likely to care about such a change. In the end, Twitter better explained how the algorithmic timeline would work and – importantly – made clear that it could be turned off. When many users first experienced the new timeline a few days ago, they didn’t like it, but were able to turn it off. In the end, things weren’t nearly as bad as some users worried, but Twitter did itself a huge disservice by not explaining this better from the beginning.

And now we have Instagram’s equivalent moment – the company announced a few weeks ago that it would be introducing an algorithmic feed which would show the most “meaningful” posts first. However, it didn’t make clear exactly how this would work, and importantly didn’t specify whether users would be able to choose between this new feed and the current feed. As a result, and given Facebook’s history of making a similar change, many brands and creators on Instagram were understandably worried that their posts would suddenly become invisible to users. Then, at some point in the last 24 hours, a rumor (false, as it turns out) began to spread that the change would happen today. Hence, something like a game of what we Brits call Chinese Whispers and Americans call Telephone happened, and you had all those panicked posts on Instagram.

In all these cases, if the companies had just done three things, all the user backlash could have been avoided:

  • Communicate about the change before (or immediately after) rumors start
  • Make clear exactly what is planned, and when it will take effect
  • Specifically make clear that the changes will be optional rather than forced on users. (To be clear, we still don’t know if this will be the case on Instagram, but it absolutely should be).

Instead, you have overreaction by users, a huge backlash against something that may not even be happening (or may not be as bad as people fear), and a PR nightmare, all of which could have been avoided. I expect that to some extent any change to a major service that’s used by hundreds of millions of users will trigger some amount of panic, but in all three of these cases, the reaction has been much worse than it needed to be, and the companies only have themselves to blame. Nature and the Internet both abhor a vacuum, and when companies fail to communicate clearly, their users often fill in the gaps with worst-case scenarios, which serves no-one well.

 

Why Netflix is Wrong to Throttle AT&T and Verizon Customers

Today, it emerged that Netflix has been throttling video streams for customers who are using the AT&T and Verizon Wireless networks (but not T-Mobile and Sprint customers) to stream its content. From what we know so far, the carriers were unaware of this, and are understandably upset. Netflix’s justification for this partial throttling, according to a Wall Street Journal article, was that “historically those two companies [T-Mobile and Sprint] have had more consumer-friendly policies.” And the overly simplistic value judgment implied by that quote gets at the heart of why this is wrong.

There are several issues here. Firstly, this treatment is discriminatory but not discriminating – what I mean by that is that the Netflix policy discriminates between networks but treats all users on each network the same, regardless of what data plan they’re actually on. For example, as of December 2015, 11% of AT&T’s smartphone customers were still on unlimited plans. Since December, AT&T has begun selling unlimited plans again to certain customers who take both DirecTV and AT&T service. As such, over 1 in 10 AT&T customers have unlimited plans, and that number is growing, but Netflix’s policy takes no account of this. The same applies to Verizon customers. Crucially, Netflix doesn’t know which plans users are on. Perhaps I’m one of those unlimited customers at Verizon, or I have a 30GB plan from AT&T, but I’m treated the same as if I’m on a 1GB plan regardless. At the same time, not all Sprint or T-Mobile customers are on unlimited plans either.

Netflix wasn’t transparent here until it was called out, either with customers or with the carriers. That’s problematic for two reasons. First, users who weren’t aware of the throttling had no control over it, when Netflix should have given them a choice. Second, users will have assumed that degraded video was the fault of poor network performance, so the throttling damaged perceptions of the carriers rather than of Netflix itself.

Netflix’s justification, in a hurriedly published but opaque blog post, is that customers don’t mind, but it has no way of knowing how users really feel, because they haven’t been aware of the policy. To be sure, some users are very concerned about data caps, but others would rather risk overage charges than watch degraded video. Simply giving users a choice would have solved the problem without the underhanded approach Netflix has taken. It’s uncharacteristic behavior for a company that has been so bullish about transparency and fair treatment (and a huge proponent of net neutrality). Netflix’s current approach has many of the same shortcomings as the original implementation of T-Mobile’s Binge On plan, which also throttled video without users’ permission. Deliberately degrading video performance without user knowledge or consent is wrong, no matter who does it.

That WSJ quote at the beginning sums up what’s really going on here – except that what Netflix really means is that some carriers have been less friendly to over-the-top video providers by metering the bandwidth their customers use. This Netflix policy has very little to do with better serving customers and everything to do with better serving Netflix by getting people to watch more of its videos. If it really wanted to serve customers better, it would make this policy explicit, transparent, and opt-in.

Solid Progress at Square

Square (finally) reported its Q4 2015 results today, and they demonstrate solid progress on the key things that matter. For a very quick primer on the keys to Square’s long-term success, see the video embedded below. For more detailed earlier analysis, see this piece and this piece.


Here’s an update on some key areas where Square is making progress. As a reminder, the core transaction processing business has pretty much fixed margins – Square takes a roughly 3% cut of transaction value, and keeps around a third of that (or 1%) as gross profit:
[Chart: Square fixed margins]

So, no matter how much Square grows this side of its business, its margins are capped at standard payment industry rates. However, Square isn’t just sticking to this business; it seeks to build an ecosystem around it through software and data products, so far mostly Square Capital (loans to Square payments customers) and Caviar (restaurant services). That business is very highly profitable because it has few incremental costs, and it has been growing rapidly:

[Charts: Square margins by segment; Square software and data growth]

Another positive indicator this quarter was the fact that Square’s renegotiated contract with Starbucks, which was previously a heavy loss maker, broke even in Q4. This deal, which was originally done to drive scale for Square, has always been a drag on the business, but now promises to be much less of one:

[Chart: Starbucks gross margin]

That also now means that Square’s three smaller reporting segments are collectively profitable at the gross margin level too:

[Chart: Square three smaller segments]

To be sure, Square is still loss-making overall by every measure but gross margin, but it projects to be Adjusted EBITDA positive in 2016 and to start generating profits sometime beyond that. This quarter’s results suggest it’s very much on track for that goal, although it’s still a long way off.
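The fixed-margin arithmetic described above can be sketched in a few lines of Python. This is purely illustrative: the ~3% take rate and the roughly one-third share of that kept as gross profit are the ballpark figures from this piece, not Square’s precise reported rates, and the function name and volumes are my own invention.

```python
# Back-of-the-envelope sketch of Square's core transaction economics.
# Assumed figures (from the rough numbers in this piece, not reported rates):
TAKE_RATE = 0.03             # Square's cut of each transaction (~3% of volume)
GROSS_PROFIT_SHARE = 1 / 3   # share of that cut kept after payment-network costs

def core_gross_profit(gross_payment_volume: float) -> float:
    """Gross profit from transaction processing alone (~1% of volume)."""
    revenue = gross_payment_volume * TAKE_RATE
    return revenue * GROSS_PROFIT_SHARE

# No matter how large payment volume grows, gross profit stays pinned
# at roughly 1% of volume -- hence the cap on core-business margins:
for gpv in (1_000_000, 10_000_000, 100_000_000):
    print(f"Volume ${gpv:,}: gross profit ~${core_gross_profit(gpv):,.0f}")
```

The point the sketch makes is structural: growing volume grows absolute gross profit linearly, but the margin percentage never moves, which is why the higher-margin software and data products matter so much to Square’s economics.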