Apple, Microsoft, and the Future of Touch

Note: this blog is published by Jan Dawson, Founder and Chief Analyst at Jackdaw Research. Jackdaw Research provides research, analysis, and consulting on the consumer technology market, and works with some of the largest consumer technology companies in the world. We offer data sets on the US wireless and pay TV markets, analysis of major players in the industry, and custom consulting work ranging from hour-long phone calls to weeks-long projects. For more on Jackdaw Research and its services, please visit our website. If you want to contact me directly, you’ll find various ways to do so here.

This is one of those rare weeks when two of the tech industry’s major players hold back-to-back events and in the process illustrate their different takes on an important product category – in this case, the PC. I’ve already written quite a bit about all of this over the course of the week.

Now that it’s all done, though, I wanted to pull some of these themes and threads together. I attended today’s Apple event in person and so I’ve spent time with the new MacBooks, though not with Microsoft’s new hardware or software.

Differentiation: from hardware advantages to philosophical approaches

The biggest thing to come out of this week, which I previewed in my Techpinions piece on Monday, was a shift from hardware advantages to philosophical differences as the nexus of competition between Microsoft and Apple in PCs. MacBooks once enjoyed significant hardware advantages over all competing laptops in terms of battery life, portability, and features such as trackpads, but in recent years those advantages have all but disappeared. Instead, what we’re left with is increasingly stark philosophical differences in how these companies approach the market, and this week the focus was on touch.

Microsoft’s computing devices all run some flavor of Windows 10 and feature touch. Apple, on the other hand, continues to draw a distinction between two sets of products by both operating system and interactivity. On the one hand, you have iOS devices with touch interfaces, and on the other macOS devices with more indirect forms of interactivity. Today’s event saw Apple introduce an interesting new wrinkle to touch on the MacBook with the Touch Bar, but it’s clearer than ever that Apple refuses to put touch screens on the Mac and that won’t change soon.

Microsoft’s approach makes touch available everywhere, even when in many cases it doesn’t make sense. It’s optional, though, and Microsoft has pulled back from the over-reliance on touch that characterized Windows 8. Apple, on the other hand, wants to largely preserve existing workflows based on mouse and keyboard interactivity while adding subtle new forms of interaction. It keeps all the touch interaction on the horizontal plane, while Microsoft has users switching back and forth between the tabletop and display planes. There isn’t necessarily a right or wrong here – both approaches are interesting and reflect each company’s different starting points and perspectives. But it’s differences like this that will characterize the next phase of competition between them.

In some ways, this new phase of competition is analogous to the competition between Apple and Google in the smartphone market. In both cases, there are now devices made by companies other than Apple which match Apple’s core hardware performance. That’s not to say that all devices now come up to Apple’s standards – it continues to compete only at the high end, while both Google and Microsoft’s ecosystems serve the full gamut of needs from cheap and cheerful to high-priced premium. But in smartphones as in PCs, the focus of competition at the high end is now moving to different approaches rather than hardware performance. It’s intriguing, then, that it’s during this era that both Google and Microsoft are finally getting serious about making their own hardware.


The Touch Bar itself is very clever. Apple made the decision to spend a lot of time in today’s event on demos, and I think that was a good use of the time (especially in an event with less ground to cover than most). The demos really showed the utility that the Touch Bar can provide in a variety of Apple and third party apps. What Apple has done here is in essence to take a slice of the screen and put it down within reach to allow you to interact with it. There will definitely be a learning curve involved here – I can see users forgetting that it’s there unless they make an effort to use it, but I can also see it prompting users to try to touch the screen (this happened to me in the demo area). “Touch here but not there” will be an interesting mental model to adapt to, but once users get the hang of it (and developers support it in their apps) I believe it will add real value.

Apple’s price coverage

Of course, MacBooks aren’t the only portable computers Apple makes, and it’s been increasingly making the case that the iPad Pro lineup should be considered computers too. These are Apple’s touch-screen computers, but in most consumers’ minds they don’t yet belong in the same category as Windows laptops. However, when you put the new MacBooks, older MacBooks, and iPad Pros together, you get an interesting picture in terms of price and performance coverage. The chart below shows base pricing for each of these products:

Apple Computer Portfolio

As you can see, there’s pretty good coverage from $599 all the way through $2399 with just the base prices. If you were to add storage and spec options (and Smart Keyboards in the case of the iPad Pros), the in-between price points would be covered pretty well too. But Apple now offers a portable computer at almost any price point in this range, and that’s interesting. The newest MacBooks alone do a nice job of covering the spread from $1199 to $2399 with increasing power and capability, while the older MacBooks fill in some gaps. There’s no denying that these products are premium, but they extend down to price points that many people will be able to reach, while providing really top-notch products for those who can afford or justify them. If you focus on those newer devices, I think this is the most coherent and logical MacBook portfolio Apple has had in years.

The next big question is what happens with desktops, because those are now from one to three years old, with no sign of an update. The one that’s had the most focus from Apple in recent years is the iMac, which is both the most mass market and the flashiest – it’s the only one that is highly visible, while both the Mac Pro and Mini could feasibly sit hidden under a desk. I don’t think Apple’s going to discontinue these anytime soon, but the timing of its lack of focus on these devices is providing an interesting window for Microsoft.

A few words on creativity

I won’t repeat everything I said in my earlier pieces on Microsoft’s event, but suffice it to say that this creativity push is certainly interesting given the timing I just mentioned. However, it’s totally overblown to be talking about Microsoft somehow stealing away Apple’s creative customer base, for several reasons:

  • First, Apple has long since expanded beyond that base, especially if you look at the full set of devices including iPhones. Apple clearly isn’t selling hundreds of millions of iPhones solely to people who use Photoshop for a living. Even if you look at Mac buyers, they’re much broader than the cliché of ad agency creatives and video editors.
  • Secondly, all Microsoft has done so far is put a stake in the ground. The Surface Studio is a beautiful device and a well thought out machine for a subset of creative professionals. But workflows don’t change overnight just because a new computer comes along, especially if there’s an existing commitment to another ecosystem. The role of this device is to signal to creatives that Microsoft is serious about serving them, which is notable in its own right, but won’t sell millions of devices by itself.
  • Thirdly, Microsoft’s bigger creativity push is around software, with 400m plus Windows 10 users getting a bunch of new creativity software in the Creators Update in the spring. This will be much more meaningful in terms of spreading that creativity message far and wide than the new hardware.
  • Lastly, even with all this, Microsoft’s efforts to associate its brand with creativity and not just productivity will take years to take hold. Perceptions don’t change overnight either.

Apple’s event today was a nice reminder that it still takes these creative professionals very seriously – both the Adobe and djay Pro demos were creativity-centric, and these new machines are clearly intended for creative professionals among others (the RAID arrays would be an obvious fit for people editing high-bandwidth video, for example). Apple isn’t going to cede this ground easily, but it will be very interesting to watch over the next few years how this aspect of the competition plays out.

 

Twitter’s Terrible New Metric


In the shareholder letter that accompanied Twitter’s Q3 earnings today, the company said:

consider that each day there are millions of people that come to Twitter to sign up for a new account or reactivate an existing account that has not been active in the last 30 days.

That sounds great, right? Progress! And yet this very metric is the perfect illustration of why Twitter hasn’t actually been growing quickly at all. Let’s break it down:

  • Starting point: “each day there are millions of people” – so that’s at least 2 million per day every day
  • There are ~90 days in a quarter, so 2 million times 90 is 180 million, all of whom count as MAUs in the respective months when they engage in this behavior, and could be potential MAUs for the quarter if they stick around for a couple of months
  • Over the course of this past quarter, Twitter only added 4 million new MAUs
  • That implies one of two things: either 2.2% or less (4/180) of that 180 million actually stuck around long enough to be an MAU at the end of the quarter, or a very large proportion of those who had been active users at the end of last quarter left
  • In fact, it might even be worse. Based on the same 2m/day logic, 60 million-plus people become MAUs every month on this basis, meaning this behavior contributes at least 60 million of Twitter’s MAUs each quarter (quarterly MAUs are an average of the three monthly MAU figures) even if none of those 60 million ever log in again. On a base of just over 300 million, that means around a fifth of Twitter’s MAUs each month are in this category
  • Bear in mind throughout all this that I’m taking the bare minimum meaning of “millions” here – 2 million. The real numbers could be higher (the sketch below works through this arithmetic).
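
For what it’s worth, here is that back-of-envelope arithmetic as a minimal Python sketch. The 2 million/day inflow is the bare-minimum reading of “millions,” and the ~310 million MAU base is my rough stand-in for “just over 300 million” – both are assumptions rather than Twitter-reported figures.

```python
# Back-of-envelope version of the arithmetic above. All inputs are the
# bare-minimum / approximate readings described in the post, not reported figures.

new_or_returning_per_day = 2_000_000   # "millions of people" each day, read as the minimum
days_per_quarter = 90
net_mau_adds_q3 = 4_000_000            # net MAU growth Twitter reported for the quarter
mau_base = 310_000_000                 # rough stand-in for "just over 300 million" MAUs

quarterly_inflow = new_or_returning_per_day * days_per_quarter        # 180 million
implied_retention_ceiling = net_mau_adds_q3 / quarterly_inflow        # ~2.2%

# Each of these people counts as an MAU for the month they sign up or reactivate,
# even if they never come back.
monthly_inflow = new_or_returning_per_day * 30                        # 60 million
share_of_monthly_maus = monthly_inflow / mau_base                     # ~a fifth

print(f"Quarterly inflow: {quarterly_inflow / 1e6:.0f}M")
print(f"Implied retention ceiling: {implied_retention_ceiling:.1%}")
print(f"Share of each month's MAUs from this behavior: {share_of_monthly_maus:.0%}")
```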

In other words, this metric – which is intended to highlight Twitter’s growth opportunity – actually highlights just how bad Twitter is at retaining users. Because Twitter doesn’t report daily active users or churn numbers, we have to engage in exercises like this to try to get a sense of what the true picture looks like. But it isn’t pretty.

Why is retention so bad? Well, Twitter talked up a new topic-based onboarding process in its shareholder letter too. In theory, this should be helping – I’ve argued that topic-based rather than account-based follows are actually the way to go. But I signed up for a new test account this morning to see what this new onboarding process looks like, and the end results weren’t good.

Here’s what the topic based onboarding process looks like:

Topic selection screen during signup

So far, so good – I picked a combination of things I’m really interested in and a few others just to make sure there were a decent number of topics selected. I was also asked to upload contacts from Gmail or Outlook, which I declined to do because this was just a test account. I was then presented with a set of “local” accounts (I’m currently in the Bay Area on a business trip, so I was offered lots of San Francisco-based accounts including the MTA, SFGate, and Karl the Fog – fair enough). I opted to follow these 21 accounts as well, and finished the signup process. Here’s what my timeline looked like when I was done:

New account timeline immediately after signup

It’s literally empty – there is no content there. And bizarrely, even though I opted to follow 21 local accounts, I’m only shown as following 20 here. As I’m writing now, it’s roughly an hour later and there are now 9 tweets in that timeline, three each from TechCrunch and the Chronicle, and several others. This is a terrible onboarding experience for new users – it suggests that there’s basically no content, even though I followed all the suggested accounts and picked a bunch of topics. Bear in mind that I’m an avid Twitter user and a huge fan of the service – it provides enormous value to me. But based on this experience I’d never come away with that impression. No wonder those millions of new users every day don’t stick around. Why would you?

In that screenshot above, the recommendation is to “Follow people and topics you find interesting to see their Tweets in your timeline”. But isn’t that what I just did? As a new user, how do I feel at this point? And how do I even follow additional topics from here (and when am I going to see anything relating to the topics I already said I was interested in)? Twitter is suggesting even more SF-centric accounts top right, along with Ellen, who seems to be the vanilla ice cream of Twitter, but that’s it. If I want to use Twitter to follow news rather than people I know, which is how Twitter is increasingly talking about itself, where do I go from here?

I hate beating up on the companies I follow – I generally try to be more constructive than this, because I think that’s more helpful and frankly kinder. But I and countless others have been saying for years now that Twitter is broken in fundamental ways, and there are obvious solutions for fixing it. Yet Twitter keeps going with this same old terrible brokenness for new users, despite repeated promises to fix things. This, fundamentally, is why Twitter isn’t growing as it should be, and why people are losing faith that it will ever turn things around.

AT&T Doubles Down on the Ampersand


I recently spent a couple of days with AT&T as part of an industry analyst event the company holds each year. It’s usually a good mix of presentations and more interactive sessions which generally leave me with a pretty good sense of how the company is thinking about the world. Today, I’m going to share some thoughts about where the consumer parts of AT&T sit in late 2016, but I’m going to do so with the shadow of a possible Time Warner merger looming over all of this – something I’ll address at the end. From a consumer perspective the two major themes that emerged from the event for me were:

  • AT&T now sees itself as an entertainment company
  • AT&T is doubling down on the ampersand (&).

Let me explain what I mean by both of those.

AT&T as an entertainment company

The word “entertainment” showed up all over the place at the event, and it’s fair to say it’s becoming AT&T’s new consumer identity. From a reporting perspective, the part of AT&T which serves the home is now called the Entertainment Group, for example, and CEO Randall Stephenson said that was no coincidence – it’s the core of the value proposition in the home now. But this doesn’t just apply to the home side of the business – John Stankey, who runs the Entertainment Group, said at one point that “what people do on their mobile devices will be more and more attached to the emotional dynamics of entertainment” too.

That actually jibes pretty closely with something I wrote in my first post on this blog:

There are essentially five pieces to the consumer digital lifestyle, and they’re shown in the diagram below. Two of these are paramount – communications and content. These are the two elements that create emotional experiences for consumers, and around which all their purchases in this space are driven, whether consciously or unconsciously.

What’s fascinating about AT&T and other telecoms companies is that the two things that have defined them throughout most of their histories – connectivity and communications – are taking a back seat to content. People for the most part don’t have emotional connections with their connectivity or their devices – they have them with the other people and with the content their devices and connectivity enable them to engage with. AT&T seems to be betting that being in the position of providing content will create stickier and more meaningful relationships which will be less susceptible to substitution by those offering a better deal. And of course video is at the core of that entertainment experience.

The big question here, of course, is whether this is how consumers want to buy their entertainment – from the same company that provides their connectivity. AT&T is big on the idea that people should be able to consume the content of their choice on the device of their choice wherever they choose. On the face of it, that seems to work against the idea that one company will provide much of that experience, and I honestly think this is the single biggest challenge to AT&T’s vision of the future and of itself as an entertainment company. But this is where the ampersand comes in.

Doubling down on the ampersand

One of the other consistent themes throughout the analyst event was what AT&T describes as “the power of &”. AT&T has actually been running a campaign on the business side around this theme, but it showed up on the consumer side of the house too at the event. Incidentally, I recalled that I’d seen a similar campaign from AT&T before, and eventually dug up this slide from a 2004 presentation given by an earlier incarnation of AT&T.

But even beyond this ad campaign, AT&T is talking up the value of getting this and that, and on the consumer side this has its most concrete instantiation in what AT&T has done with DirecTV since the merger. This isn’t just about traditional bundling and the discounts that come with it, but about additional benefits you get when you bundle. The two main examples are the availability of unlimited data to those who bundle AT&T and DirecTV, and the zero-rating of data for DirecTV content on AT&T wireless networks. Yes, AT&T argues, you can watch DirecTV content on any device on any network, but when you watch it on the AT&T network it’s free. The specific slogan here was “All your channels on all your devices, data free when you have AT&T”.

The other aspect here is what I call content mobility. What I mean by that is being able to consume the content you have access to anywhere you want. That’s a given at this point for things like Netflix, but still a pretty patchy situation when it comes to pay TV, where rights often vary considerably between your set top box, home viewing on other devices, and out-of-home viewing. The first attempts to solve this problem involved boxes – VCRs and then DVRs for time shifting, and then the Slingbox for place shifting. But the long-term solution will be rooted in service structures and business models, not boxes. For example, this content mobility has been a key feature of the negotiations AT&T has been undertaking, both as a result of the DirecTV merger and in preparation for its forthcoming DirecTV Now service. It still uses a box – the DirecTV DVR – as a conduit for out-of-home viewing where it lacks the rights to serve that content from the cloud, but that’s likely temporary.

AT&T’s acquisition of DirecTV was an enabler of both of these things – offering zero rating as a benefit of a national wireless-TV bundle, and the negotiating leverage that comes from scale. The deal also, of course, gave AT&T significantly lower TV delivery costs relative to U-verse.

Now, the big question is whether consumers will find any of this compelling enough to make a big difference. I’m inherently skeptical of zero rating content as a differentiator for a wireless operator – even if you leave aside the net neutrality concerns some people have about it, it feels a bit thin. What actually becomes interesting, though, is how this allows DirecTV to compete against other video providers – in a scenario where every pay TV provider basically offers all the same channels, this kind of differentiation could be more meaningful on that side of the equation. If all the services offer basically the same content, but DirecTV’s service allows you to watch that content without incurring data charges on your mobile device, that could make a difference.

Context for AT&T&TW

So let’s now look at all of this as context for a possible AT&T-Time Warner merger (which as I’m finishing this on Saturday afternoon is looking like a done deal that will be announced within hours). One of the slides used at the event is illustrative here – this is AT&T’s take on industry dynamics in the TV space:

ATT TV industry view

Now focus in on the right side of the slide, which talks about the TV value chain compressing:

ATT TV compression

The point of this illustration was to say that the TV value chain is compressing, with distributors and content owners each moving into each other’s territory. (Ignore the logos at the top, at least two of which seem oddly out of place). The discussion around this slide went as follows (I’m paraphrasing based on my notes):

Earlier, there were discrete players in different parts of the value chain. That game has changed dramatically now – those heavy in production are thinking about their long-term play in distribution. Those who distribute are thinking about going back up the value chain and securing ownership rights. Premium content continues to play a role in how people consume network capacity. Scale and a buying position in premium content is therefore essential.

In addition, AT&T executives at the event talked about the fact that the margins available on both the content and distribution side would begin to collapse for those only participating on one side as players increasingly play across both.

The rationale for a merger

I think a merger with Time Warner would be driven by three things:

  • A desire to avoid being squeezed in the way just described as other players increasingly try to own a position in both content ownership and distribution – in other words, be one of those players, not one of their victims
  • A furthering of the & strategy – by owning content, AT&T can offer unique access to at least some of that content through its owned channels, including DirecTV and on the AT&T networks. This is analogous to the existing DirecTV AT&T integration strategy described above
  • Negotiating leverage with other content providers and service providers.

Both the second and third of these points would also support the content mobility strategy I described earlier, providing both leverage with content owners and potentially unique rights to owned content.

How would AT&T offer unique content? I don’t think it would shut off access to competitors, but I could see several possible alternatives:

  • Preserving true content mobility for owned channels – only owned channels get all rights for viewing Time Warner content on any device anywhere. Everyone else gets secondary rights
  • Exclusive windows for content – owned channels like DirecTV and potentially AT&T wireless would get early VOD or other access to content, for example immediate VOD viewing for shows which don’t show up for 24 hours, 7 days etc on other services
  • Exclusive content – whole existing shows and TV channels wouldn’t go exclusive, but I could see clips and potentially some new shows going exclusive to DirecTV and AT&T.

The big downside with all this is that whatever benefits AT&T offers to its own customers, by definition it would be denying those benefits to non-customers. That might be a selling point for DirecTV and AT&T services, but wouldn’t do much for Time Warner’s content. The trends here are inevitable, with true content mobility the obvious end goal for all content services – it’s really just a matter of time. To the extent that AT&T is seen to be standing in the way of that for non-customers, that could backfire in a big way.

On balance, I’m not a fan of the deal – I’ve outlined what I see as the potential rationale here, but I think the downsides far outweigh the upsides. Not least because the flaws in Time Warner’s earlier mega-merger apply here too – since you can never own all content, but just a small slice, your leverage is always limited. What people want is all the relevant content, not just what you’re incentivized to offer on special terms because of your ownership structure. I’ll wait and see how AT&T explains the deal to see if the official rationale makes any more sense, but I suspect it won’t change much.

Microsoft’s Evolving Hardware Business


Microsoft reported earnings yesterday, and the highlights were all about the cloud business (Alex Wilhelm has a good summary of some of the key numbers there in this post on Mattermark).  Given that I cover the consumer business, however, I’m more focused on the parts of Microsoft that target end users, which are mostly found in its More Personal Computing segment (the one exception is Office Consumer, which sits in the Productivity & Business Processes segment).

The More Personal Computing segment is made up of:

  • Windows – licensing of all versions of Windows other than Windows Server
  • Devices – including Surface, phones, and accessories
  • Gaming – including Xbox hardware, Xbox Live services, and game revenue
  • Search advertising – essentially Bing.

Microsoft doesn’t report revenues for these various components explicitly, but often provides enough data points in its various SEC filings to be able to draw reasonably good conclusions about the makeup of the business. As a starting point, Microsoft does report revenue from external customers by major product line as part of its annual 10-K filing – revenue from the major product lines in the More Personal Computing Group are shown below:

External revenue for MPC group

Windows declining for two reasons

At first glance, it appears that Windows revenue has fallen off a cliff during this period. However, a big chunk of the apparent decline is due to the deferral of Windows 10 revenue, which has to be recognized over a longer period than revenue from earlier versions of Windows, which carried less expectation of free future updates. At the same time, the fact that Windows 10 was a free upgrade for the first year also depressed revenues. As I’ve been saying for some time now, it’s going to be much tougher for Microsoft to drive meaningful revenue from Windows going forward, particularly in the consumer market, in a world where every other vendor gives its OS away for free. That means Microsoft has to find new sources of consumer revenue: enter hardware.

Phones – dwindling to nothing

First up, phones, which appear to be rapidly dwindling to nothing. It’s become harder to find Lumia smartphone sales in Microsoft’s reporting recently, and this quarter (as far as I can tell) the company finally stopped reporting phone sales entirely. That makes sense, given that Lumia sales were likely under a million in the quarter and Microsoft is about to offload the feature phone business. The chart below shows Lumia sales up to the previous quarter, and my estimate for phone revenues for the past two years, which hit around $300 million this quarter:

Phone business metrics

Surface grows year on year but heading for a dip

Surface has been one of the bright spots of Microsoft’s hardware business over the last two years. Indeed, this home-grown hardware line has compared very favorably with the acquired phone business we were just discussing:

Surface and Phone revenue

As you can see, Surface has now outsold phones for four straight quarters, and that’s not going to change any time soon. Overall, Surface revenues are growing year on year, which is easier to see if you annualize them:

Trailing 4-quarter Surface revenue
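
To be clear about what that annualization involves, here’s a minimal sketch of a trailing four-quarter sum – the quarterly figures below are placeholders chosen to mimic the big-Q4 seasonal pattern, not Microsoft’s reported Surface revenue.

```python
# Trailing four-quarter (annualized) revenue: each point is the sum of that
# quarter and the three preceding quarters, which smooths out seasonality.

def trailing_four_quarter(quarterly_revenue):
    return [sum(quarterly_revenue[i - 3:i + 1]) for i in range(3, len(quarterly_revenue))]

# Hypothetical quarterly revenue in $M, oldest first, with a big Q4 spike each year:
quarterly = [900, 700, 650, 600, 1350, 800, 750, 700]
print(trailing_four_quarter(quarterly))  # [2850, 3300, 3400, 3500, 3600]
```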

However, what you can also see from that first Surface chart is that revenues for this product line are starting to settle into a pattern: big Q4 sales, followed by a steady decline through the next three quarters. That’s fine as long as there is new hardware each year to restart the cycle, but from all the reporting I’ve seen it seems the Surface Pro and Surface Book will get only spec bumps and very minor cosmetic changes, which leaves open the possibility of a year on year decline. Indeed, this is exactly what Microsoft’s guidance says will happen:

We expect surface revenue to decline as we anniversary the product launch from a year ago.

I suspect the minor refresh of the existing hardware, combined with the push into a new, somewhat marginal product category (all-in-ones), won’t be enough to drive growth. The question is whether the revenue line recovers in the new year or whether we’ll see a whole year of declines here – that, in turn, would further depress overall hardware sales, which are already shrinking as the phone business collapses.

It’s also interesting to put Surface revenues in context – they’ve grown very strongly and are now a useful contributor to Microsoft’s overall business, but they pale in comparison to both iPad and Mac sales, neither of which has been growing much recently:

Surface vs iPad vs Mac

Ahead of next week’s Microsoft and Apple events, that context is worth remembering – for all the fanfare around Surface, Microsoft’s computing hardware business is still a fraction of the size of Apple’s.

Gaming – an oldie but kind of a goodie

Gaming, of course, is the oldest of Microsoft’s consumer hardware businesses, but gaming revenue actually comprises more than just console sales – it also includes Xbox Live service revenues, revenue from Microsoft’s own games (now including Minecraft), and royalties from third-party games. However, it’s likely that console sales still dominate this segment. Below is my estimate for Gaming revenue:

Gaming revenue

In fact, Microsoft actually began reporting this revenue line this quarter, though unaccountably only for this quarter, and not for past quarters. Still, it’s obvious from my estimates that this, too, is an enormously cyclical business, with a big spike in Q4 driven by console sales and, to a lesser extent, game purchases, followed by a much smaller revenue number in Q1 and a steady build through Q3 before the pattern repeats. Microsoft no longer reports console sales either, sadly, likely because it was coming second to Sony much of the time before it stopped reporting. Still, gaming makes up almost a third of MPC segment revenues in Q4, and anything from 8% to 20% of the total in other quarters. In total, hardware likely now accounts for 30-50% of the segment’s revenue in any given quarter.

Search advertising – Microsoft’s quiet success story

With all the attention on cloud, and the hardware and Windows businesses going through a bit of a tough patch, it’d be easy to assume there were no other bright spots. And yet search advertising continues to be the undersold success story at Microsoft over the last couple of years. I’ve previously pointed out the very different trajectories of the display and search ad businesses at Microsoft, which ultimately resulted in the separation of the display business, but the upward trajectory of search advertising has accelerated since that decision was made.

Again, Microsoft doesn’t report this revenue line directly, but we can do a decent job of estimating it, as shown in the chart below:

Search advertising revenue

There are actually two different revenue lines associated with search advertising – what I’ve shown here is total actual revenue including traffic acquisition costs, but Microsoft tends to focus at least some of its commentary on earnings calls on a different number – search revenue ex-TAC. As you can see, the total number has plateaued over the last three quarters according to my estimates, though the year on year growth numbers are still strong. However, the ex-TAC number is growing more slowly. In other words, this growth is coming at the expense of higher traffic acquisition costs, which seems to be the result of the deal Microsoft signed with Yahoo a few quarters ago and an associated change in revenue recognition. Still, it’s a useful business now in its own right, with advertising generating 7% of Microsoft’s revenues in the most recent fiscal year.
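
To make the gross-versus-ex-TAC distinction concrete, here’s a minimal sketch; the figures are hypothetical placeholders, not Microsoft’s reported numbers.

```python
# Gross search revenue includes payments passed through to partners (traffic
# acquisition costs, or TAC); the ex-TAC figure strips those out. If TAC grows
# faster than gross revenue, ex-TAC growth will lag the headline growth rate.

gross_search_revenue = 1_600_000_000      # hypothetical quarterly figure, including TAC
traffic_acquisition_costs = 400_000_000   # hypothetical TAC for the same quarter

ex_tac_revenue = gross_search_revenue - traffic_acquisition_costs
print(f"Gross: ${gross_search_revenue / 1e9:.1f}B, ex-TAC: ${ex_tac_revenue / 1e9:.1f}B")
```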

Growth at Netflix Comes at a Cost


Netflix reported its financial results on Monday afternoon, and the market loved what it saw – the share price was up 20% a couple of hours later. The single biggest driver of that positive reaction was subscriber growth, which rebounded a little from last quarter’s pretty meager numbers. Here are a few key charts and figures from this quarter’s results. A much larger set can be found in the Q3 Netflix deck from the Jackdaw Research Quarterly Decks Service, which was sent to subscribers earlier this afternoon. The Q2 version is available for free on Slideshare.

Subscriber growth rebounds

As I mentioned, subscriber growth rebounded at least a little in Q3. However, the rebound was fairly modest, and the longer-term trends are worth looking at too. Here’s quarterly growth:

Quarter on Quarter growth Netflix Q3 2016

The numbers were clearly better than Q2 both domestically and internationally, but not enormously so, especially in the US. Here’s the longer-term picture, which shows year on year growth:

Year on Year Growth Netflix Q3 2016

As you can see, there’s been a real tapering off in the US over the past two years, while internationally it’s flattened following consistent acceleration through the end of last year. To put this year’s numbers so far in context, here’s a different way of presenting the quarterly domestic data:

Cyclical Growth Trends Netflix Q3 2016

That light blue line is the 2016 numbers, and as you can see each of this year’s quarters has been below the last three years’ equivalents, and the last two quarters have been well below. Arguably, Q3 was even further off the pace than Q2, so for all the celebration of a return to slightly stronger growth, this isn’t necessarily such a positive trend when looked at this way.

The cost of growth

Perhaps more importantly, this growth is becoming increasingly expensive in terms of marketing. I’ve mentioned previously that, as Netflix approaches saturation in the US, it will need to work harder and spend more to achieve growth, and we’re still seeing that play out. If the objective of marketing is growth, then one way of thinking about marketing spending is how much growth it achieves.

Ideally, we’d measure this by establishing a cost per gross subscriber addition – i.e. the marketing spend divided by the number of new subscribers enticed to the service as a result of it. However, since Netflix stopped reporting gross adds in 2012, we have to go with the next best thing, which is marketing spend per net subscriber addition, which is shown in the chart below:

Marketing costs per net subscriber addition, Q3 2016

As you can see, there was a massive spike in Q2 due to the anemic domestic growth numbers, but even in Q3 the figure is around three times what it had been in the recent past. Yes, Netflix returned to healthier growth in Q3, but it had to spend a lot to get there. Even the international line, somewhat dwarfed by US spending over the last two quarters, has seen an increase. In its shareholder letter, Netflix wrote this off as increased marketing for new originals, but the reality is that the marketing was still necessary to drive the subscriber growth it saw in the quarter, which in turn was lower than it has been.
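
For reference, this is all the metric in the chart above amounts to – a minimal sketch with hypothetical figures rather than Netflix’s actual spend and subscriber numbers.

```python
# Marketing spend per net subscriber addition for a quarter. Note this is a proxy:
# the ideal measure would use gross additions, but Netflix stopped reporting those
# in 2012, so net adds (gross adds minus churn) are the best available denominator.

def marketing_cost_per_net_add(marketing_spend_usd, net_additions):
    if net_additions <= 0:
        raise ValueError("metric is undefined for flat or negative growth")
    return marketing_spend_usd / net_additions

# Hypothetical quarter: $120M of domestic marketing spend, 370k net additions
print(f"${marketing_cost_per_net_add(120_000_000, 370_000):,.0f} per net add")
```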

The price increase worked – kind of

Of course, one big reason for the slower growth these last two quarters is the price increase Netflix has been introducing in a graduated fashion – or, in its own characterization, the “un-grandfathering” of the portion of the base that had been kept on older pricing for longer than new subscribers. As I wrote in this column for Variety, the price increase was really about keeping the margin growth going in the domestic business as Netflix invests more heavily in content, and I predicted that it would pay off in the long term.

Here’s what’s happened to the average revenue per paying customer as that price increase has kicked in:

Revenue per subscriber Netflix Q3 2016

There’s an enormous spike domestically in Q3, whereas internationally the increase kicked in a little earlier, despite the fact that it only affected certain markets. Overall, though, the price increase has driven average revenue per subscriber quite a bit higher – around $2 so far – so it’s arguably worked. Of course, it’s come at the cost of increased churn and perhaps slower customer additions, and the longer term effects of that will take a while to play out. We’ll need to watch the Q4 results to see whether growth starts to recover, or whether the results we’ve seen over the last two quarters are a sort of “new normal” we should expect to see more of going forward.

Meanwhile, domestic margins continue to tick up in a very predictable fashion:

Netflix Q3 2016 Domestic margins

The key at this point, though, is to marry this increasing domestic profitability with breakeven, and then growing profitability, overseas – something Netflix has been predicting will happen next year. As of right now, the international business as a whole is still unprofitable, but several individual countries outside the US are already profitable for Netflix, so it has a roadmap for other markets as they grow and hit scale milestones as well. What investors buying the stock today are really betting on is that this scenario plays out as Netflix expects it to, but it’s arguably still too early to tell whether it will.

 

Google’s Schizophrenic Pixel Positioning

This is my second post about Google’s event this week, and there will likely be more. The first tackled Google’s big strategy shift: moving from a strategy of gaining the broadest possible distribution for its services to preferring its own hardware in a narrower rollout. Today, I’m going to focus on the Pixel phones.

Positioning Pixel as a peer to the iPhone…

The Pixel phones are the most interesting and risky piece of this week’s announcements, because they go head to head against Google’s most important partners. One of my big questions ahead of time was how Google would address this tension, and in the end it simply didn’t, at least not during the event. The way it addressed it indirectly was to aim its presentation and the phones at the iPhone instead of at other Android phones. There were quite a few references to the iPhone during the event, and they’re worth pulling out:

  • A presenter said as an aside, “no unsightly camera bump” when describing the back of the Pixel phones
  • The unlimited photo and video storage was positioned against the iPhone, explicitly so when an image of iOS’s “Storage Full” error message was shown on screen (as it was in a recent Google Photos ad campaign)
  • The colors of the Pixel phones have names which appear to mock Apple’s color names
  • The pricing of the Pixel phones is identical to the pricing for the iPhone 7, right down to the first-time $20 increase to $769 for the iPhone 7 Plus from the earlier $749 price point for the larger phones, despite the fact that the larger Pixel has no additional components
  • A reference to the 3.5mm headphone jack in the Pixel commercial.

Google is attempting to position the Pixel as a true peer to the iPhone, unlike Nexus devices, which have usually been priced at a discount with feature disparities (notably in the cameras) to match. The pricing is easily the most telling element here, because there’s literally no other reason to match the pricing so precisely, and Google could arguably have benefited from undercutting the iPhone on price instead. Rather, Google wants us to see the Pixel as playing on a level playing field with the iPhone. This is very much a premium device, something that Chrome and Android exec Hiroshi Lockheimer explicitly addressed in an interview with Bloomberg published this week:

Premium is a very important category. Having a healthy premium device ecosystem is an important element in an overall healthy ecosystem. For app developers and others. It’s where certain OEMs have been successful, like Samsung. It’s where Apple is also very strong. Is there room for another player there? We think so. Do we think it’s an important aspect of Android? Yeah, absolutely.

What’s most interesting to me is the question and answer near the end there: “Is there room for another player there? We think so.” Given that the premium smartphone market is basically saturated at this point, that’s an interesting statement to make. Unlike, say, in the low end of the smartphone market, where there’s still quite a bit of growth, the only sense in which there’s “room” for another player at the premium end is by squeezing someone else out. Google clearly wants that to be Apple, but it’s arguably more likely to be Samsung if it’s anyone.

We’ve seen from long experience that switching from iOS to Android is much rarer than the other way, and so Google is far more likely to take share from Samsung than Apple, even with its overt focus on competing against the iPhone. In addition, this is fundamentally an Android phone with a few customizations, and will be seen as such, and therefore in competition with other Android devices, rather than the iPhone, for all Google’s focus on the iPhone in its messaging.

…while also mocking the iPhone (and iPhone owners)

But perhaps the biggest misfire here is the schizoid positioning versus the iPhone. On the one hand, the Pixel borrows very heavily from the iPhone – the look, especially from the front; the two sizes; the pricing; the focus on the camera; the integrated approach to hardware and software (of which more below); and so on. And yet at the same time Google seems determined to mock the iPhone, as evident in the color naming and in other ways throughout the presentation. If you want to go head to head against the iPhone, you do it in one of two ways: you show how you’re different (as Samsung has arguably done successfully), or you show how you’re the same but better. You don’t do it by aping lots of features and then mocking the very thing you’re aping (and, by implication, its customers – the very customers you’re going after).

True integration, or just a smokescreen?

The other major element of this strategy, of course, is that Google is now capitulating to the Apple strategy of many years and more recently Microsoft’s Surface strategy: the company that makes the best hardware is the company that makes the OS. Again, the approach is best encapsulated in an interview, this time with Rick Osterloh, head of Google’s new consolidated hardware division:

Fundamentally, we believe that a lot of the innovation that we want to do now ends up requiring controlling the end-to-end user experience.

What’s odd is that there seems to be relatively little evidence of this approach in what was announced on Tuesday. Is there really anything in the Pixel phones that couldn’t have been achieved by another OEM working at arm’s length from Google? One of the biggest benefits of taking this integrated approach is deep ties between the OS and the hardware, but from that perspective, Google isn’t actually allowing its Android division to get any closer to its own hardware team than to other OEMs. It’s only in integration with other Google services (outside of Android) that the Pixel team got special access, and even then only because they’re the only ones who have asked.

All of this undermines Google’s argument that the Pixel is somehow in a different category because it’s “made by Google” (even leaving aside the fact-checking on that particular claim from a hardware perspective). This phone could easily have been made by an OEM with the same motivations – the big difference is that no OEM has precisely those motivations, not that the Pixel team was somehow given special access.

In fact, this gets at the heart of one of the main drivers behind the Pixel – Google reasserting control over Android and putting Google services front and center again. I’ve written about this previously in the context of Google’s attempts to do this through software, as exemplified by its I/O 2014 announcements. But those efforts largely failed to reclaim both control over Android and a more prominent role for Google services on Android phones. As a result, Google’s relationship with Android releases has continued to be analogous to that of a parent sending a child off to college – both have done all they can to set their creation on the right path, but have little control over what happens next.

If, though, this is the real motivation behind Pixel (and I strongly suspect it is), then all this stuff about targeting the iPhone and tightly integrated hardware and software is really something of a smokescreen. I would bet Google’s OEM partners can see that pretty clearly too, and for all Google executives’ reassurances that the OEMs are fine with it, I very much doubt it.

Google’s Big Strategy Shift

There’s so much to say about today’s Google hardware event, and it’s tempting to pour it all into this one post. Instead, though, I’m going to be focused here and probably write several separate posts on announcements from today during the rest of this week. It’ll also be the main topic of conversation on the Beyond Devices Podcast this week, so be sure to check that out later in the week.

My focus here is what I’m terming Google’s big strategy shift, but it may not be the shift you’re thinking of. Yes, it’s notable that Google is making its own hardware, but it’s been doing that for years. The big shift therefore isn’t so much that Google is making its own hardware, as that it’s preferring that hardware when it comes to Google services, notably the Google Assistant.

Previously, Google services have been launched on either the web or through the major app stores, typically Play and iOS simultaneously or one shortly after the other. But with the new devices announced today, Google appears to be using the Google Assistant as a way to advantage its own hardware rather than going broad. That’s a massive strategic shift, and has much broader implications than simply making a phone, a speaker, or a WiFi router.

Think about what this means: Google is choosing to favor a few million hardware sales over usage of these services by billions of people, at least in the short term. Its old approach was to pursue the broadest possible distribution for its services by making them available in almost all the places people might expect to find them. But its new approach is much more reminiscent of Apple’s, which of course is designed to differentiate hardware and not drive maximum usage.

Why, then, would Google do this? The most obvious reason is that Google couldn’t find enough other ways to make its new devices stand out in the market, and so chose to use the Assistant as a differentiator. That’s understandable, but it’s a pretty significant strategic sacrifice to make. Another possible explanation is that it didn’t want to overload the Google Assistant with too many users at once, and so it’s put it in places where usage will be limited at first – Allo (currently 75 in the Play store and 691 in the App Store), Google Home, and the Pixel phones. That’s a bit odd given how broadly used all the services behind the Google Assistant already are – it’s not like Google can’t handle the server load – but it might make sense to work out some kinks before making the Assistant more broadly available.

The next question then becomes how soon the Google Assistant becomes available elsewhere – on the web, as part of Android, or as an iOS app. The sooner it becomes available, the more easily Google will achieve its usual goal of broad distribution, but the more quickly it erodes one of the big differentiators of Pixel. The longer it holds it back, the less relevant it becomes (and the harder it becomes to tell Google’s AI story), but the longer Pixel stands out in the market. I’d argue that how Google answers this question will be one of the strongest indicators we’ll have of how it really feels about its big increase in hardware investment.