Cord Cutting in Q3 2016

I do a piece most quarters after the major cable, satellite, and telecoms operators have reported their TV subscriber numbers, providing an update on what is at this point a very clear cord-cutting trend. Here is this quarter’s update.

As a brief reminder, the correct way to look at cord cutting is to focus on three things:

  • Year on year subscriber growth, to eliminate the cyclical factors in the market
  • A totality of providers of different kinds – i.e. cable, satellite, and telco – not any one or two groups
  • A totality of providers of different sizes, because smaller providers are doing worse than larger ones
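As a rough illustration, the year-on-year comparison described above can be sketched in a few lines of Python. All of the subscriber figures below are invented for demonstration purposes – they are not the actual numbers I track:

```python
# Illustrative sketch of the year-on-year net-adds methodology.
# Subscriber counts are in thousands and entirely made up.
quarterly_subs = {
    # provider type -> subscribers for the same quarter a year apart
    "BigCable":   {"Q3 2015": 22_300, "Q3 2016": 22_350},
    "SmallCable": {"Q3 2015": 1_200,  "Q3 2016": 1_150},
    "Satellite":  {"Q3 2015": 20_100, "Q3 2016": 19_900},
    "Telco":      {"Q3 2015": 10_400, "Q3 2016": 10_050},
}

# Comparing the same quarter a year apart strips out seasonality, and
# summing across all provider types and sizes avoids cherry-picking
# any one group.
yoy_net_adds = sum(
    subs["Q3 2016"] - subs["Q3 2015"] for subs in quarterly_subs.values()
)
print(yoy_net_adds)  # a negative total indicates industry-wide cord cutting
```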

Here, then, on that basis, are this quarter’s numbers. First, here’s the view of year on year pay TV subscriber changes – as reported – for the seventeen players I track:

year-on-year-net-adds-all-public-players

As you can see, there’s a very clear trend here – with one exception in Q4 2015, each quarter’s year on year decline has been worse than the previous one since Q2 2014. That’s over two years now of worsening declines. As I’ve done in previous quarters, I’m also providing a view below of what the trend looks like if you extract my estimate for DISH’s Sling subscribers, which are not classic pay TV subs but are included in its pay TV subscriber reporting:

year-on-year-net-adds-minus-sling

On that basis, the trend is that much worse – hitting around 1.5 million lost subscribers year on year in Q3 2016.

It’s also worth noting that once again these trends differ greatly by type and size of player. The chart below shows net adds by player type:

net-adds-by-player-type

The trend here has been apparent for some time – telco subs have taken a complete nosedive since Verizon ceased expanding Fios meaningfully and since AT&T shifted all its focus to DirecTV following the announcement of the merger. Indeed, that shift in focus is extremely transparent when you look at U-verse and DirecTV subs separately:

att-directv-subs-growth

The two combined are still negative year on year, but turned a corner three quarters ago and are steadily approaching year on year parity, though not yet growth:

att-combined-subs

Cable, on the other hand, has been recovering somewhat, likely benefiting from the reduced focus by Verizon and AT&T on the space with their telco offerings. The cable operators I track collectively lost only 81k subscribers year on year, compared with well over a million subscribers annually throughout 2013 and 2014. Once again, that cable line masks differences between the larger and smaller operators, which saw distinct trends:

cable-by-size

The larger cable operators have been faring better, with positive net adds collectively for the last two quarters, while smaller cable operators like Cable ONE, Mediacom, Suddenlink, and WideOpenWest collectively saw declines, which have been fairly consistent for some time now.

The improvement in the satellite line, meanwhile, is entirely due to the much healthier net adds at DirecTV, offset somewhat by DISH’s accelerating declines. Those declines would, of course, be significantly worse if we again stripped out Sling subscriber growth, which is likely at around 600-700k annually, compared with a loss of a little over 400k subs reported by DISH in total.
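The back-of-the-envelope arithmetic behind that Sling adjustment looks like this – note that the Sling figure is my own rough estimate, and all values are approximate:

```python
# Rough arithmetic behind the Sling adjustment (thousands of
# subscribers, year on year; figures approximate).
dish_reported_net_adds = -400   # DISH's reported total loss
sling_net_adds_estimate = 650   # midpoint of my 600-700k Sling estimate

# Stripping estimated Sling growth out of DISH's reported number
# reveals the decline in its traditional satellite TV base.
dish_traditional_net_adds = dish_reported_net_adds - sling_net_adds_estimate
print(dish_traditional_net_adds)  # i.e. over a million traditional subs lost
```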

A quick word on Nielsen and ESPN

Before I close, just a quick word on the Nielsen-ESPN situation that’s emerged in the last few weeks. Nielsen reported an unusually dramatic drop in subscribers for ESPN in the month of October, ESPN pushed back, Nielsen temporarily pulled the numbers while it double-checked the figures, and then announced it was standing by them. The total subscriber loss at ESPN was 621,000, and although ESPN’s number got all the attention, other major networks like CNN and Fox News lost almost as many.

In the context of the analysis above, 500-600k subs gone in a single month seems vastly disproportionate to the overall trend, which is at around 1-1.5 million per year depending on how you break down the numbers. Additionally, Q4 is traditionally one of the stronger quarters – the players I track combined actually had positive net adds in the last three fourth quarters, and I suspect for every fourth quarter before that too. That’s what makes this loss so unexpected, and why the various networks have pushed back.

However, cord cutting isn’t the only driver of subscriber losses – cord shaving is the other major driver, and that makes for a more feasible explanation here. Several major TV providers now have skinny bundles or basic packages which exclude one or more of the major networks that saw big losses. So some of the losses could have come from subscribers moving to these bundles, or switching from a big traditional package at one operator to a skinnier one elsewhere.

And of course the third possible explanation is a shift from traditional pay TV to one of the new online providers like Sling TV or Sony Vue. Nielsen’s numbers don’t capture these subscribers, and so a bigger than usual shift in that direction would cause a loss in subs for those networks even if they were part of the new packages the subscribers moved to on the digital side. The reality, of course, is that many of these digital packages are also considerably skinnier than those offered by the old school pay TV providers – DirecTV Now, which is due to launch shortly, has 100 channels, compared with 145+ on DirecTV’s base satellite package, for example.

This is the new reality for TV networks – cord cutting at around 1.5 million subscribers per year, combined with cord shaving that will remove some of their networks from some subscribers’ packages, is going to lead to a massive decline in subscribership over the coming years. Significant and accelerating declines in subscribers are also in store for the pay TV providers, unless they participate in the digital alternatives as DISH and AT&T/DirecTV already do.

The US Wireless Market in Q3 2016

One of the markets I follow most closely is the US wireless market. Every quarter, I collect dozens of metrics for the five largest operators, churn out well over a hundred charts, and provide analysis and insight to my clients on this topic. Today, I’m going to share just a few highlights from my US wireless deck, which is available on a standalone basis or as part of the Jackdaw Research Quarterly Decks Service, along with some additional analysis. If you’d like more information about any of this, please visit the Jackdaw Research website or contact me directly.

Postpaid phones – little growth, with T-Mobile gobbling up most of it

The mainstay of the US wireless industry has always been postpaid phones, and it continues to account for over half the connections and far more than half the revenues and profits. But at this stage, there’s relatively little growth left in the market – the four main carriers added fewer than two million new postpaid phone customers in the past year, a rate that has been slowing fairly steadily:

postpaid-phone-net-adds-for-big-4

This was always inevitable as phone penetration began to reach saturation, and as the portion of the US population with good credit became particularly saturated. But that reality means that future growth either can’t come from postpaid phones, or has to come through market share gains almost exclusively.

In that context, then, T-Mobile has very successfully pursued the latter strategy, winning a disproportionate share of phone customers from its major competitors over the last several years. The chart below shows postpaid phone net adds by carrier:

postpaid-phone-net-adds-by-carrier

As you can see, T-Mobile is way out in front for every quarter but Q2 2014, when AT&T preemptively moved many of its customers onto new cheaper pricing plans. AT&T has been negative for much of the last two years at this point, while Sprint has finally returned to growth during the same period, and Verizon has seen lower adds than historically. What’s striking is that T-Mobile and Sprint have achieved their relatively strong performances in quite different ways. Whereas Sprint’s improved performance over the past two years has been almost entirely about reducing churn – holding onto its existing customers better – T-Mobile has combined reduced churn with dramatically better customer acquisition.

The carriers don’t report postpaid phone gross adds directly, but we can derive total postpaid gross adds from net adds and churn, and I find the chart below particularly striking:

gross-adds-as-percent-of-base

What that chart shows is that T-Mobile is adding far more new customers in proportion to its existing base than any of the other carriers. Sprint is somewhat close, but AT&T and Verizon are far behind. But the chart also shows that this source of growth for T-Mobile has slowed down in recent quarters, likely as a direct effect of the slowing growth in the market overall. And that slowing gross adds number has translated into lower postpaid phone net adds over the past couple of years too:

t-mobile-postpaid-phone-net-adds-by-quarter

That’s a bit of an unconventional chart, but it shows T-Mobile’s postpaid phone net adds on an annual basis, so you can see how each year’s numbers compare to previous years’. As you can see, for most of 2015 and 2016, these net adds were down year on year. The exceptions were again around Q2 2014, and then the quarter that’s just ended – Q3 2016, when T-Mobile pipped its Q3 2015 number ever so slightly. The reason? Likely the launch of T-Mobile One, which I wrote about previously. The big question is whether T-Mobile will return to the declining pattern we saw before once the short-term effects of the T-Mobile One launch wear off.

Smartphone sales – slowing on postpaid, holding up in prepaid

All of this naturally has a knock-on effect on sales of smartphones, along with the adoption of the new installment plans and leasing, which are breaking the traditional two-year upgrade cycle. Growth in the postpaid smartphone base has also slowed dramatically over the last couple of years:

year-on-year-growth-in-postpaid-smartphone-base

But the other thing that’s been happening is that upgrade rates have been slowing down significantly too. From a carrier reporting perspective, the number that matters here is the percentage of postpaid devices being upgraded in the quarter. This number has declined quite a bit in the last couple of years too, across all the carriers, as shown in the cluster of charts below:

postpaid-device-upgrade-rate-for-all-4-carriers

The net result of this is fewer smartphones being sold, and the number of postpaid smartphones sold has fallen year on year for each of the last four quarters. Interestingly, the prepaid sales rate is holding up a little better, likely because smartphone penetration is lower in the prepaid market. There were also signs in Q3 that the new iPhones might be driving a slightly stronger upgrade cycle than last year, which could be good for iPhone sales in Q4 if that trend holds up through the first full quarter of sales.

What’s interesting is that the upgrade rates are very different between carriers, and T-Mobile in particular captures far more than its fair share of total sales, while AT&T captures far less than it ought to. The chart below compares the share of the smartphone base across the four major carriers with the share of smartphone sales:

smartphone-base-versus-sales

As you can see, T-Mobile’s share of sales is far higher than its share of the base, while AT&T’s (and to a lesser extent Verizon’s) is far lower.

Growth beyond phones

So, if postpaid phone growth is slowing, growth has to come from somewhere else, and that’s very much been the case. Tablets had been an important source of growth for some of the carriers for a few years, but that aggressive pursuit has begun to cost Sprint and Verizon dearly. Both carriers ran promotions on low-cost tablets two years ago and are now finding that buyers don’t feel the need to keep the relationship going now that their contracts are up. Both are seeing substantial tablet churn as a result, and overall tablet net adds are down by a huge amount over the past year:

tablet-net-adds

There may be some recovery in tablet growth as Verizon and Sprint work their way through their churn issues, but I suspect this slowing growth is also reflective of broader industry trends for tablets, which appear to be stalling. Still in postpaid, there’s been a little growth in the “other” category, too, but that’s mostly wireless-based home phone services, and it’s not going to drive much growth overall. So, the industry likely needs to look beyond traditional postpaid services entirely.

Prepaid isn’t growing much faster

The next big category for the major operators is prepaid, which has gone through an interesting evolution over the last few years. It began as the option for people who couldn’t qualify for postpaid service because of poor credit scores, and was very much the red-headed stepchild of the US wireless industry, in contrast to many other markets where it came to dominate. But there was a period a few years back where it began to attract customers who could have bought postpaid services but preferred the flexibility of prepaid, especially when prepaid began to achieve feature parity with postpaid. However, that ebbed again as installment plans took off on the postpaid side and made those services more flexible. Now, we’re going through yet another change as a couple of the big carriers use their prepaid brands as fighter brands, going after their competitors’ postpaid customers. The result is that those two carriers are seeing very healthy growth in prepaid, while the other operators are struggling. In the chart below, I’ve added in TracFone, which is the largest prepaid operator in the US, but not a carrier (it uses the other operators’ networks on a wholesale basis):

prepaid-net-adds

As you can see, AT&T (mostly through its Cricket brand) and T-Mobile (mostly through its MetroPCS brand) have risen to the top, even as Sprint has gone rapidly downhill and Verizon and TracFone have mostly bounced around roughly at or below zero. There is some growth here, but it’s all being captured by the two operators, while the others are treading water or slowly going under.

Connected devices – the fastest-growing category

The fastest-growing category in the US wireless market today is what are called connected devices. For the uninitiated, that probably requires something of an explanation, since you might think of all wireless connections as being connected devices. The best way to think about the connected devices category is that these are connections sold for non-traditional things, so not phones and mostly not tablets either, but rather connected cars, smart water meters, fleet tracking, and all kinds of other connections which are more about objects than people. The one exception is the wireless connections that get bundled into some Amazon Kindle devices as part of the single upfront purchase, where the monthly bill goes to Amazon and not the customer.

This category has been growing faster than all the others – the chart below shows net adds for the four major categories we’ve discussed so far across the five largest operators, and you can see that connected devices are well out in front over the past year or so:

comparison-of-net-adds

Growth in this category, in turn, is dominated by two operators – AT&T and Sprint – as shown in the chart below (note that Verizon doesn’t report net adds in this category publicly):

connected-devices-net-adds

At AT&T, many of these net adds are in the connected car space, where it has signed many of the major car manufacturers as customers. The rest of AT&T’s and most of Sprint’s are a mix of enterprise and industrial applications, along with the Kindle business at AT&T. T-Mobile also has a much smaller presence here, and Verizon has a legacy business as the provider of GM’s OnStar services as well as a newer IoT-focused practice.

Though the connection growth here is healthier than the other segments, the revenue per user is much lower, in some cases only single digit dollars a month. However, this part of the market is likely to continue to grow very rapidly in the coming years even as growth in the core postpaid and prepaid markets evaporates, so it’s an important place for the major carriers to invest for future growth.

MacBook Pro with Touch Bar Review

Note: the version of this post on Medium has larger images and other benefits – I recommend you read it there.

On Thursday morning last week, Apple sent me a review unit of the new MacBook Pro with Touch Bar for testing. I’ve been using it almost non-stop since, to try to put it through its paces and evaluate this latest power laptop from Apple. I’ve only had four days with it, and so this is probably best seen as a set of early impressions rather than a thoroughgoing review, but here are my thoughts on using it so far. I’ll cover quite a few bases here, but my main focus will be on addressing two particular issues which I suspect people will have the most questions about: the Touch Bar and the power of this computer to do heavy duty work.

The model I’m using

First off, here’s the model I’m using:

mbp-2016-specs

In short, this is the 15-inch version, with 16GB of RAM, but it’s not the highest-end model. There is a version with a 2.9GHz processor and a Radeon Pro 460 graphics card, which would be a good bit more powerful for some tasks than the machine I’m using, though the RAM on that computer is the same.

I’m coming to this experience from using two main Macs over the past couple of years. When I’m at my desk, I’m typically using a 2010-version Mac Pro with 32GB of RAM, a processor with 12 2.66GHz cores, a massive SSD, and a Radeon GPU. When I’m mobile, I’m using a MacBook Air from a couple of years ago, with 4GB of memory and an Intel graphics card. In most respects, at least on paper, this MBP is a big step up on the MBA, but is less powerful than the Mac Pro, with the exception of the graphics card.

The Touch Bar

So let’s start with the Touch Bar. I had a chance to play around with the Touch Bar a bit at the launch event, and found it intriguing. It was already clear then that this was the kind of feature that could save time and make workflows easier if done right, but that would also come with a learning curve, and my first few days using it more intensively have confirmed both of those perceptions.

An analogy

The best analogy I can think of is learning to touch type. My oldest daughter has recently gone through this process, and I remember going through it when I was about the same age. Before you start learning, you’ve probably got pretty good at the hunt-and-peck method, and may even be quite fast. When you start learning to touch type, a lot of it is about forcing yourself to change your habits, which can be painful. At first, you’re probably slower than before, and the temptation is to go back to doing what you’ve always done, because it feels like you’re going backwards. But over time, as you master the skill, you get faster and faster, and it feels even more natural. You’re also able to stay in the flow much better, watching the screen rather than the keys.

Learning to use the Touch Bar is a lot like that. If you already use a Mac regularly, you likely have pretty well-established workflows, combining mouse or trackpad actions, typing, and keyboard shortcuts. Suddenly, the Touch Bar comes along and gives you new ways of doing some of the things you’ve always done a certain way. A few may replace keyboard shortcuts, but the vast majority will instead be replacements for mouse or trackpad actions. The first step is remembering that these options are now available. The Touch Bar is quite bright enough to see in any lighting conditions, but it’s not intended to be distracting, so although you may be vaguely aware of it in your peripheral vision as you’re looking at the screen, it doesn’t draw your eye. You have to consciously remember to use it, a bit like how you have to consciously remember to use all your fingers when you’re learning to touch type.

At first, your instinct is to just keep doing things the way you’ve always done them. But then you start to realize that the repetitive task you’re doing by moving the mouse cursor away from the object you’re working with to the taskbar or to the Format pane at the side of the window could be accomplished much more easily by just pressing a button in the Touch Bar. You try it and it works great. The next time you do it a little more quickly, and pretty soon it’s a habit. The first couple of times, it may take more time than your old method, because you’re having to break the old habit, but you quickly develop a new, more efficient habit. Your mouse cursor stays by the object you’re working with (or out of the way entirely) and you go on with your work. I’ve been integrating the Touch Bar into some of my workflows over the last few days, and it’s starting to feel natural – I’m getting to the stage where things are faster than they were before.

Below are some samples that show the adaptability of the Touch Bar:

touch-bar-samples-560

This adaptability is one of the strengths of the Touch Bar — the way it morphs not just between apps but based on the context within each app too. The video below shows several examples in quick succession as I move between apps and between contexts within apps. You’ll see how rapidly it changes as I go through these (there’s no sound on the video):

Most of the buttons are either self-explanatory or familiar enough to be intuitive, but I did find a couple of cases where I simply had no idea what a button meant. Since you can’t hover over these buttons in the way you can an on-screen button, there’s really no way to find out either, which can be tricky.

Ultimately, as I’ve written previously, the Touch Bar represents a different philosophical approach to touch on laptops by Apple compared with Microsoft’s all-touch approach to computers. I’ve used a few Windows laptops with touch, and though there have been times when it was useful, it’s often frustrating – the screen tends to bounce away from you when you jab it with your finger and touch targets are often too small. Apple’s approach keeps the horizontal and vertical planes separate – the vertical plane on a MacBook is purely a display, while the horizontal plane is the one you interact with. This is easier on your hands and arms, and allows you to work more quickly because everything is within easy reach. The trackpads on Apple’s laptops have brought some of the benefits of touch to laptops over the last few years, and the Touch Bar takes this a step further.

Third party support

For now, the Touch Bar is only available in first-party applications on the Mac, and most of Apple’s own apps now support it. However, if you’re a typical Mac user it’s quite likely that you spend a fair amount of time in third-party apps, and that’s certainly the case with me. I spend a lot of my time on the Mac in Tweetbot and Evernote, for example, neither of which support the Touch Bar yet, except for auto-correction when typing, which is universal.

Apple demoed some third party apps with Touch Bar integration at its launch event, and below is a table of those apps whose developers have committed to supporting it so far:

touch-bar-support-560

For now, users will be able to take advantage of Touch Bar inside the Apple apps and a handful of others, and that will mean adapting some workflows but not others. The experience here is going to be like the early days of 3D Touch support on the iPhone – it will be nice to have for the apps where it’s available, but there will be a lot of apps where it doesn’t work yet. In some cases, that’s going to push users towards apps that do support the feature, as was the case with 3D Touch. And since support is relatively easy to build, I would guess many developers will get on board quickly once the laptops are out.

Touch ID

Since the Touch ID sensor is part of the Touch Bar strip, it’s worth mentioning that briefly too. For anyone who’s used Touch ID on an iPhone or iPad, the value proposition will be fairly obvious – this is a great way to unlock your device without using a password. To be sure, people probably unlock their laptops many fewer times per day than they do their phones, but it’s still a handy time-saver. I’ve had Apple Watch unlock set up on my MacBook Air for a few weeks, and found that useful, but didn’t feel the need to set it up on this MacBook Pro because Touch ID is actually faster.

But Touch ID goes beyond just unlocking — it can also be used for various other functions where you’d normally enter your system password, including certain app installations and system changes. When it’s available, an indicator shows up in the Touch Bar strip pointing to the sensor, which is handy, because it can’t always be used in place of a password.

Siri

It’s also worth discussing the Siri button that’s part of the Touch Bar too. I’ve been using Sierra on my existing Macs for a couple of months now, but haven’t made much use of Siri, in part because I can never remember which hot key I’ve set to invoke it, and clicking on the on-screen Siri button in the taskbar is too much trouble. Having a dedicated Siri button is definitely making me use Siri more.

Power and performance

On, then, to power and performance. I gave you the specs for the machine I’m testing earlier – it’s not the top of the line model, but given some of the commentary from the professional community and those claiming to speak on their behalf over the last couple of weeks, I wanted to put this side of the MacBook Pro to the test.

Testing

I’m not a regular user of heavy-duty creative apps, but I have used Final Cut Pro fairly extensively in the past, and have an Adobe Creative Cloud subscription which gives me access to other apps like Photoshop, Lightroom, Premiere, and Illustrator, some of which I use occasionally. As a first test, I imported some 4K video shot on my iPhone into the new version of Final Cut Pro and edited it. I checked all the boxes for analysis in the importing process, but it still completed quickly and without slowing down the computer. Both Final Cut and the other apps I had open continued to perform smoothly during the analysis and background tasks. The editing was smooth, and I got to use the new Touch Bar buttons at several points, adding in titles, transitions, and other elements, and then exported the file. Everything was quick and smooth, and the experience was very comparable to what I’m used to on my Mac Pro, which is where I’ve mostly used FCP in the past.

Next, I decided to push things a little harder and shot a longer 4K video while riding my bike. The bike was bumping around all over the place while recording, and as a result there was lots of movement and also rolling shutter issues in the video. I imported this video into Adobe Premiere, and then used the Warp Stabilizer effect to try to smooth out some of those issues. This task took quite a bit longer, but again the computer continued to function just fine while the task was underway, even when I simultaneously opened up Lightroom and imported several hundred RAW images from my DSLR. The fans did spin up during the Premiere background tasks, but I’ve noticed they’re quite a bit quieter on this new MacBook than on past MacBooks I’ve used, which I’d guess is due to the new fan design.

There is no doubt in my mind that this MacBook Pro is perfectly capable of handling heavy duty professional creative work. That’s not to say that a computer with more cores, more RAM, or an upgraded graphics card couldn’t do some of these tasks faster, but many creative professionals will have a stationary machine like a Mac Pro, an iMac, or something else back at their desk and will use the MBP when they’re on the go.

Input from creative professionals

As I mentioned, I’m not a creative professional, but I happen to have married into a family of them, so I checked in with three of my brothers-in-law who work as video professionals (two as editors and one as a producer). I asked them several questions about the hardware and software they use, their workflows, and the attitudes towards these things in their workplaces. Both editors are currently using 5K iMacs with 32GB of RAM, and mostly use Adobe Premiere or Avid for editing (Final Cut Pro has fallen out of favor with the pro video editing crowd since the FCP X release, though at least one of them said he expected the latest update to win some former users back to the Apple side). This MacBook Pro, which maxes out at 16GB, wouldn’t match the performance of one of those 5K iMacs, but it could well be the kind of machine they’d take with them if they were editing or reviewing footage on set. And with the ability to drive two 5K monitors, they could even finish the job back at the office on the same computer. It wouldn’t perhaps be as fast at some background tasks as an iMac or Mac Pro, but it would allow them to do the job just fine, and I think that’s the proper way to see this computer.

Portability

That brings me to the next thing that’s worth talking about, which is portability. The new 13″ MacBook Pro is being positioned as a successor of sorts to the 13″ MacBook Air — it has a similar footprint and weighs about the same, yet is far more powerful. This 15″ MacBook Pro, of course, is larger (and potentially even more powerful), and so obviously not to be seen as a direct replacement for the Air. But as that’s the transition that I’m making personally, it makes sense to make that comparison at least briefly. The MBP is clearly heavier and larger than the MBA, though not by as much as you might think. It weighs a pound more — 4 pounds versus 3 — but the footprint is very similar, and it’s actually thinner than the MBA at its thickest point. And of course it has four times the pixels on the screen. The images below should give you some sense of the size comparison:

unadjustednonraw_thumb_ac unadjustednonraw_thumb_ae unadjustednonraw_thumb_b1 ycz5pp1aq9kej4h6s1bmka_thumb_93

The true comparison, of course, is to the earlier 15″ MacBook Pro, which is roughly half a pound heavier and slightly thicker. I actually have an older 15″ MacBook Pro around as well, from about five or six years ago, and this thing is night and day from a size and weight perspective. Long story short, this is a very portable laptop, less so certainly than the 13″ one, but more so than any other 15″ Apple has ever made, and likely more so than most other 15″ laptops on the market today. And yet it has the power I talked about earlier.

Keyboard, Screen, and Audio

Three other hardware features are worth discussing at least briefly here.

Firstly, the keyboard. This keyboard takes the same approach as the one on the 12″ MacBook, but it’s a new version with a different dome switch that gives the keys a springier feel. I haven’t used the MacBook keyboard extensively, but this keyboard has been totally fine for me. I adjusted to it almost immediately. I have noticed that typing on it is a little noisy – I think because I’m striking the keys with the same force I used on laptops with more key travel – and as I adjust, the typing is getting quieter.

The screen on this thing is beautiful. Apple now has P3 color on its newest iPhones, iPads, and MacBook Pros, and it’s a really nice improvement. I took some pictures of the Pro next to the Air to try to capture this, but it’s hard to get right in a photograph. However, looking at them side by side, there is both deeper color and a noticeably brighter screen on the Pro. And of course it’s a Retina display, so everything looks much sharper as well. The combination of the Retina resolution, brightness, and color gamut makes it really nice for watching videos. I spent some time over the weekend watching a variety of video on it, and it was one of the nicest displays I’ve ever used for this.

Lastly, the sound. The new MacBook Pro has different speakers, and they’re quite a bit louder than on the MacBook Air. In my office, I have a stereo hooked up to an AirPort Express for AirPlay and play all my music that way, but the new Pro will do fine even on its own for sound volume and quality. I tested with a random iTunes track, as you can hear in the audio clip below. I recorded using an iPhone placed between the two laptops.

The sound quality is noticeably louder and fuller on the MacBook Pro, as I hope you can hear in that sample. Again, this makes it perfect for watching movies in your spare time, as well as for listening to music.

Ports and adapters

Another thing I’ve seen some concern about with this new MacBook is the ports, all four of which are Thunderbolt 3 / USB-C. That’s a new port for me – I’ve never owned a computer with a USB-C port, though two of the smartphones I’ve tested recently (the Google Pixel and LeEco Pro3) have USB-C charging. As a result, I was interested to see how I’d get by with my existing peripherals.

I made a trip to the Apple Store and picked up a few adapters:

  • Two USB-A to USB-C adapters for my USB peripherals
  • A Thunderbolt 2 to Thunderbolt 3 adapter for my Thunderbolt display
  • A USB-C Digital AV Multiport Adapter for another display that uses HDMI.

Of course, all these adapters are discounted until the end of the year, which was nice because the cost adds up fast. All of them worked fine, and I’ve appreciated being able to plug any of these various peripherals into either side. It’s particularly nice to be able to shift power from side to side based on where the nearest outlet is.

This is a classic Apple situation – removing ports before the world has necessarily moved on, in part as an attempt to move people along. But in this case Apple is particularly far ahead of the market, and these adapters are a concession to that reality. Some people will already have USB-C or Thunderbolt 3 peripherals such as hard drives, and these will become increasingly common over the next few years. Along with the adapters, Apple sells a variety of LaCie, G-Tech, and SanDisk storage devices and the LG displays, which support USB-C natively.

But for now, we’re going to be using adapters when we use a number of existing peripherals. I already have a pocket full of adapters in my work bag for my MacBook Air, for presenting, using Ethernet cables, and so on, so I’m used to this situation. And as I pointed out on Twitter recently, even if you buy all the adapters Apple recommends as you go through the buying process for a new MacBook Pro, the cost is a tiny fraction of the total (and of course less than full price between now and December 31). I will say that it feels a bit odd with a brand new iPhone and a brand new computer not to be able to plug one into the other out of the box, though I suspect many users no longer plug their iPhones into their computers at all.

Design

This is the first MacBook Pro to be available in Space Gray, and it’s a nice new option (this is the one Apple sent me, and in person it looks darker than in most of the pictures in this post). It’s sleek looking, and smudges and scratches will show up a lot less on this surface than on the bright silver surface of earlier MacBooks. It’s a good looking computer overall too, regardless of the finish. The display takes up much of the vertical plane, with fairly small bezels (one of the ways Apple was able to shrink the footprint), while the horizontal plane looks really good with the addition of the Touch Bar and a larger trackpad.


I’ve found that trackpad to be totally fine, by the way – even though it’s consistently under the heels of my hands, I’ve never once accidentally moved the cursor or clicked on anything while typing. I will say that I use the bottom right corner for right-clicking, and that’s now a long way from the center of the trackpad, which has resulted in some failed right-clicks when I haven’t moved my fingers far enough. If you tend to use Control-click instead, this obviously won’t be an issue. I have also noticed that when the laptop is resting on my lap rather than on a table, the angle of my hand sometimes produces an accidental right-click when I’m trying to click in the center, because the heel of my hand is resting on the bottom right corner. The trackpad now extends very close to the near edge of the computer, so the heel of your hand can easily stray onto it.

Miscellaneous glitches

I did have one or two glitches here and there. For the first day and a half I was using the MacBook Pro, it would lose WiFi connectivity when it went to sleep, and fail to reconnect. After a restart, this issue seemed to resolve. Secondly, while I left Adobe Premiere processing video and stepped away for a few minutes, the computer went to sleep, and when I woke it, the whole computer did a hard crash, restarting out of the blue. Lastly, I had an occasion when the computer hung to the extent that I had to restart it.

I’m not used to having these issues regularly on Macs, though I’ve experienced each of them on occasion in the past. It was odd to have these happen in quick succession, and I’m not sure what to ascribe that to – Apple says it hasn’t seen these issues itself in testing. I will say that none of these issues has happened twice, but I’ll be watching for more of this stuff to see if these were just flukes.

Conclusions

This is a really solid new laptop from Apple. I wrote after the launch event that Apple now has the most logical lineup of laptops it’s had in a long time, with a clear progression in terms of power, portability, and price. Even within the new MacBook Pro range, there are size, power, and feature options. But all of these are intended to be pro computers.

That’s not to say they’re all intended to be the only computer someone who uses heavy-duty creative apps needs – the Mac Pro and iMac are there at least in part to meet those needs. But these are computers that the vast majority of people who use a Mac for work would be fine to use as their only machine – that’s certainly the case for me. This 15″ version I’ve been testing is slightly less portable than the 13″ version, but can be significantly more powerful, and could handle pretty much any video or photo editing task you’d want to throw at it. Yes, there are desktops including Apple’s that could perform some of those tasks more quickly, but this laptop is intended for someone who needs portability too, and that’s the point here. Every computing device involves compromises – here, portability has been prioritized over raw power, but not in such a way that makes this computer useless for powerful tasks.

All that would be true even if the Touch Bar didn’t exist, and yet it does. It’s a really nice addition to what’s already a great computer, and once you get some way along the learning curve it really speeds up tasks and makes life easier on your hands. As third party developers embrace it, it’ll be even more universally useful, and I wouldn’t be surprised if we see some developers using the Touch Bar in really innovative ways within their apps. Can you live without it? Absolutely – all of us have until now. But it’s a great addition if you’re in the market for a new laptop.

Facebook, Ad Load, and Revenue Growth

Note: this blog is published by Jan Dawson, Founder and Chief Analyst at Jackdaw Research. Jackdaw Research provides research, analysis, and consulting on the consumer technology market, and works with some of the largest consumer technology companies in the world. We offer data sets on the US wireless and pay TV markets, analysis of major players in the industry, and custom consulting work ranging from hour-long phone calls to weeks-long projects. For more on Jackdaw Research and its services, please visit our website. If you want to contact me directly, you’ll find various ways to do so here.

Facebook and ad load have been in the news a bit the past few days, since CFO David Wehner said on Facebook’s earnings call that ad load would be a less significant driver of revenue growth going forward. I was listening to the call and watching the share price, and it was resolutely flat after hours until the moment he made those remarks, and then it dropped several percent. So it’s worth unpacking the statement and the actual impact ad load has as a driver of ad growth a bit.

A changing story on ad loads

First, let’s put the comments on ad load in perspective a bit. It’s worth looking at what’s been said about ad loads on earlier earnings calls to see how those comments compare. Here’s some commentary from the Q4 2015 call:

So, ad load is definitely up significantly from where we were a couple of years ago. And as I mentioned, it’s one of the factors driving an increasing inventory. Really one thing to kind of think about here is that improving the quality and the relevance of the ads has enabled us to show more of them and without harming the experience, and our focus really remains on the experience. So, we’ll continue to monitor engagement and sentiment very carefully. I mentioned that we expect the factors that drove the performance in 2015 to continue to drive the performance in 2016. So, I think that’s the color I can give on ad loads.

Here’s commentary from a quarter later, on the Q1 2016 call:

So on ad load, it’s definitely up from where we were couple of years ago. I think it’s really worth emphasizing that what has enabled us to do that is just improving the quality and the relevance of the ads that we have, and that’s enabled us to show more of them without harming the user experience at all. So that’s been really key. Over time, we would expect that ad load growth will be a less significant factor driving overall revenue growth, but we remain confident that we’ve got opportunities to continue to grow supply through the continued growth in people and engagement on Facebook as well as on our other apps such as Instagram.

Some of that is almost a carbon copy of the Q4 commentary, but note the second half of the paragraph, where Wehner goes from saying 2016 would be like 2015 to saying that over time ad load would be a less significant driver. This is something of a turning point. Now, here’s Q2’s commentary:

Additionally, we anticipate ad load on Facebook will continue to grow modestly over the next 12 months, and then will be a less significant factor driving revenue growth after mid-2017. Since ad load has been one of the important factors in our recent strong period of revenue growth, we expect the rate at which we are able to grow revenue will be impacted accordingly

These remarks turn “over time” into the more specific “after mid-2017”. Now here’s the Q3 commentary that caused the stock drop:

I also wanted to provide some brief comments on 2017. First on revenue, as I mentioned last quarter, we continue to expect that ad load will play a less significant factor driving revenue growth after mid-2017. Over the past few years, we have averaged about 50% revenue growth in advertising. Ad load has been one of the three primary factors fueling that growth. With a much smaller contribution from this important factor going forward, we expect to see ad revenue growth rates come down meaningfully….

Again, it feels like there’s an evolution here, even though Wehner starts out by saying he’s repeating what he said last quarter. What’s different now is the replacement of “less significant factor driving revenue” with “much smaller contribution from this important factor”, and “the rate at which we are able to grow revenue will be impacted accordingly” to “ad revenue growth rates come down meaningfully“. Those changes are both a matter of degree, and they feel like they’re intended to suggest a stronger reduction in growth rates going forward.

Drivers of growth

However, as Wehner has consistently reminded analysts on earnings calls, ad load is only one of several drivers of growth for Facebook’s ad revenue. The formula for ad revenue at Facebook is essentially:

Users x time spent x ad load x price per ad

To the extent that there’s growth in any of those four components, that drives growth in ad revenue, and to the extent that there’s growth in several of them, there’s a multiplier effect for that growth. To understand the impact of slowing growth from ad load, it’s worth considering the contribution each of these elements makes to overall ad revenue growth at the moment:

  • User growth – year on year growth in MAUs has been running in the mid teens, with a rate between 14 and 16% in the last year, while year on year growth in DAUs has been slightly higher, at around 16-17% fairly consistently
  • Time spent – Facebook doesn’t regularly disclose actual time spent, but has said recently that this metric is also up by double digits, so at least 10% year on year and perhaps more
  • Ad load – we have no metric or growth rate to look at here at all, except directionally: it rose significantly from 2013 to 2015, and continues to rise, but will largely cease to do so from mid-2017 onwards.
  • Price per ad – Facebook has regularly provided directional data on this over the last few years, but it was a highly volatile metric until recently, with growth spiking as mobile took off and then settling into the single digits year on year in the last three quarters.

So, to summarize, using our formula above, we have growth rates as follows: 16-17% user growth plus 10%+ growth in time spent plus an unknown growth in ad load, plus 5-6% growth in price per ad.
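As a rough sketch, here’s how those drivers compound. The specific rates below are mid-range assumptions based on the figures above, and since the ad load growth rate is unknown, it’s backed out from the reported ad revenue growth:

```python
# Back-of-envelope: Facebook ad revenue growth as the product of its drivers.
# Rates are assumed mid-range values from the figures discussed above.

user_growth = 0.165       # DAU growth, ~16-17% year on year
time_spent_growth = 0.10  # "up by double digits", so at least 10%
price_growth = 0.055      # price per ad, ~5-6% year on year

# The drivers multiply rather than add:
known = (1 + user_growth) * (1 + time_spent_growth) * (1 + price_growth)
print(f"Growth excluding ad load: {known - 1:.1%}")  # ~35%

# Reported ad revenue growth has run at 57-63%; the gap implies
# how much ad load growth must have been contributing:
for reported in (0.57, 0.63):
    implied_ad_load = (1 + reported) / known - 1
    print(f"Implied ad load growth at {reported:.0%} reported: {implied_ad_load:.1%}")
```

On these assumptions, the three known drivers alone produce growth in the mid-30s percent, which is where the 30–50% range discussed below comes from once the ad load contribution fades.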

The ad load effect

Facebook suggests that ad load is reaching saturation point, so just how loaded is Facebook with ads today? I did a quick check of my personal Facebook account on four platforms – desktop web, iOS and Android mobile apps, and mobile web on iOS. I also checked the ad load on my Instagram account. This is what I found:

  • Desktop web: an ad roughly every 7 posts in the News Feed, plus two ads in the right side bar. The first ad was the first post on the page
  • iOS app: an ad roughly every 12 posts, with the first ad being the second post in the News Feed
  • iOS web: an ad roughly every 10 posts, with the first ad being the fourth post in the News Feed
  • Android app: an ad roughly every 10-12 posts, with the first ad being the second post in the News Feed
  • Instagram on iOS: the fourth post and roughly every 10th post after that were ads.

That’s pretty saturated. You might argue that Facebook could raise the density of ads on mobile to match desktop density (every 7 rather than every 10-12), but of course on mobile the ad takes up the full width of the screen (and often much of the height too), which means the ceiling is likely lower on mobile. I’m sure Facebook has done a lot of testing of the tipping point at which additional ads deter usage, and I would imagine we’re getting close to that point now. So this is a real issue Facebook is going to be dealing with. I did wonder to what extent this is a US issue – in other words, whether ad loads might be lower elsewhere in the world due to lower demand. But on the Q2 earnings call, Facebook said that there aren’t meaningful differences in ad load by geography, so this is essentially a global issue.

So, then, if this ad load issue is real, what are the implications for Facebook’s ad revenue growth? Well, Facebook’s ad revenue has grown by 57-63% year on year over the past four quarters, and increasing ad load is clearly accounting for some of the growth, but much of it is accounted for by the other factors in our equation. Strip that ad load effect out and growth rates could drop quite a bit, by anywhere from 10-30 percentage points. Facebook could then be left with 30-50% year on year growth without a contribution from ad load. Even at the lower end of that range, that’s still great growth, while at the higher end it’s amazing growth. But either would be lower than it has been recently.

Of course, it’s also arguable that capping ad load would constrain supply of ad space, which could actually drive up prices if demand remains steady or grows (which Facebook is certainly forecasting). Facebook has dismissed suggestions in the past that it would artificially limit ad load to drive up prices, but this is a different question. Supply constraints could offset some of the slowing contribution from ad load itself, though how much is hard to say.

Ad revenue growth from outside the News Feed

Of course, Facebook isn’t limited to simply showing more ads in the Facebook News Feed. While overall impressions actually fell from Q4 2013 to Q3 2015 as usage shifted dramatically from desktop to mobile, where there are fewer ads, total ad impressions have been up by around 50% year on year in the last three quarters. Much of that growth has been driven by Instagram, which has ramped from zero to the significant ad load I just described over the course of the last three years. Multiply that by Instagram user growth (which isn’t included in Facebook’s MAU and DAU figures) and that’s a significant contribution to overall ad growth too. As I understand it, the ad load comments apply to Instagram too, but there will still be a significant contribution to overall ad revenue growth from user growth.

And then there are Facebook’s other properties which until today haven’t shown ads at all: Messenger and WhatsApp. As of today, Facebook Messenger is going to start showing some ads, and that will be another potential source of growth going forward. WhatsApp may well do something similar in future, too, although Zuckerberg will have to overcome Jan Koum’s well-known objections first.

Growth beyond ad revenue

And then we have growth from revenue sources other than ads. What’s been striking about Facebook over the last few years – even more than Google – is how dominated its revenues have been by advertising. The proportion has actually risen from a low of 82% of revenue in Q1 2012 all the way back up to 97.2% in Q3 2016. It turns out that the increasing contribution from other sources was essentially down to the FarmVille era, with Zynga and other game companies generating revenues through Facebook’s game platform. What’s even more remarkable here is that these payments are still the bulk of Facebook’s “Payments and other fees” revenues today, as per the 10-Q:

…fees related to Payments are generated almost exclusively from games. Our other fees revenue, which has not been significant in recent periods, consists primarily of revenue from the delivery of virtual reality platform devices and related platform sales, and our ad serving and measurement products. 

As you can see in the second half of that paragraph, Facebook anticipates generating some revenue from Oculus sales going forward, though it hasn’t been material yet, and later in the 10-Q the company suggests this new revenue will only be enough to (maybe) offset the ongoing decline in Payments revenue as usage continues to shift from desktop to mobile.

Of course, Facebook now has its Workplace product for businesses too, which doesn’t even merit a mention in this section of the SEC filing. Why not? Well, it would take 33 million active users to generate as much revenue from Workplace in a quarter as Facebook currently generates from Payments and other fees. It would take 12 million active users just to generate 1% of Facebook’s overall revenues today. And that’s because Facebook’s ad ARPU is almost $4 globally per quarter, and $15 in the US and Canada. Multiplied by 1.8 billion users, it’s easy to see why Workplace at $1-3 per month won’t make a meaningful contribution anytime soon.
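That Workplace arithmetic can be reproduced in a few lines. Two assumptions here that aren’t stated above: total Q3 2016 revenue of roughly $7 billion, and $2 per user per month as the midpoint of Workplace’s $1–3 pricing:

```python
# Reproducing the Workplace back-of-envelope math.
# Assumptions (not from the post itself): ~$7.0bn total Q3 2016 revenue,
# and $2/user/month as the midpoint of Workplace's $1-3 pricing.

workplace_price_per_month = 2.0
workplace_rev_per_user_qtr = workplace_price_per_month * 3  # $6 per quarter

total_revenue = 7.0e9
# Ads are 97.2% of revenue, leaving ~2.8% as "Payments and other fees":
payments_revenue = total_revenue * (1 - 0.972)  # ~$196m

users_to_match_payments = payments_revenue / workplace_rev_per_user_qtr
users_for_1pct = (0.01 * total_revenue) / workplace_rev_per_user_qtr

print(f"Users to match Payments revenue: {users_to_match_payments / 1e6:.0f}m")  # ~33m
print(f"Users for 1% of total revenue:   {users_for_1pct / 1e6:.0f}m")           # ~12m
```

The contrast with ad ARPU of almost $4 globally (and $15 in the US and Canada) across 1.8 billion users is what makes those Workplace numbers look so small.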

Conclusion: a fairly rosy future nonetheless

In short, then, Facebook is likely going to have to make do with ad revenue for the vast majority of its future growth. That’s not such a bad thing, though – as we’ve already seen, the other drivers of ad revenue growth from user growth to price per ad to time spent by users are all still significant drivers of growth in the core Facebook product, and new revenue opportunities across Instagram, Messenger and possibly WhatsApp should contribute meaningfully as well. That’s not to say that growth might not be slower, and possibly quite a bit slower, than in the recent past. But at 30% plus, Facebook will still be growing faster than any other big consumer technology company.


Apple, Microsoft, and the Future of Touch


This is one of those rare weeks when two of the tech industry’s major players have back-to-back events and in the process illustrate their different takes on an important product category, in this case the PC. I’ve already written quite a bit about all of this over the past week.

Now that it’s all done, though, I wanted to pull some of these themes and threads together. I attended today’s Apple event in person and so I’ve spent time with the new MacBooks, though not with Microsoft’s new hardware or software.

Differentiation: from hardware advantages to philosophical approaches

The biggest thing to come out of this week, which I previewed in my Techpinions piece on Monday, was a shift from hardware advantages to philosophical differences as the nexus of competition between Microsoft and Apple in PCs. MacBooks once enjoyed significant hardware advantages over all competing laptops in terms of battery life, portability, and features such as trackpads, but in recent years those advantages have all but disappeared. Instead, what we’re left with is increasingly stark philosophical differences in how these companies approach the market, and this week the focus was on touch.

Microsoft’s computing devices all run some flavor of Windows 10 and feature touch. Apple, on the other hand, continues to draw a distinction between two sets of products by both operating system and interactivity. On the one hand, you have iOS devices with touch interfaces, and on the other macOS devices with more indirect forms of interactivity. Today’s event saw Apple introduce an interesting new wrinkle to touch on the MacBook with the Touch Bar, but it’s clearer than ever that Apple refuses to put touch screens on the Mac and that won’t change soon.

Microsoft’s approach makes touch available everywhere, even when in many cases it doesn’t make sense. It’s optional, though, and Microsoft has pulled back from the over-reliance on touch that characterized Windows 8. Apple, on the other hand, wants to largely preserve existing workflows based on mouse and keyboard interactivity while adding subtle new forms of interaction. It keeps all the interaction on the horizontal plane, while Microsoft has users switching back and forth between the tabletop and display planes. There isn’t necessarily a right and a wrong here – both approaches are interesting and reflect each company’s different starting points and perspectives. But it’s differences like this that will characterize the next phase of competition between them.

In some ways, this new phase of competition is analogous to the competition between Apple and Google in the smartphone market. In both cases, there are now devices made by companies other than Apple which match Apple’s core hardware performance. That’s not to say that all devices now come up to Apple’s standards – it continues to compete only at the high end, while both Google and Microsoft’s ecosystems serve the full gamut of needs from cheap and cheerful to high-priced premium. But in smartphones as in PCs, the focus of competition at the high end is now moving to different approaches rather than hardware performance. It’s intriguing, then, that it’s during this era that both Google and Microsoft are finally getting serious about making their own hardware.


The Touch Bar itself is very clever. Apple made the decision to spend a lot of time in today’s event on demos, and I think that was a good use of the time (especially in an event with less ground to cover than most). The demos really showed the utility that the Touch Bar can provide in a variety of Apple and third party apps. What Apple has done here is in essence to take a slice of the screen and put it down within reach to allow you to interact with it. There will definitely be a learning curve involved here – I can see users forgetting that it’s there unless they make an effort to use it, but I can also see it prompting users to try to touch the screen (this happened to me in the demo area). “Touch here but not there” will be an interesting mental model to adapt to, but once users get the hang of it (and developers support it in their apps) I believe it will add real value.

Apple’s price coverage

Of course, MacBooks aren’t the only portable computers Apple makes, and it’s been increasingly making the case that the iPad Pro lineup should be considered computers too. These are Apple’s touch-screen computers, but in most consumers’ minds they don’t yet belong in the same category as Windows laptops. However, when you put the new MacBooks, older MacBooks, and iPad Pros together, you get an interesting picture in terms of price and performance coverage. The chart below shows base pricing for each of these products:

Apple Computer Portfolio

As you can see, there’s pretty good coverage from $599 all the way through $2399 with just the base prices. If you were to add storage and spec options (and Smart Keyboards in the case of the iPad Pros), the in-between price points would be covered pretty well too. But Apple now offers a portable computer at almost any price point in this range, and that’s interesting. The newest MacBooks alone do a nice job of covering the spread from $1199 to $2399 with increasing power and capability, while the older MacBooks fill in some gaps. There’s no denying that these products are premium, but they extend down into price points that many people will be able to reach, while providing really top-notch products for those who can afford or justify them. If you focus on those newer devices, I think this is the most coherent and logical MacBook portfolio Apple has had for years.

The next big question is what happens with desktops, because those are now from one to three years old, with no sign of an update. The one that’s had the most focus from Apple in recent years is the iMac, which is both the most mass market and the flashiest – it’s the only one that is highly visible, while both the Mac Pro and Mini could feasibly sit hidden under a desk. I don’t think Apple’s going to discontinue these anytime soon, but the timing of its lack of focus on these devices is providing an interesting window for Microsoft.

A few words on creativity

I won’t repeat everything I said in my earlier stuff on Microsoft’s event here, but suffice it to say that this creativity push is certainly interesting given that timing I just mentioned. However, it’s totally overblown to be talking about Microsoft somehow stealing away Apple’s creative customer base, for several reasons:

  • First, Apple has long since expanded beyond that base, especially if you look at the full set of devices including iPhones. Apple clearly isn’t selling hundreds of millions of iPhones solely to people that use Photoshop for a living. Even if you look at Mac buyers, they’re much broader than the cliche of ad agency creatives and video editors.
  • Secondly, all Microsoft has done so far is put a stake in the ground. The Surface Studio is a beautiful device and a well thought out machine for a subset of creative professionals. But workflows don’t change overnight just because a new computer comes along, especially if there’s an existing commitment to another ecosystem. The role of this device is to signal to creatives that Microsoft is serious about serving them, which is notable in its own right, but won’t sell millions of devices by itself.
  • Thirdly, Microsoft’s bigger creativity push is around software, with 400m plus Windows 10 users getting a bunch of new creativity software in the Creators Update in the spring. This will be much more meaningful in terms of spreading that creativity message far and wide than the new hardware.
  • Lastly, even with all this, Microsoft’s efforts to associate its brand with creativity and not just productivity will take years to take hold. Perceptions don’t change overnight either.

Apple’s event today was a nice reminder that it still takes these creative professionals very seriously – both the Adobe and djay Pro demos were creativity-centric, and these new machines are clearly intended for creative professionals among others (the RAID arrays would be an obvious fit for people editing high-bandwidth video, for example). Apple isn’t going to cede this ground easily, but it will be very interesting to watch over the next few years how this aspect of the competition plays out.

 

Twitter’s Terrible New Metric


In the shareholder letter that accompanied Twitter’s Q3 earnings today, the company said:

consider that each day there are millions of people that come to Twitter to sign up for a new account or reactivate an existing account that has not been active in the last 30 days.

That sounds great, right? Progress! And yet this very metric is the perfect illustration of why Twitter hasn’t actually been growing quickly at all. Let’s break it down:

  • Starting point: “each day there are millions of people” – so that’s at least 2 million per day every day
  • There are ~90 days in a quarter, so 2 million times 90 is 180 million, all of whom count as MAUs in the respective months when they engage in this behavior, and could be potential MAUs for the quarter if they stick around for a couple of months
  • Over the course of this past quarter, Twitter only added 4 million new MAUs
  • That implies one of two things: either 2.2% or less (4/180) of that 180 million actually stuck around long enough to be an MAU at the end of the quarter, or a very large proportion of those who had been active users at the end of last quarter left
  • In fact, it might even get worse. Based on the same 2m/day logic, 60 million plus people become MAUs every month on this basis, meaning this behavior contributes at least 60 million of Twitter’s MAUs each quarter (quarterly MAUs are an average of the three monthly MAU figures) even if all 60 million never log in again. On a base of just over 300 million, that means around a fifth of Twitter’s MAUs each month are in this category
  • Bear in mind throughout all this that I’m taking the bare minimum meaning of “millions” here – 2 million. The real numbers could be higher.
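That bullet-point arithmetic can be sketched directly. One assumption beyond the figures above: an MAU base of roughly 310 million, consistent with “just over 300 million”:

```python
# Back-of-envelope on Twitter's "millions of people" per day claim,
# taking "millions" at its bare minimum of 2 million.

daily_signups = 2_000_000        # minimum reading of "millions" per day
days_per_quarter = 90
net_mau_adds_q3 = 4_000_000      # Twitter's reported Q3 net MAU adds
mau_base = 310_000_000           # assumed, "just over 300 million"

quarterly_inflow = daily_signups * days_per_quarter          # 180m per quarter
retention_upper_bound = net_mau_adds_q3 / quarterly_inflow   # ~2.2%

monthly_inflow = daily_signups * 30                          # 60m per month
share_of_maus = monthly_inflow / mau_base                    # just under a fifth

print(f"Quarterly inflow of (re)activations: {quarterly_inflow / 1e6:.0f}m")
print(f"Retention upper bound implied by net adds: {retention_upper_bound:.1%}")
print(f"Share of monthly MAUs from this pool: {share_of_maus:.0%}")
```

The 2.2% figure is an upper bound on retention only if the rest of the base held steady; if existing users also churned out, the two effects can’t be separated without churn disclosure.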

In other words, this metric – which is intended to highlight Twitter’s growth opportunity – actually highlights just how bad Twitter is at retaining users. Because Twitter doesn’t report daily active users or churn numbers, we have to engage in exercises like this to try to get a sense of what the true picture looks like. But it isn’t pretty.

Why is retention so bad? Well, Twitter talked up a new topic-based onboarding process in its shareholder letter too. In theory, this should be helping – I’ve argued that topic-based rather than account-based follows are actually the way to go. But I signed up for a new test account this morning to see what this new onboarding process looks like, and the end results weren’t good.

Here’s what the topic-based onboarding process looks like:

topics-560

So far, so good – I picked a combination of things I’m really interested in and a few others just to make sure there were a decent number of topics selected. I was also asked to upload contacts from Gmail or Outlook, which I declined to do because this was just a test account. I was then presented with a set of “local” accounts (I’m currently in the Bay Area on a business trip so got offered lots of San Francisco-based accounts including the MTA, SFGate, and Karl the Fog – fair enough). I opted to follow these 21 accounts as well, and finished the signup process. Here’s what my timeline looked like when I was done:

timeline-560

It’s literally empty – there is no content there. And bizarrely, even though I opted to follow 21 local accounts, I’m only shown as following 20 here. As I’m writing now, it’s roughly an hour later and there are now 9 tweets in that timeline, three each from TechCrunch and the Chronicle, and several others. This is a terrible onboarding experience for new users – it suggests that there’s basically no content, even though I followed all the suggested accounts and picked a bunch of topics. Bear in mind that I’m an avid Twitter user and a huge fan of the service – it provides enormous value to me. But based on this experience I’d never come away with that impression. No wonder those millions of new users every day don’t stick around. Why would you?

In that screenshot above, the recommendation is to “Follow people and topics you find interesting to see their Tweets in your timeline”. But isn’t that what I just did? As a new user, how do I feel at this point? And how do I even follow additional topics from here (and when am I going to see anything relating to the topics I already said I was interested in)? Twitter is suggesting even more SF-centric accounts top right, along with Ellen, who seems to be the vanilla ice cream of Twitter, but that’s it. If I want to use Twitter to follow news rather than people I know, which is how Twitter is increasingly talking about itself, where do I go from here?

I hate beating up on the companies I follow – I generally try to be more constructive than this, because I think that’s more helpful and frankly kinder. But I and countless others have been saying for years now that Twitter is broken in fundamental ways, and there are obvious solutions for fixing it. Yet Twitter keeps going with this same old terrible brokenness for new users, despite repeated promises to fix things. This, fundamentally, is why Twitter isn’t growing as it should be, and why people are losing faith that it will ever turn things around.

AT&T Doubles Down on the Ampersand

I recently spent a couple of days with AT&T as part of an industry analyst event the company holds each year. It’s usually a good mix of presentations and more interactive sessions which generally leave me with a pretty good sense of how the company is thinking about the world. Today, I’m going to share some thoughts about where the consumer parts of AT&T sit in late 2016, but I’m going to do so with the shadow of a possible Time Warner merger looming over all of this – something I’ll address at the end. From a consumer perspective the two major themes that emerged from the event for me were:

  • AT&T now sees itself as an entertainment company
  • AT&T is doubling down on the ampersand (&).

Let me explain what I mean by both of those.

AT&T as an entertainment company

The word “entertainment” showed up all over the place at the event, and it’s fair to say it’s becoming AT&T’s new consumer identity. From a reporting perspective, the part of AT&T which serves the home is now called the Entertainment Group, for example, and CEO Randall Stephenson said that was no coincidence – it’s the core of the value proposition in the home now. But this doesn’t just apply to the home side of the business – John Stankey, who runs the Entertainment Group, said at one point that “what people do on their mobile devices will be more and more attached to the emotional dynamics of entertainment” too.

That actually jibes pretty closely with something I wrote in my first post on this blog:

There are essentially five pieces to the consumer digital lifestyle, and they’re shown in the diagram below. Two of these are paramount – communications and content. These are the two elements that create emotional experiences for consumers, and around which all their purchases in this space are driven, whether consciously or unconsciously.

What’s fascinating about AT&T and other telecoms companies is that the two things that have defined them throughout most of their histories – connectivity and communications – are taking a back seat to content. People for the most part don’t have emotional connections with their connectivity or their devices – they have them with the other people and with the content their devices and connectivity enable them to engage with. AT&T seems to be betting that being in the position of providing content will create stickier and more meaningful relationships which will be less susceptible to substitution by those offering a better deal. And of course video is at the core of that entertainment experience.

The big question here, of course, is whether this is how consumers want to buy their entertainment – from the same company that provides their connectivity. AT&T is big on the idea that people should be able to consume the content of their choice on the device of their choice wherever they choose. On the face of it, that seems to work against the idea that one company will provide much of that experience, and I honestly think this is the single biggest challenge to AT&T’s vision of the future and of itself as an entertainment company. But this is where the ampersand comes in.

Doubling down on the ampersand

One of the other consistent themes throughout the analyst event was what AT&T describes as “the power of &”. AT&T has actually been running a campaign on the business side around this theme, but it showed up on the consumer side of the house too at the event. Incidentally, I recalled that I’d seen a similar campaign from AT&T before, and eventually dug up this slide from a 2004 presentation given by an earlier incarnation of AT&T.

But even beyond this ad campaign, AT&T is talking up the value of getting this and that, and on the consumer side this has its most concrete instantiation in what AT&T has done with DirecTV since the merger. This isn’t just about traditional bundling and the discounts that come with it, but about additional benefits you get when you bundle. The two main examples are the availability of unlimited data to those who bundle AT&T and DirecTV, and the zero-rating of data for DirecTV content on AT&T wireless networks. Yes, AT&T argues, you can watch DirecTV content on any device on any network, but when you watch it on the AT&T network it’s free. The specific slogan here was “All your channels on all your devices, data free when you have AT&T”.

The other aspect here is what I call content mobility. What I mean by that is being able to consume the content you have access to anywhere you want. That’s a given at this point for things like Netflix, but still a pretty patchy situation when it comes to pay TV, where rights often vary considerably between your set top box, home viewing on other devices, and out-of-home viewing. The first attempts to solve this problem involved boxes – VCRs and then DVRs for time shifting, and then the Slingbox for place shifting. But the long term solution will be rooted in service structure and business models, not boxes. For example, this content mobility has been a key feature of the negotiations AT&T has been undertaking both as a result of the DirecTV merger and in preparation for its forthcoming DirecTV Now service. It still uses a box – the DirecTV DVR – where necessary as a conduit for out-of-home viewing where it lacks the rights to do so from the cloud, but that’s likely temporary.

AT&T’s acquisition of DirecTV was an enabler of both of these things – offering zero rating as a benefit of a national wireless-TV bundle, and the negotiating leverage that comes from scale. It also, of course, gained access to significantly lower TV delivery costs relative to U-verse.

Now, the big question is whether consumers will find any of this compelling enough to make a big difference. I’m inherently skeptical of zero rating content as a differentiator for a wireless operator – even if you leave aside the net neutrality concerns some people have about it, it feels a bit thin. What actually becomes interesting, though, is how this allows DirecTV to compete against other video providers – in a scenario where every pay TV provider basically offers all the same channels, this kind of differentiation could be more meaningful on that side of the equation. If all the services offer basically the same content, but DirecTV’s service allows you to watch that content without incurring data charges on your mobile device, that could make a difference.

Context for AT&T&TW

So let’s now look at all of this as context for a possible AT&T-Time Warner merger (which as I’m finishing this on Saturday afternoon is looking like a done deal that will be announced within hours). One of the slides used at the event is illustrative here – this is AT&T’s take on industry dynamics in the TV space:

ATT TV industry view

Now focus in on the right side of the slide, which talks about the TV value chain compressing:

ATT TV compression

The point of this illustration was to say that the TV value chain is compressing, with distributors and content owners each moving into each other’s territory. (Ignore the logos at the top, at least two of which seem oddly out of place). The discussion around this slide went as follows (I’m paraphrasing based on my notes):

Earlier, there were discrete players in different parts of the value chain. That game has changed dramatically now – those heavy in production are thinking about their long-term play in distribution. Those who distribute are thinking about going back up the value chain and securing ownership rights. Premium content continues to play a role in how people consume network capacity. Scale and a buying position in premium content is therefore essential.

In addition, AT&T executives at the event talked about the fact that the margins available on both the content and distribution side would begin to collapse for those only participating on one side as players increasingly play across both.

The rationale for a merger

I think a merger with Time Warner would be driven by three things:

  • A desire to avoid being squeezed in the way just described as other players increasingly try to own a position in both content ownership and distribution – in other words, be one of those players, not one of their victims
  • A furthering of the & strategy – by owning content, AT&T can offer unique access to at least some of that content through its owned channels, including DirecTV and on the AT&T networks. This is analogous to the existing DirecTV AT&T integration strategy described above
  • Negotiating leverage with other content providers and service providers.

Both the second and third of these points would also support the content mobility strategy I described earlier, providing both leverage with content owners and potentially unique rights to owned content.

How would AT&T offer unique content? I don’t think it would shut off access to competitors, but I could see several possible alternatives:

  • Preserving true content mobility for owned channels – only owned channels get all rights for viewing Time Warner content on any device anywhere. Everyone else gets secondary rights
  • Exclusive windows for content – owned channels like DirecTV and potentially AT&T wireless would get early VOD or other access to content, for example immediate VOD viewing for shows which don’t show up for 24 hours, 7 days etc on other services
  • Exclusive content – whole existing shows and TV channels wouldn’t go exclusive, but I could see exclusive clips and potentially new shows go exclusive to DirecTV and AT&T.

The big downside with all this is that whatever benefits AT&T offers to its own customers, by definition it would be denying those benefits to non-customers. That might be a selling point for DirecTV and AT&T services, but wouldn’t do much for Time Warner’s content. The trends here are inevitable, with true content mobility the obvious end goal for all content services – it’s really just a matter of time. To the extent that AT&T is seen to be standing in the way of that for non-customers, that could backfire in a big way.

On balance, I’m not a fan of the deal – I’ve outlined what I see as the potential rationale here, but I think the downsides far outweigh the upsides. Not least because the flaws in Time Warner’s earlier mega-merger apply here too – since you can never own all content, but just a small slice, your leverage is always limited. What people want is all the relevant content, not just what you’re incentivized to offer on special terms because of your ownership structure. I’ll wait and see how AT&T explains the deal to see if the official rationale makes any more sense, but I suspect it won’t change much.

Microsoft’s Evolving Hardware Business

Microsoft reported earnings yesterday, and the highlights were all about the cloud business (Alex Wilhelm has a good summary of some of the key numbers there in this post on Mattermark).  Given that I cover the consumer business, however, I’m more focused on the parts of Microsoft that target end users, which are mostly found in its More Personal Computing segment (the one exception is Office Consumer, which sits in the Productivity & Business Processes segment).

The More Personal Computing segment is made up of:

  • Windows – licensing of all versions of Windows other than Windows server
  • Devices – including Surface, phones, and accessories
  • Gaming – including Xbox hardware, Xbox Live services, and game revenue
  • Search advertising – essentially Bing.

Microsoft doesn’t report revenues for these various components explicitly, but often provides enough data points in its various SEC filings to be able to draw reasonably good conclusions about the makeup of the business. As a starting point, Microsoft does report revenue from external customers by major product line as part of its annual 10-K filing – revenue from the major product lines in the More Personal Computing Group are shown below:

External revenue for MPC group

Windows declining for two reasons

It’s worth noting that it appears Windows revenue has fallen off a cliff during this period. However, a big chunk of the apparent decline is due to the deferral of Windows 10 revenue, which has to be recognized over a longer period of time than revenue from earlier versions of Windows, which carried less expectation of free future updates. At the same time, the fact that Windows 10 was a free upgrade for the first year also depressed revenues. As I’ve been saying for some time now, going forward it’s going to be much tougher for Microsoft to drive meaningful revenue from Windows in the consumer market in particular, in a world where every other vendor gives their OS away for free. That means Microsoft has to find new sources of revenue in consumer: enter hardware.

Phones – dwindling to nothing

First up, phones, which appear to be rapidly dwindling to nothing. It’s become harder to find Lumia smartphone sales in Microsoft’s reporting recently, and this quarter (as far as I can tell) the company finally stopped reporting phone sales entirely. That makes sense, given that Lumia sales were likely under a million in the quarter and Microsoft is about to offload the feature phone business. The chart below shows Lumia sales up to the previous quarter, and my estimate for phone revenues for the past two years, which hit around $300 million this quarter:

Phone business metrics

Surface grows year on year but heading for a dip

Surface has been one of the bright spots of Microsoft’s hardware business over the last two years. Indeed – this home-grown hardware line has compared very favorably to that acquired phones business we were just discussing:

Surface and Phone revenue

As you can see, Surface has now outsold phones for four straight quarters, and that’s not going to change any time soon. Overall, Surface revenues are growing year on year, which is easier to see if you annualize them:

Trailing 4-quarter Surface revenue
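For readers who want to reproduce this kind of view, the “annualized” line is just a trailing four-quarter sum, which smooths out the big Q4 spikes. A minimal sketch, using made-up quarterly figures rather than Microsoft’s actuals:

```python
# Trailing four-quarter (annualized) revenue: each entry sums the most
# recent four quarters, smoothing out strong Q4 seasonality.

def trailing_four_quarter(revenues: list[float]) -> list[float]:
    """Rolling four-quarter sums; entry i covers quarters i-3 through i."""
    return [sum(revenues[i - 3:i + 1]) for i in range(3, len(revenues))]

# Hypothetical quarterly Surface revenue ($ millions), seasonal with Q4 spikes:
quarterly = [908, 888, 672, 926, 1_100, 1_080, 800, 1_150]
print(trailing_four_quarter(quarterly))
# [3394, 3586, 3778, 3906, 4130] -- a much smoother upward trend
```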

However, what you can also see from that first Surface chart is that revenues for this product line are starting to settle into a pattern: big Q4 sales, followed by a steady decline through the next three quarters. That’s fine as long as there is new hardware each year to restart the cycle, but from all the reporting I’ve seen it seems the Surface Pro and Surface Book will get only spec bumps and very minor cosmetic changes, which leaves open the possibility of a year on year decline. Indeed, this is exactly what Microsoft’s guidance says will happen:

We expect surface revenue to decline as we anniversary the product launch from a year ago.

I suspect the minor refresh on the existing hardware combined with the push into a new, somewhat marginal, product category (all-in-ones) won’t be enough to drive growth. The question is whether the revenue line recovers in the New Year or whether we’ll see a whole year of declines here – that, in turn, would depress overall hardware sales already shrinking from the phone collapse.

It’s also interesting to put Surface revenues in context – they’ve grown very strongly and are now a useful contributor to Microsoft’s overall business, but they pale in comparison to both iPad and Mac sales, neither of which have been growing much recently:

Surface vs iPad vs Mac

Ahead of next week’s Microsoft and Apple events, that context is worth remembering – for all the fanfare around Surface, Microsoft’s computing hardware business is still a fraction of the size of Apple’s.

Gaming – an oldie but kind of a goodie

Gaming, of course, is the oldest of Microsoft’s consumer hardware businesses, but its gaming revenue is actually about more than just selling consoles – it also includes Xbox Live service revenues and revenues from selling its own games (now including Minecraft) and royalties from third party games. However, it’s likely that console sales still dominate this segment. Below is my estimate for Gaming revenue:

Gaming revenue

In fact, Microsoft actually began reporting this revenue line this quarter, though unaccountably only for this quarter, and not for past quarters. Still, it’s obvious from my estimates that this, too, is an enormously cyclical business, with a big spike in Q4 driven by console sales and to a lesser extent game purchases, followed by a much smaller revenue number in Q1 and a steady build through Q3 before repeating. Microsoft no longer reports console sales either, sadly, likely because it was coming second to Sony much of the time before it stopped reporting. Still, gaming makes up almost a third of MPC segment revenues in Q4, and anything from 8-20% of the total in other quarters. In total, hardware likely now accounts for 30-50% of total revenue from the segment quarterly.

Search advertising – Microsoft’s quiet success story

With all the attention on cloud, and the hardware and Windows businesses going through a bit of a tough patch, it’d be easy to assume there were no other bright spots. And yet search advertising continues to be the undersold success story at Microsoft over the last couple of years. I’ve previously pointed out the very different trajectories of the display and search ad businesses at Microsoft, which ultimately resulted in the separation of the display business, but the upward trajectory of search advertising has accelerated since that decision was made.

Again, Microsoft doesn’t report this revenue line directly, but we can do a decent job of estimating it, as shown in the chart below:

Search advertising revenue

There are actually two different revenue lines associated with search advertising – what I’ve shown here is total actual revenue including traffic acquisition costs, but Microsoft tends to focus at least some of its commentary on earnings calls on a different number – search revenue ex-TAC. As you can see, the total number has plateaued over the last three quarters according to my estimates, though the year on year growth numbers are still strong. However, the ex-TAC number is growing more slowly. In other words, this growth is coming at the expense of higher traffic acquisition costs, which seems to be the result of the deal Microsoft signed with Yahoo a few quarters ago and an associated change in revenue recognition. Still, it’s a useful business now in its own right, with advertising generating 7% of Microsoft’s revenues in the most recent fiscal year.
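The divergence between the two revenue views is easy to illustrate. The figures below are made up purely to show the mechanics – how total search revenue can keep growing briskly while the ex-TAC line flattens when traffic acquisition costs rise faster:

```python
# Two views of search advertising revenue: total (including traffic
# acquisition costs) vs. ex-TAC. Illustrative numbers, not Microsoft's.

def ex_tac(total_revenue: float, tac: float) -> float:
    """Search revenue net of traffic acquisition costs."""
    return total_revenue - tac

# Hypothetical year-over-year comparison (all figures in $ millions):
total_prev, tac_prev = 1_000, 100
total_now, tac_now = 1_150, 230    # total grows 15%, but TAC more than doubles

growth_total = (total_now - total_prev) / total_prev
growth_ex_tac = (ex_tac(total_now, tac_now) - ex_tac(total_prev, tac_prev)) / ex_tac(total_prev, tac_prev)

print(f"{growth_total:.1%}, {growth_ex_tac:.1%}")  # 15.0%, 2.2%
```

The point: the same underlying quarter can look like strong growth or a plateau depending on which line you watch, which is why the choice of headline metric matters.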

Growth at Netflix Comes at a Cost

Netflix reported its financial results on Monday afternoon, and the market loved what it saw – the share price was up 20% a couple of hours later. The single biggest driver of that positive reaction was subscriber growth, which rebounded a little from last quarter’s pretty meager numbers. Here are a few key charts and figures from this quarter’s results. A much larger set can be found in the Q3 Netflix deck from the Jackdaw Research Quarterly Decks Service, which was sent to subscribers earlier this afternoon. The Q2 version is available for free on Slideshare.

Subscriber growth rebounds

As I mentioned, subscriber growth rebounded at least a little in Q3. However, the rebound was fairly modest, and the longer-term trends are worth looking at too. Here’s quarterly growth:

Quarter on Quarter growth Netflix Q3 2016

The numbers were clearly better than Q2 both domestically and internationally, but not enormously so, especially in the US. Here’s the longer-term picture, which shows year on year growth:

Year on Year Growth Netflix Q3 2016

As you can see, there’s been a real tapering off in the US over the past two years, while internationally it’s flattened following consistent acceleration through the end of last year. To put this year’s numbers so far in context, here’s a different way of presenting the quarterly domestic data:

Cyclical Growth Trends Netflix Q3 2016

That light blue line is the 2016 numbers, and as you can see each of this year’s quarters has been below the last three years’ equivalents, and the last two quarters have been well below. Arguably, Q3 was even further off the pace than Q2, so for all the celebration of a return to slightly stronger growth, this isn’t necessarily such a positive trend when looked at this way.

The cost of growth

Perhaps more importantly, this growth is becoming increasingly expensive in terms of marketing. I’ve mentioned previously that, as Netflix approaches saturation in the US, it will need to work harder and spend more to achieve growth, and we’re still seeing that play out. If the objective of marketing is growth, then one way of thinking about marketing spending is how much growth it achieves.

Ideally, we’d measure this by establishing a cost per gross subscriber addition – i.e. the marketing spend divided by the number of new subscribers enticed to the service as a result of it. However, since Netflix stopped reporting gross adds in 2012, we have to go with the next best thing, which is marketing spend per net subscriber addition, which is shown in the chart below:

marketing-costs-per-net-add-q3-2016

As you can see, there was a massive spike in Q2 due to the anemic domestic growth numbers, but even in Q3 the number is around 3 times what it had been in the recent past. Yes, Netflix returned to healthier growth in Q3, but it had to spend a lot to get there. Even the international line, though somewhat dwarfed by US spending over the last two quarters, has seen an increase. In its shareholder letter, Netflix wrote this off as increased marketing for new originals, but the reality is that the marketing was still necessary to drive the subscriber growth it saw in the quarter, which in turn was lower than it has been.
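The proxy metric described earlier – marketing spend per net subscriber addition – is trivial to compute, and a quick sketch shows why it spikes so dramatically when growth stalls. The dollar and subscriber figures here are illustrative placeholders, not Netflix’s reported numbers:

```python
# Marketing spend per net subscriber addition: the next-best proxy for
# cost per gross add, since gross adds stopped being reported in 2012.

def marketing_cost_per_net_add(marketing_spend: float, net_adds: float) -> float:
    """Marketing spend divided by net subscriber additions in the period."""
    if net_adds <= 0:
        raise ValueError("net adds must be positive for this metric to be meaningful")
    return marketing_spend / net_adds

# Hypothetical quarters: spend roughly flat, growth slows, cost per add spikes.
q_normal = marketing_cost_per_net_add(220_000_000, 900_000)  # ~$244 per net add
q_weak = marketing_cost_per_net_add(220_000_000, 160_000)    # ~$1,375 per net add
print(round(q_normal), round(q_weak))
```

Note the metric’s weakness, too: because the denominator is *net* adds, churn inflates it – which is exactly why gross adds would be the better measure if Netflix still reported them.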

The price increase worked – kind of

Of course, one big reason for the slower growth these last two quarters is the price increase Netflix has been introducing in a graduated fashion – or, in its own characterization, “un-grandfathering” of the base which was kept on older pricing for longer than new subscribers. As I wrote in this column for Variety, the price increase was really about keeping the margin growth going in the domestic business as Netflix invests more heavily in content, and I predicted that it would pay off in the long term.

Here’s what’s happened to the average revenue per paying customer as that price increase has kicked in:

Revenue per subscriber Netflix Q3 2016

There’s an enormous spike domestically in Q3, whereas internationally the increase kicked in a little earlier, despite the fact that it only affected certain markets. Overall, though, the price increase has driven average revenue per subscriber quite a bit higher – around $2 so far – so it’s arguably worked. Of course, it’s come at the cost of increased churn and perhaps slower customer additions, and the longer term effects of that will take a while to play out. We’ll need to watch the Q4 results to see whether growth starts to recover, or whether the results we’ve seen over the last two quarters are a sort of “new normal” we should expect to see more of going forward.
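For those who want to replicate this, average revenue per paying subscriber can be derived from reported figures as segment revenue divided by the average paid membership over the quarter. The numbers below are illustrative, not Netflix’s actuals:

```python
# Average revenue per paying subscriber: quarterly revenue divided by the
# average paid member count over the quarter. Illustrative figures only.

def arpu(revenue: float, paid_start: float, paid_end: float) -> float:
    """Quarterly revenue divided by the average paid subscriber count."""
    return revenue / ((paid_start + paid_end) / 2)

# Hypothetical before/after the un-grandfathering of older pricing:
before = arpu(1_200_000_000, 45_000_000, 46_000_000)  # ~$26.37 per quarter
after = arpu(1_310_000_000, 46_000_000, 47_000_000)   # ~$28.17 per quarter

print(round(after - before, 2))  # ~1.8 -- the quarterly uplift per subscriber
```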

Meanwhile, domestic margins continue to tick up in a very predictable fashion:

Netflix Q3 2016 Domestic margins

The key at this point, though, is to marry this increasing domestic profitability with breakeven, followed by growing profitability, overseas – something Netflix has been predicting will happen next year. As of right now, the international business as a whole is still unprofitable, but several individual countries outside the US are already profitable for Netflix, so it has a roadmap for other markets as they grow and hit scale milestones as well. What investors buying the stock today are really betting on is that this scenario plays out as Netflix expects it to, but it’s arguably still too early to tell whether it will.


Google’s Schizophrenic Pixel Positioning

This is my second post about Google’s event this week, and there will likely be more. The first tackled Google’s big strategy shift: moving from a strategy of gaining the broadest possible distribution for its services to preferring its own hardware in a narrower rollout. Today, I’m going to focus on the Pixel phones.

Positioning Pixel as a peer to the iPhone…

The Pixel phones are the most interesting and risky piece of this week’s announcements, because they go head to head against Google’s most important partners. One of my big questions ahead of time was how Google would address this tension, and in the end it simply didn’t, at least not during the event. The way it addressed it indirectly was to aim its presentation and the phones at the iPhone instead of at other Android phones. There were quite a few references to the iPhone during the event, and they’re worth pulling out:

  • A presenter said as an aside, “no unsightly camera bump” when describing the back of the Pixel phones
  • The unlimited photo and video storage was positioned against the iPhone, explicitly so when an image of iOS’s “Storage Full” error message was shown on screen (as it was in a recent Google Photos ad campaign)
  • The colors of the Pixel phones have names which appear to mock Apple’s color names
  • The pricing of the Pixel phones is identical to the pricing for the iPhone 7, right down to the first-time $20 increase to $769 for the iPhone 7 Plus from the earlier $749 price point for the larger phones, despite the fact that the larger Pixel has no additional components
  • A reference to the 3.5mm headphone jack in the Pixel commercial.

Google is attempting to position the Pixel as a true peer to the iPhone, unlike Nexus devices, which have usually been priced at a discount with feature disparities (notably in the cameras) to match. The pricing is easily the most telling element here, because there’s literally no other reason to match the pricing so precisely, and Google could arguably have benefited from undercutting the iPhone on price instead. Rather, Google wants us to see the Pixel as playing on a level playing field with the iPhone. This is very much a premium device, something that Chrome and Android exec Hiroshi Lockheimer explicitly addressed in an interview with Bloomberg published this week:

Premium is a very important category. Having a healthy premium device ecosystem is an important element in an overall healthy ecosystem. For app developers and others. It’s where certain OEMs have been successful, like Samsung. It’s where Apple is also very strong. Is there room for another player there? We think so. Do we think it’s an important aspect of Android? Yeah, absolutely.

What’s most interesting to me is the question and answer near the end there: “Is there room for another player there? We think so.” Given that the premium smartphone market is basically saturated at this point, that’s an interesting statement to make. Unlike, say, in the low end of the smartphone market, where there’s still quite a bit of growth, the only sense in which there’s “room” for another player at the premium end is by squeezing someone else out. Google clearly wants that to be Apple, but it’s arguably more likely to be Samsung if it’s anyone.

We’ve seen from long experience that switching from iOS to Android is much rarer than the other way, and so Google is far more likely to take share from Samsung than Apple, even with its overt focus on competing against the iPhone. In addition, this is fundamentally an Android phone with a few customizations, and will be seen as such, and therefore in competition with other Android devices, rather than the iPhone, for all Google’s focus on the iPhone in its messaging.

…while also mocking the iPhone (and iPhone owners)

But perhaps the biggest misfire here is the contradictory positioning versus the iPhone. On the one hand, the Pixel borrows very heavily from the iPhone: the look, especially from the front; the two sizes; the pricing; the focus on the camera; the integrated approach to hardware and software (of which more below); and so on. And yet at the same time Google seems determined to mock the iPhone, as is evident in the color naming and in other ways throughout the presentation. If you want to go head to head against the iPhone, you do it in one of two ways: you show how you're different (as Samsung has arguably done successfully), or you show how you're the same but better. You don't do it by aping lots of features and then simultaneously mocking the very thing you're aping (and, by implication, its customers, the very customers you're going after).

True integration, or just a smokescreen?

The other major element of this strategy, of course, is that Google is now conceding the point behind Apple's longstanding strategy, and more recently Microsoft's Surface strategy: the company that makes the best hardware is the company that makes the OS. Again, the approach is best encapsulated in an interview, this time with Rick Osterloh, head of Google's new consolidated hardware division:

Fundamentally, we believe that a lot of the innovation that we want to do now ends up requiring controlling the end-to-end user experience.

What’s odd is that there seems to be relatively little evidence of this approach in what was announced on Tuesday. Is there really anything in the Pixel phones that couldn’t have been achieved by another OEM working at arm’s length from Google? One of the biggest benefits of taking this integrated approach is deep ties between the OS and the hardware, but from that perspective, Google isn’t actually allowing its Android division to get any closer to its own hardware team than to other OEMs. It’s only in integration with other Google services (outside of Android) that the Pixel team got special access, and even then apparently only because they were the only ones to ask.

All of this undermines Google’s argument that the Pixel is somehow in a different category because it’s “made by Google” (even leaving aside the fact-checking on that particular claim from a hardware perspective). This phone could easily have been made by an OEM with the same motivations – the big difference is that no OEM has precisely those motivations, not that the Pixel team was somehow given special access.

In fact, this gets at the heart of one of the main drivers behind the Pixel: Google reasserting control over Android and putting Google services front and center again. I’ve written about this previously in the context of Google’s attempts to do this through software, as exemplified by its I/O 2014 announcements. But those efforts largely failed, both to reclaim control over Android and to secure a more prominent role for Google services on Android phones. As a result, Google’s relationship with Android releases has continued to be analogous to that of a parent sending a child off to college: each has done all it can to set its creation on the right path, but now has little control over what happens next.

If, though, this is the real motivation behind Pixel (and I strongly suspect it is), then all this stuff about targeting the iPhone and tightly integrated hardware and software is really something of a smokescreen. I would bet Google’s OEM partners can see that pretty clearly too, and for all Google executives’ reassurances that the OEMs are fine with it, I very much doubt it.