Both Apple and Google have just updated their mobile OS user stats, while Microsoft shared a new number for Windows 10 adoption at its event this week, giving us a rare opportunity to make some comparisons between these major operating systems at a single point in time. We now have the following stats straight from the sources:
The stats provided by both Apple and Google on their developer sites with regard to the user distribution across their mobile operating systems (Android and iOS)
The 110 million Windows 10 number provided by Microsoft this week
The 1.4 billion total active Android user base number provided by Google at its event last week
Total Windows users of around 1.5 billion, as reported by Microsoft several times at recent events.
In addition, there are various third party sources for additional data, including NetMarketShare and its estimate of the usage of other versions of Windows. Lastly, I have estimated that there are roughly 500 million iPhones in use now, and around 775 million iOS devices in use in total (including iPads and iPod Touches).
If we take all these data sets together, it’s possible to arrive at a reasonably good estimate for the actual global user bases of major operating system versions at the present time. The chart below shows the result of this analysis.
There are several things worth noting here:
Each company has one entry in the top three, with Microsoft first, Google second, and Apple third.
However, only one of these entrants is the latest version of that company’s operating system (iOS 9), while the other two are the third most recent versions (Windows 7 and Android KitKat).
Google has three of the top six operating systems, none of which is its latest operating system (Marshmallow, released this past week). Even its second most recent version (Lollipop), now available for a year, is only the third most adopted version after KitKat and Jelly Bean.
Both iOS 9 and iOS 8, along with the three most used versions of Android, beat out every version of Windows but Windows 7.
The most recent versions of the three companies’ major operating systems are used by a little over 400 million (iOS 9), 110 million (Windows 10), and a negligible number (Android Marshmallow) respectively.
The second most recent versions are used by around 330 million (Android Lollipop), around 250 million (iOS 8), and around 200 million (Windows 8) respectively.
There are lots more data points to tease out here, but to my mind it’s a striking illustration of the differences in the size and adoption rates of these three major operating systems.
Two additional thoughts
Just for interest, I’m including a couple of additional thoughts below.
First off, here’s the same chart, but with iOS reduced to just the iPhone base. The order changes a fair amount, but iOS 8 and iOS 9 still make a good showing:
Lastly, I wanted to revisit my post from a couple of weeks ago about the initial adoption of iOS 9, especially as it relates to Mixpanel’s data. In that post, I showed how Mixpanel’s iOS adoption data tends to be pretty close to Apple’s own data except for the month or so after a new version of iOS ships, when it tends to skew way lower than Apple’s own data. Now that we’re a few weeks on from the initial launch, and Apple has released the second set of iOS adoption data since the launch, I wanted to revisit that pattern. Interestingly, the very same pattern is playing out again – despite the initial significant discrepancy, Mixpanel’s data is now once again very close to Apple’s own:
With the launch of Twitter’s Project Lightning and the Moments feature this week, Twitter is reinforcing an important point about Twitter and its future: there isn’t one Twitter, but two. Moments is part of one Twitter, while almost everyone writing about it is part of the other, and as such many of those people seem baffled by it.
Twitter 1: Power users and broadcasters
The first Twitter is made up of what you might call power users and broadcasters. That is, these are people who use Twitter a great deal, and most of them have likely spent significant effort doing the following things:
Carefully choosing accounts to follow
Building up a following
Regularly tweeting themselves
Engaging with other Twitter users through @replies, DMs, retweets and so on.
Twitter 2: Casual users
The second Twitter is made up of what we might call casual users – people who are trying Twitter on for size, dipping in and out occasionally, or perhaps signing up for a specific short-term purpose with no intention of engaging long-term. These users make up a fairly substantial portion of Twitter’s current audience, but also more importantly the vast majority of the users it has lost over time (who in turn make up the majority of those who have ever tried Twitter). Many of these users have likely never spent significant time doing those four things I talked about above, and aren’t likely to.
Moments is for Twitter 2
The key thing about Moments is that it’s for Twitter 2, not Twitter 1. And yet the vast majority of the people writing about it are Twitter 1 people – power users and broadcasters. And I’m seeing all these people complaining on Twitter that they don’t find Moments useful, that it isn’t customized based on the accounts they’re already following, and so on. And that completely misses the point.
Just over a year ago, I wrote a piece called “Twitter’s channel model is broken” in which I argued that:
Twitter is effectively an a la carte TV service with hundreds of millions of channels on offer. The burden is on the user to choose individual accounts to follow, which can be an overwhelming experience.
I went on to say:
…live TV only works because you just have to turn on the TV and something is there. If you don’t like it, you change the channel. But on Twitter today, there’s literally nothing on until you explicitly tell the service what you’re interested in, and if you don’t like it, it’s a lot of work to change channels, because you effectively have to create each channel yourself in a very manual and labor-intensive fashion. It works fine once you’ve created a channel you’re happy with, but I suspect many users never reach this point and thus don’t use the service often or abandon it altogether.
Moments as it works today is the implementation of the model that I talked about here. The whole point is that it doesn’t require any training, just like turning on the TV doesn’t require any training – something will be there, and if you don’t like it you can change the channel. Moments provides these channels, which are instantly available the moment someone signs up for the service. If I were Twitter, I would probably allow people to skip the signup process entirely, at least temporarily, but perhaps that will come in time.
I have no doubt that Moments will get better over time and that it will eventually become more customized and curated based on your existing interests. But the whole point for now is to create an easily-consumable product that doesn’t require any work up front. And that’s going to be the key to Twitter’s growth going forward, because Twitter 2 is where all the growth is. Reducing friction in setup and use of the service for these users is therefore a critical element in Twitter’s future success.
Feeding Twitter 1
The big risk is that Twitter will focus so much on Twitter 2 that it fails to feed Twitter 1. Twitter 1 is the most vocal Twitter, and essentially all the influencers – whether celebrities, power users, or reporters – are in Twitter 1. Ignoring Twitter 1 as the company focuses on Twitter 2 would be a huge mistake, especially because so much of the content consumed by Twitter 2 is provided by Twitter 1. There’s a symbiotic relationship here, and one that Twitter has to be very careful not to disrupt.
The problem is that Twitter has another goal it’s trying to achieve: monetization. Twitter’s monetization strategy involves serving up ads, which in turn requires that people use Twitter’s own apps or its website to consume those ads. And yet Twitter 1 disproportionately uses third party clients like Tweetbot and Twitterrific. Because of Twitter’s insistence on monetization through advertising, and its general discouragement of clients that replicate the core Twitter experience, it’s started withholding some important features from the API it makes available to third party clients. My own Twitter client of choice is Tweetbot, which just received a big overhaul with lots of cool new features, but which is unable to show group DMs or the recently released Polls feature. If I only ever use Tweetbot, I am simply never aware that I have group DMs (several of which I’ve missed as a result), and tweets with polls just look rather empty because there’s no indication that there is anything other than the text there.
At this point, Twitter has an important decision to make: should it begin to make some concessions in order to feed Twitter 1? There are signs that it’s starting to do so, as some Verified users are no longer seeing ads in the Twitter client. This is an interesting example of recognizing the value that Twitter 1’s most influential users have, and granting some favors in return. Shutting off ads might cause some power users to use the first-party client again, but if Twitter is forgoing revenue from these users anyway, why not let them use the clients they want, and give the makers of those clients access to the new functionality too through APIs? The third party clients are limited already by the caps Twitter put on user growth a while back, so there’s little danger here of mass adoption of new clients.
The fact is, Twitter needs to do more to publicly acknowledge and respond to this increasing bifurcation between the two Twitters if it’s to avoid the core experience becoming unusable for both groups as it seeks to bridge the gap between them. Releasing a true power-user client of its own (not the buggy mess that is Tweetdeck) or re-embracing third party clients would be an important step in this direction.
In May last year, when Microsoft unveiled the Surface Pro 3, I wrote a piece about the new device but also about the way it was unveiled, titled, “Surface Pro 3, like every other device, is a compromise.” In that piece, I wrote about Microsoft’s insistence that the Surface Pro 3 was a no-compromise device, when in fact all devices represent compromises of one sort or another. I went on to say:
The biggest change in Microsoft’s Surface strategy over the last several years has been the locus of the compromise it’s still inevitably making. The first Surfaces were intended to be good tablets first and good laptops second (and ended up being neither). But with the Surface Pro 3, Microsoft has created a competitive laptop first, and a compromised tablet second. But it’s still pretending that there’s no compromise, and that is why the Surface line will continue to perform poorly. At some point, Microsoft has to stop pretending that a single device can meet all needs and start optimizing for different use cases with different devices, just like every other manufacturer.
Fast-forward to today, and we got the next version of the Surface Pro, the Surface Pro 4. And we saw two of the same phrases from that first event repeated: “no compromise” and “the tablet that can replace your laptop.” So far, so predictable.
But then, immediately after the SP4 was introduced, we were shown the Surface Book. Which is a laptop. And Panos Panay, the presenter, started out by talking about all the things a laptop does better than the Surface Pro – a better typing experience, a bigger screen, and so on. This was one of the most bizarre juxtapositions I’ve ever seen at a tech event. After 30 minutes of talking about how the Surface Pro 4 could replace your laptop with no compromises, the very same presenter offered up a laptop which was clearly better, because it didn’t make certain of those compromises.
Taking a step back for a minute, both products look really promising. I’ll withhold final judgment until I get to use these devices (or at least until others I trust have done so and shared their opinions). But this “no compromise” nonsense continues to do a massive disservice to Microsoft and to its customers. As I said in that earlier piece, every device involves a compromise. That compromise might involve features, functionality, look and feel, size, weight, price, or any number of other elements. But every device does involve compromises. And instead of pretending that it doesn’t, Microsoft needs to embrace what’s distinctive and best about each of the devices it offers. However, when you look at the Surface Pro 4 and Surface Book side by side, you start to realize that the Surface Book is really just the concept of the Surface Pro taken to its logical conclusion – thin, light, with a detachable keyboard and pen.
Is there anything that the SP4 does better than the Surface Book? Yes, it’s slightly smaller, and quite a bit lighter than the Surface Book if the keyboard is attached. To my mind, the only benefit to the SP4 is that it’s cheaper – in other words, it’s an inferior but more affordable version of the Surface Book. By the end of the Surface Book demos, I saw people asking on Twitter, “why does Microsoft even need the Surface Pro 4?” and as far as I can tell, the answer is “because the Surface Book starts at $1499”.
One quick comment on OEMs. Unlike the Surface, with which Microsoft said it was creating a new category, and therefore has somewhat been able to skirt around the fact that Microsoft is now competing with its partners, the Surface Book was not burdened with any qualifiers. It was simply positioned as the best, the thinnest, the most powerful Windows 10 laptop. Period. If I’m one of Microsoft’s OEM partners, I’m betting I’m not very happy about that at all. However, those OEMs have only themselves to blame if Microsoft, which has zero experience making laptops, can produce a more compelling computer than companies with decades of experience. What does it say about Microsoft’s OEM partners that Microsoft has been able to do this to them, and that it’s willing to do so? The one saving grace is that the vast majority of Windows PCs are sold at well under $1500, and so this really isn’t targeted at the core of the Windows PC market. But it’s still a finger in the eye of the Windows OEMs.
Lastly, this parting thought. Satya Nadella took the stage at the end of the event and gave the kind of speech he’s given at almost every event Microsoft has had since he took over as CEO – big picture themes, Microsoft’s mission statement, and so on. I’m a fan of Nadella, but this speech felt like so much waffle after what was a really compelling set of device introductions. All the energy seemed to go out of the event when he took the stage. The other thing that happened was that, as he mentioned them, I suddenly remembered that Microsoft had introduced a new Band and the Lumia 950 devices earlier in the event. I had almost completely forgotten those by the time the Surface stuff was over with. They were so completely overshadowed by what came after, and for all Panos Panay’s attempts at enthusiasm about the Lumias, it was very clear that he had inherited those products and his true loves were the Surface products. I might still write about the Lumias separately at some point, but for now I see little in them that’s going to transform the fortunes of Windows Phone or Microsoft’s phone hardware business.
On the day Apple Music launched, I wrote a first-day “review” of sorts, based on my first few hours with the service. Now that several months have passed (and I’m about to cross over from the three-month free trial to being a paying customer), I thought I’d revisit my thoughts on the service, and talk through how I’m actually using it today.
I’m not listening to Beats 1
As I mentioned in the first day review, I didn’t feel like Beats 1 was for me, and nothing has really changed in that respect. The reality is that I haven’t regularly listened to normal linear radio since I was a teenager, when I used to listen to it as I got ready for my day in the morning. Over the last 15 years or so, I’ve essentially listened just to the music I want to, rather than allowing my listening to be driven by others. I’ve dabbled with Pandora from time to time, but with that and other services I’ve generally found that they take too much training to be useful, or simply don’t have a high enough percentage of music I like to be worth the time. Beats 1 falls into the latter category – my tastes in music continue to be too narrow/specific for what Beats 1 offers. Apparently, many others love it, but not me.
“For You” has become much more useful
The “For You” tab is the place where you’re supposed to find things that are relevant to you and your tastes. When I first started using Apple Music, I found this tab to be somewhat lacking – the playlists and albums recommended either felt random or too literally connected to albums I already owned or tastes I’d explicitly specified. The recommendations were either things I already knew I liked, or they were too out there. But since that time, the recommendations seem to have got better, as I’ve tried quite a few of the playlists out, and I’m liking more of what’s recommended.
“Add to my music” is the key
To my mind, the most important feature of Apple Music is the “Add to my music” button (often just a little plus sign, and where that isn’t present, as a menu item behind the three little dots). As someone who still listens to a lot of the music I own, I like the idea of adding to that library far better than having to recreate my library from scratch, or simply search every time I want to listen to something. As such, the Add to my music option in Apple Music is a perfect fit.
However, I’m not just using it as a “permanently add this to my collection of music” button, but rather as a way to quickly bookmark something for later listening. For example, when I browse through that For You tab, and a playlist or album looks interesting and worth checking out, I add it to my music, even though I haven’t listened to it yet and might not get to it for a while. But everything I add in this way shows up in the Recently Added section on all my various devices, and as such the next time I’m listening to music it’ll be right there. If I like it, then it stays in my library, but if I don’t, it’s simple to remove it again. My music library has grown quite a bit this way over the past few months, and especially as the For You recommendations have been getting better.
Playlists are useful in several ways
I find playlists very useful, and they’re a key part of how I’m using Apple Music. But I’m using them in a few different ways. First off, Apple Music suggests various playlists in For You, some of which look interesting enough to add. In some cases, the playlist itself is good enough that I just keep that in my library as something to listen to again. But in other cases, I only like one or a handful of songs in the playlist, and in that case I typically click through on the song or artist and start playing more of their songs and albums, often adding those to my library while deleting the original playlist. I might also check what other playlists those songs or those artists are part of, and add those to my library. This is often a good way to find similar music and/or artists, as is the Related Artists section.
The other thing I’m really enjoying is the “Intro to…” playlists Apple Music has for many artists. These are almost like a new take on the Greatest Hits albums many artists throw out after they’ve produced enough albums, but they’re not quite that straightforward – they often combine well-known songs with other more obscure ones. And then there’s often the option of a second, “Deep Cuts” playlist for the artist if you want to go deeper. Using these playlists, I’ve rediscovered (rather than discovered for the first time) artists from my past and especially my childhood, and in the process introduced a number of them to my kids too.
Things I wish were better
Although I’m using Apple Music a lot, there are still things I wish were better. For one, I wish the Music app on my iPhone would give me more information about artists and albums than it does, more in line with the desktop version of iTunes. I typically use the desktop version to discover new music and add it to my collection, but I do a lot of my listening on my phone. Sometimes I want to learn more about an album or an artist, and it’s either unintuitive how to get that information to display in the app, or impossible to get it to display at all. I quite like the artist descriptions and album reviews Apple has always provided in iTunes and now in Apple Music, but these can be hard to get to or missing entirely in the mobile app. (Artist descriptions are available if you click on artist names enough times, even though there’s no visual indicator that this will happen, while album descriptions are totally missing as far as I can tell.)
My other main complaint is that Apple Music doesn’t seem to deal with caching and streaming all that well, especially on weaker cellular connections. Sometimes, when I’m in the car, if I skip too quickly through songs in a playlist or album that I haven’t downloaded to my phone, things just seem to get stuck – the song is ostensibly playing, with the progress indicator and time still moving, but no audio is heard. I imagine this is just a bug, but for a service which now majors on streaming, it needs to be fixed.
Apple Music probably isn’t for everyone
As I mentioned in that first-day review, Apple Music is a great fit for me – the combination of my own library and new music I discover through the service is just what I’ve always wanted from a subscription music service. But the more I talk to people in real life and online through Twitter and Facebook and so on, the more I realize it’s not a great fit for everyone. Obviously, there’s a substantial number of people who simply don’t listen to that much music, who aren’t a good fit for any music service. But I’m talking about people who do care about music. Unless you care a great deal about mixing your library and new music, there’s not a huge amount in Apple Music to convince you to switch from another service. In fact, if you’ve made a heavy investment of time in another service, switching can seem daunting – Apple Music doesn’t offer a way to import playlists from Spotify, for example, which could dramatically smooth the way for switchers. But there also aren’t that many unique features that would draw you over from another service. And unless you’re buying one or more albums a month at the moment, there may not be that strong a reason to start paying for a subscription music service either. Beats 1 – one of the most distinctive aspects of Apple Music – is free to anyone, and many of the other features are relatively undifferentiated from other services.
Apple at this point has two fundamental problems with Apple Music – relatively few people have even tried the service (something I wrote about here for Techpinions Insiders), and of those who have, there’s anecdotal and survey evidence that many are turning off the auto-renewal and allowing their subscriptions to lapse before they become paying customers. So far, I’m not seeing a great deal of action from Apple that would meaningfully change either of these things, for all the value I’m getting out of the service personally.
Reviews from all the major publications which were given units of the new iPhones ahead of the launch came out last week. I got my review unit last Friday (along with many Apple customers), but since I’ve now been able to spend a few days with it, I wanted to share a few thoughts. I’m going to try very hard not to rehash everything everyone else has said, and also to add a bit more insight in areas I haven’t seen others write about yet. I’m also going to spend a bit more time on the cameras than most of the other reviews – I’m not the world’s greatest photographer, but I do enjoy taking pictures and my phone is by far the camera I use the most to take pictures of my kids, so it has to be good. As such, I’ve spent a good chunk of time over the last few days taking pictures and videos of various things to test the camera specifically and I’ll share some examples below.
The new devices are virtually unchanged on the outside from the previous versions, and that’s certainly the first impression they give too. They are a hair thicker and a tiny bit heavier than their predecessors, but if I hadn’t known that I might well not have noticed. Along with the iPhone 6S Plus I’m reviewing, Apple also sent one of its new leather cases (I have the saddle brown one) and I’ve been using that for the last few days, which has made comparisons between this phone and the iPhone 6 Plus less relevant.
As with previous iPhones, the hardware feels very solid, well balanced, and high quality. Nothing’s changed there. The aluminum and glass are both supposed to be more durable than last year’s, but I can’t think of a way to test either of those that doesn’t involve trying to break the phone, so I haven’t tested either.
The new vibration engine (apparently now one and the same as the taptic engine) is very nice, too – I don’t use vibrating alerts much anymore since I started wearing the Apple Watch, but on the odd occasions when I still get them (mostly for phone calls), they’re more substantial than they used to be. It’s hard to know for sure, but I feel like the phone speaker has got better too – calls sound clearer and louder than before.
3D Touch is arguably the headline feature on the new iPhones, and from the moment I got to use it in person at Apple’s September event I’ve said I thought it was going to be important.
Just tried 3D Touch on the new iPhones. This is going to be huge. Both on home screen and in apps including third party apps
Having now used it for more than just a few minutes in a tightly-controlled demo environment, I have a few additional thoughts:
This is a big deal, but for now it’s mostly used by Apple’s own apps and just a handful of third-party apps. That has two implications. One, if you tend to use third-party replacements for key things like Mail, Calendar, and so on, you’ll find 3D Touch a lot less useful, at least for now. Two, that may mean you migrate back to Apple’s own apps in some cases, to make use of this feature. I’m curious to see how quickly most third-party app-makers add support – if I were them, I’d do it quickly, especially the Quick Actions functionality. I suspect this will be like the Apple Watch, in that apps that fail to support it will find users replacing them with ones that do.
Especially on the 6S Plus, which is the one I’m testing, 3D Touch makes apps on the top half of your home screen less useful than those on the bottom. Yes, you can use the Reachability feature to bring those higher-up apps within easier reach of your thumb, but that adds friction in the use of a feature that’s all about reducing friction. I haven’t done this yet, but I can see myself rearranging the icons on my main home screen based on which I’m likely to use Quick Actions with.
Speaking of which, I’ve always kept the Camera app in the top right of my first home screen, because I do want access to it when the phone is unlocked, but I most often trigger it when the phone is locked, and therefore use the camera button on the lower right of the lock screen. However, the introduction of 3D Touch and the much-faster Touch ID sensor (on which more below) means I rarely see the lock screen anymore, and even when I do, using that lock-screen camera button is less flexible than the app icon for the camera on the home screen. I wish I could use 3D Touch in some way on the lock screen – that’s something Austin Mann predicted Apple would do, but it didn’t. I suspect that’s because the lock screen is becoming less relevant, but it also means I likely need to put my Camera app icon somewhere closer to the bottom of the screen, and maybe even on my home row.
For now, I’ve been using 3D Touch more in apps than on the home screen, and that’s partly because I tend to use third-party apps more than Apple’s own. I’ve used it most in Instagram and the Photos app, where I’m using it both to view Live Photos and to quickly review recently-taken pictures when I’m still in camera mode. The latter is a really great addition, and I think third party developers will likely come up with lots of cool ideas for using this feature.
One thing I think developers should be thinking about is making Quick Actions user-customizable. Instagram, for example, chose to make access to the Direct inbox, Search, and View Activity the three additional Quick Actions beyond the obvious New Post option. If I had my way, I’d probably choose other aspects of the app to get quick access to, and I’m betting I’m not alone in that. Launch Center Pro does a great job of this as a key feature, and I think it’s brilliant (h/t @rjonesy).
Related to this, the order in which functions appear in Quick Actions is interesting too – I think we’re accustomed to reading menu-type lists (along with everything else we read) from top to bottom, but depending on where an app sits on your home screen, the menu items may appear in what seems to be reverse order (I think the rule of thumb is that the thing you’re most likely to want to use is closest to the app icon itself, for easy thumb access, but that may mean it’s at the bottom of the list). That means something of a learning curve for users, but is probably also something developers should think about in designing the order of items (and any icons they use alongside the labels).
Lastly, I’ve noticed some of the negative side effects of the introduction of 3D Touch. One of the things that’s happened to me several times is tapping on web links without any result. I think what’s happening is that I’m tapping just hard enough to trigger 3D Touch, but not holding it at all, which leaves me in a sort of limbo where I don’t get either the desired result or any visual signal that I’ve accidentally activated 3D Touch either. John Gruber has talked about the problem of trying to delete apps, a function I suspect we’ve all kind of activated by pressing down fairly hard on the screen to trigger the wobbling icons. I’ve had this problem too, and there are several other places where I’ve previously pressed fairly hard for the “long press” but now have to get used to pressing only gently. No doubt the mental and physical adjustments involved will come in time.
I won’t spend lots of time on this, as it’s been well-covered elsewhere, but the Touch ID sensor is dramatically faster now, and frequently completes authentication before the lock screen even pops up fully. That’s wonderful for quick access to functionality, but as others have pointed out (and as I alluded to in the context of the camera above) it does mean the lock screen becomes a lot less useful, unless you trigger the home button with a finger not registered for Touch ID, or use the side button to turn on the phone. Training the Touch ID sensor is also much quicker now – not something you have to do a lot, but I added several fingers to the new phone more or less immediately, whereas on the iPhone 6 Plus I put off doing so for a long time because the process was so slow.
My initial reaction to the Live Photos demo was that this reminded me of something out of Harry Potter (it may have helped that my daughter has recently been reading the books and seeing the films for the first time).
Apple brings Harry Potter technology to the iPhone – Live Photos — Jan Dawson (@jandawson) September 9, 2015
Again, I got a brief demo of the feature at the September event, but it’s very different to be working with your own limited skills as a photographer rather than with carefully chosen images pre-installed on a demo phone. I’m glad I read some of the reviews last week ahead of trying to use it, because it meant I was immediately aware that I needed to change my past behavior slightly and hold the phone steady both before and after taking the still image (though the software on the phone now knows to cut off the video if the phone is lowered prematurely). That probably shortened the learning curve somewhat, but it’s still an interesting process to figure out how best to use Live Photos. Apple’s demo photos were an interesting mix of moving objects and people, and I’ve definitely had more luck with the latter than the former in terms of getting compelling Live Photos out of the process.
Below are some examples of Live Photos from the last few days – they’re a mix of objects and people/animals, and you’ll see how variable the results can be. For what it’s worth, sharing these anywhere other than in iOS is still difficult – I connected my phone to my Mac and used QuickTime to record the screen of my iPhone as I used 3D Touch to interact with them, which produced the series of 7-second videos you see below. In each case, the video starts with the still image, shifts to the video, and returns to the still (you may hear background noise from my home office on the audio on some of them – no idea why QuickTime records microphone noise when capturing the iPhone screen).
Overall, I’m really enjoying Live Photos, and there are some interesting things to note:
Even when in Live Photo mode, you can capture multiple pictures in quick succession – at first, I was waiting for the yellow Live indicator to disappear before taking the next picture, but I found that it works just fine even when the pictures are taken close together. The video still attaches itself to each picture in the same way, which means you get an interesting effect when scrolling through pictures quickly and playing the Live Photo – you’ll hear almost the same background noise on each, with the beginning and end shifting a fraction of a second each time. This is very clever stuff on Apple’s part.
I kind of wish Apple had made these Live Photos auto-play as you scroll through your camera roll (or given users the option of selecting this) – it would make your camera roll come alive in a completely different way, whereas for now your camera roll looks entirely static until you decide to engage with an individual picture. Maybe it’s the Harry Potter thing again, but I like the idea of these pictures looking alive from the get-go, rather than having to be prodded into action. I’m sure the team at Apple responsible for the feature spent at least some time discussing this decision, and ultimately came down on the side of having them be still by default – perhaps because scrolling through moving photos was too distracting visually, perhaps because of the impact on battery life, or for some other reason. It may yet change over time or become a user option.
Related to this, the blurry transition between the still and the video isn’t my favorite element here. I can see why the engineers thought it needed a clear visual transition from one mode to the other, but when you’re reviewing a bunch of pictures it’s an unnecessary visual obstacle and delay that adds little once you’re used to how the feature works. At the very least, it feels like it should be quicker.
The cameras have always been one of my favorite features of the iPhone, and I continue to find them better than any other smartphone’s, at least for general use. The new cameras offer improvements over last year’s, which were already very good (see my review from last year here and this Flickr set for lots of pictures from last year’s phones).
My wife and I went to pick up my kids from her parents’ farm on Saturday, and I had a chance to take some pictures and video while we were there. We then went on a drive up the canyon near our home on Sunday afternoon, and I went on a brief hike with my son this morning too, so I’ve taken pictures and video in a few different settings over the last few days. Overall, I’ve been very impressed by the camera, both for photos and videos.
Below is a panorama I took this morning – it won’t look all that impressive below, because I’ve reduced it to fit here, but if you click on it, it’ll open the full-size image in a new tab or window.
The full image is over 13,000 by 3,600 pixels, and I think it looks fantastic (not because of my composition, but because of how well the various levels of light and shade have come out, while retaining a lot of detail). This is one of the huge strengths of the new cameras – the combination of high resolution and retention of detail, which will allow for much more usable cropping of pictures.
The two images below are a virtually complete crop and a partial crop of the same picture, both of which I’ve edited using Snapseed. I’m including them because of the detail that remains in the cropped version.
I’ve amped up the color in these pics a little, but I’ve included some other unedited shots below so you can get a sense of how these come out of the camera. In both cases, you can click on the picture and it’ll show full-size. For more photos, mostly unedited, see this Flickr set.
As for video, the iPhone continues to have a great slo-mo camera, but of course it now also has 4K video. I haven’t spent a ton of time using this, but one of the most striking things with video on the iPhone 6S Plus is how good the image stabilization is getting. I’m including below a few YouTube embeds which show off this capability – apologies for the slightly dizzying cinematography on some of the videos, but I was trying to test the camera’s ability to adjust to changes in lighting.
4K video – pan across mountain landscape:
This one was shot by one of my kids out the car window while driving on a bumpy, winding road through the canyon – it’s 1080p only:
This is another 1080 rather than 4K video, but it shows off the image stabilization quite well, as well as the quality of the video capture:
Other than the specific features I’ve reviewed, the one overarching theme with the new iPhone is speed. Touch ID is faster, as I’ve already mentioned, but everything else is noticeably faster too, as a result of the new chip, more RAM, and a variety of other improvements. There’s almost no lag now for a number of tasks which used to take time. And the overwhelming impression you’re left with is that you and your clumsy fingers are now the biggest source of latency for a lot of what you’re doing. I find myself more drawn to Siri and to voice dictation for text entry than before, simply because it now feels like I’m slowing everything down when I type things in.
All part of the pattern
In conclusion, the iPhone 6S range feels like a continuation of the pattern for Apple. In a piece I wrote on Techpinions a while back, I talked about the fact that Apple often builds new features and functionality incrementally over time, and it’s often not clear where a particular feature is heading until several years after its original launch. The iPhone 6S models feature both the outgrowth of earlier features (e.g. Force Touch on the Watch maturing into 3D Touch on the phone, the Touch ID sensor getting enormously faster) and, likely, the beginning of new things that aren’t yet apparent. 3D Touch in particular feels like it’s just getting started, and could spread both to other parts of Apple’s product line and to other parts of iOS (count how many of Apple’s own apps don’t yet support it, for starters). But I’m sure there’s far more here, too, though it will probably only become clear as Apple launches future devices.
Last week, following the release of iOS 9 by Apple, Mixpanel (along with other analytics firms) began releasing data relating to the pace of adoption of iOS 9. That data suggested that iOS 9 was being adopted more rapidly than iOS 8, and also that it had reached around 30% by the end of the day on Saturday. Then, this morning, Apple issued a press release about the new iPhones, but in passing noted that iOS 9 was now on more than 50% of devices, based on data from