The following is a guest post by Kyle Samani, Co-Founder and CEO of Pristine.io. All views expressed are his own.
Wearables are one of the hottest trends in technology today. They are forecast to grow tremendously over the next decade as consumers adopt these devices to track health data, interact with notifications, and look up information. Indeed, PwC predicts that sales of wearables could reach 130 million units in 2018.
As CEO of a startup (Pristine) that builds enterprise software for Google Glass, I of course have a very vested interest in understanding how people interact with wearable devices. But the more time I spend interacting with, thinking about, and talking about wearables, the more I realize that wearables aren’t really about wearables. Wearables are about the physical world.
A LOOK AT GLASSES
When Sergey Brin first unveiled Glass in 2012, he showed images from a concert in which hundreds of people were holding up their phones to record an artist. He showed a subway full of people, all with their heads down buried in their phones.
Google says it built Glass to solve this problem. Glass was intended to help people stop fiddling with technology and instead live in the moment. Glass was designed to free consumers from interacting with their personal technology.
At Pristine, we adopted this framework for our enterprise solutions from day one. We proudly tell our customers that the entire point of our solution is to wear Glass but hardly interact with it. It’s counterintuitive, but our guiding hypothesis is that the wearer shouldn’t be forced to fiddle with the glasses, but rather should focus on the task at hand. Our tech for Glass helps mobile workers do exactly that. Hundreds of Glasses later, we can confidently say that we were right. Mobile workers don’t want to be distracted by interaction with Glass. They want to do their jobs, and be aided by Glass when they need it.
But what about solutions like Atheer and Meta? These devices promise an incredible future filled with rich interactivity through glasses. The problem is the form factor. The “immersive glasses” are just that: immersive. They cover both eyes, look like a pair of ski goggles, and put screens in front of each eye that create an experience closer to an Oculus Rift than a Google Glass (Microsoft’s HoloLens presents an interesting “screenless” experience, though, with a lot of promise!). Touching virtual objects projected in the air is wildly unnatural and presents a huge educational challenge. These devices may have some chance in the enterprise, particularly in applications where sterility is important, but the consumer use cases outside of gaming are weak.
WHAT ABOUT WATCHES?
Watches are more nuanced than glasses. They don’t offer nearly the potential for enterprise use cases that glasses offer. On the other hand, it’s much more socially acceptable to wear a computer on one’s wrist than on one’s face.
Apple highlighted three tentpole functions for the Apple Watch: a timepiece, a health and fitness tracker, and a new way to communicate. The first two are intrinsically passive activities that require no input or interaction from the user. The third – a new way to communicate – refers to notifications, messages, favorites, sharing sketches, and sharing one’s heartbeat. But even Apple recognizes that the Watch isn’t about texts; it’s about connecting seamlessly with other people.
Apple’s messaging makes sense. They recognize that a device strapped to one’s wrist is not intrinsically designed for robust interactivity. If it can’t be interactive, it needs to be passive. If it’s passive, it’s about getting back to the real world. And indeed, most of what Apple highlighted in the keynote was not about the device itself, but about how the device fits into the real world.
The initial release of Android Wear focused on expediting and improving the processes around notifications. Google wanted to make notifications more convenient and useful so that people spend less time on their phones, and more time in the real world. Several of my colleagues with Android Wear watches have verified that it excels at expediting notification management.
WHERE ARE WEARABLES GOING?
Although wearables may replace certain functions of the smartphone, especially around notifications, they will never provide the flexibility or interactivity that smartphones provide. Thus, wearables will never fully replace the smartphone, but rather will complement it. They may eventually assume 75% of the smartphone’s functionality, but they won’t provide core functionalities like a canvas for reading and writing on the go.
To put this idea in practice, consider reading and typing on wearables. Can you imagine poking at your wrist for an hour? Or waving your hand in the air, poking at virtual objects? (This will fail for the same reason desktop touch screens never took off: it’s just not a comfortable user experience.)
Instead, wearables will assume very specific functions from the smartphone so that you don’t have to perform that particular function on your smartphone anymore. Why record calories manually in your phone when your calorie-sensing tooth can record it for you? Why record your pulse manually when your smartwatch can do it for you? Why pull your phone out for a notification when you can look at your wrist?
So the future of wearables isn’t to totally replace the smartphone, and it’s not necessarily about ‘wearability’ either. It’s about dividing functional responsibility: breaking off certain functionality from the smartphone, one step at a time, to create the best experience on each device.