Understanding the missteps of today’s activity trackers


2014 was a breakout year for activity trackers, with device after device entering the market. While we made significant progress in the world of quantified fitness, many consumers were still left questioning whether activity trackers truly addressed their needs.

The average consumer jumped at “smart” devices, hoping their ability to count steps, record activity, and log nutrition would be the magical cure for failed fitness attempts. What they found instead was akin to going out for tapas and leaving the table still hungry – a truth found in pages of one- and two-star Amazon reviews left by consumers frustrated by the inaccuracy and limitations of fitness devices. And it’s this frustration that causes one to wonder, “Why is it so difficult to accurately track gestures?”

To answer that question, it’s important to better understand how activity tracking works and what’s inside your wearable.

Why wearables miscalculate

Your wearable has motion sensors inside that measure the way you move. Building a device that can use those motion sensors to distinguish between different movements is tedious. First, engineers enlist users to test the devices by performing a set of activities — like walking or running — so the engineers can collect motion data and analyze it for patterns across the user base. When a pattern is recognized across a large set of users, an algorithm is written. This algorithm identifies a specific activity from the motion data and tracks how often that activity is performed.
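As a concrete illustration of that last step, one of the simplest possible algorithms counts steps by detecting peaks in the accelerometer's magnitude signal. The sketch below is a hypothetical, stripped-down version of the idea – real trackers apply far more elaborate filtering – and the threshold, debounce window, and sampling rate are illustrative assumptions, not any vendor's actual values:

```python
import math

def step_count(samples, threshold=1.2, min_gap=10):
    """Count steps from 3-axis accelerometer samples.

    samples: list of (x, y, z) acceleration tuples in g's,
    captured at a fixed rate (say, 50 Hz).
    A step is registered whenever the acceleration magnitude
    exceeds `threshold`, with at least `min_gap` samples between
    successive steps to avoid double-counting one impact.
    """
    steps = 0
    last_step = -min_gap  # allow a step at index 0
    for i, (x, y, z) in enumerate(samples):
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps
```

Notice that the threshold and debounce window are fixed numbers tuned to an assumed gait – exactly the kind of one-size-fits-all decision that the factors below expose as a weakness.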

From the user’s perspective, the processing of motion data by these algorithms is seamless: a regularly updated step count appears on the device screen. But from the engineer’s perspective, it is an intricate, time-consuming process, and one that is difficult to get right. Five factors play heavily into this challenge:

  • Selecting the Right Features – Features, or derived values extracted from a set of measured data, inform pattern learning and gesture recognition. But the number of candidate features rises quickly, and it is difficult to know which ones will consistently capture an activity’s pattern.
  • Your Unique Movements – How you move differs from how other people move; however, wearables track movements through algorithms tailored to the “average” person. So if your walking pattern falls outside that algorithmic average – say, due to a limp, an unusual stride, or pushing a baby stroller – you’ll see inaccurate results.
  • Body-Location Specificity – Today’s wearables are designed to be worn on a specific body part, often the wrist. That constraint lets engineers write more accurate algorithms, but it also means that moving the wearable to a different spot on your body degrades the accuracy of your data – a frequent source of frustration for users.
  • Robustness of the Data Set – Accurate activity tracking requires a large data set from a pool of users that represents the audience. This poses a challenge for fitness tracking: the breadth of people – young and old, amateur and professional, men and women – who engage in some level of fitness makes it both time-consuming and costly to gather enough sample data to make activity tracking work for everyone.
  • Predefined Gestures – Because it is difficult for product teams to recognize activities accurately for everyone, they limit the set of supported activities to a few key movements such as walking, running, and climbing stairs. Unfortunately, this decision prevents the user from getting a holistic view of her full day’s activity, limiting how much she can rely on the device.
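The first two factors can be made concrete with a toy example: each window of motion data is reduced to a handful of derived features, then labeled with whichever activity's pre-learned "average" feature profile is nearest. The feature choices, activity names, and centroid values below are illustrative assumptions, not any vendor's actual model:

```python
import math
import statistics

def window_features(window):
    """Derive summary features from one window of accelerometer
    magnitudes (e.g. a couple of seconds of samples)."""
    mean = statistics.mean(window)       # overall intensity
    std = statistics.pstdev(window)      # how irregular the motion is
    spread = max(window) - min(window)   # range of motion in the window
    return (mean, std, spread)

def classify(window, centroids):
    """Label a window with the activity whose stored feature
    centroid is closest (a nearest-centroid classifier).

    centroids: dict mapping activity name -> feature tuple,
    learned offline from many users' labeled recordings.
    """
    feats = window_features(window)
    return min(centroids, key=lambda a: math.dist(feats, centroids[a]))
```

If your personal gait produces features closer to another activity's centroid than to your actual activity's, the device simply mislabels you – which is the "average person" problem in miniature. Picking which features go into `window_features` in the first place is the feature-selection problem.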

Consumers heeded the wearable tech siren’s call to buy the first round of activity trackers. However, for devices to be truly useful and valuable to people, engineers, software providers, and consumers must work collaboratively to build wearable technology that is both accurate and personalized to how a user moves. While it won’t happen overnight, I’d expect more sophisticated gesture recognition technology that addresses these industry-wide issues to reach the market over the next year.

Adam Tilton is the CEO and co-founder of Rithmio, a gesture recognition software and analytics company. Tilton has five years of experience developing algorithms and software tools for signal processing and control applications. In 2014, Tilton won the Cozad New Venture Competition and put his Ph.D. at the University of Illinois on hold to run Rithmio full-time.


Join the Conversation

  • Jason

    Puff piece. – save your time and skip it

    • Jen Quinlan

      Hey Jason – what’s your take on gesture recognition in the wearables space and more specifically machine learning / pattern recognition?