
Post Launch Metrics Analysis – It’s a Journey


To be honest, I was dreading it. Then I finally stopped procrastinating and dove in. After a few hours adding to my partner’s analysis, I was happily immersed in Google Spreadsheets, creating graphs and drill-down reports.

I’m talking about crunching the numbers: gathering all the metrics you’ve painstakingly collected from Flurry, Google Analytics, iTunes Connect, Google Play, and user surveys; creating the presentation that backs up the experience of the app for stakeholders, users, and developers; and coming to that final answer: did we fulfill our goal?

1. First, create a specific goal

For our most recent app, our goal was pretty specific. I learned this lesson from a collaborative project in the past: if your team is not clear on what the goal is, measuring its success afterwards can become a murky blame game, or a general and unhelpful “it was great.” All team members will agree that they want to learn from each release, and the best way to learn is to have a clear, testable hypothesis. A single goal also becomes a focus for all development: if something doesn’t contribute to the goal, it’s postponed. Understandably, this is difficult in more evolved legacy software, but in prototypes it should be front and center.
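As a rough illustration, a goal framed as a testable hypothesis can be written down as a simple pass/fail check. The metric name and target below are hypothetical placeholders, not our actual numbers:

```python
# Hypothetical sketch: a launch goal written as a testable hypothesis,
# e.g. "at least 60% of users who start the signup flow complete it."
GOAL_METRIC = "signup_completion_rate"   # placeholder metric name
TARGET = 0.60                            # placeholder target agreed before launch

def goal_met(started: int, completed: int) -> bool:
    """Return True if the observed completion rate meets the pre-agreed target."""
    rate = completed / started if started else 0.0
    print(f"{GOAL_METRIC}: {rate:.1%} (target {TARGET:.0%})")
    return rate >= TARGET

# After launch, plug in the real counts from your analytics export, e.g.:
# goal_met(started=1830, completed=1204)
```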

2. Uncover issues

We realized that one feature we popped in at the last minute – to handle an edge case – prevented users from fully realizing the goal. It wasn’t totally clear until we compared all the data sources, but it was a key learning. From this, the stakeholders and developers unanimously decided to risk the edge case to get the majority usage. This was uncovered by comparing the data metrics – web usage from Google Analytics – to the user surveys. Lining up the different data sources, and clearly comparing them with each source referenced, showed that with this bug fixed, user satisfaction would have been much higher. Part of it is facing the music – accepting the bad with the good – and diving into why the results were not what we expected.
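As a rough sketch of what “lining up different data sources” can look like in practice, here is a small pandas example that joins a Google Analytics export to survey responses by week. The file names and column names are hypothetical stand-ins for whatever your exports actually contain:

```python
import pandas as pd

# Hypothetical exports: file and column names are placeholders for your actual data.
ga = pd.read_csv("ga_weekly_usage.csv")        # e.g. columns: week, sessions, task_completions
surveys = pd.read_csv("survey_responses.csv")  # e.g. columns: week, satisfaction_score

# Line the sources up on a shared key so they can be compared side by side.
weekly_satisfaction = surveys.groupby("week", as_index=False)["satisfaction_score"].mean()
combined = ga.merge(weekly_satisfaction, on="week", how="inner")

# A simple derived metric makes mismatches jump out: weeks where usage was high
# but satisfaction dipped are the ones worth drilling into.
combined["completion_rate"] = combined["task_completions"] / combined["sessions"]
print(combined.sort_values("satisfaction_score").head())
```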

3. Gathering Data

We use:

  • Flurry for in-app programmatic logging, on iOS and Android
  • iTunes Connect for usage data, which we reference even though it’s pretty high level
  • Google Analytics – most of this app was mobile web, so this was a necessity
  • Database metrics – we capture and store data points in our remote database; if you think about logging and testing while modeling your database, it can be a great source of data to corroborate the other sources (see the sketch after this list)
  • Google Surveys for user surveys; we have also used SurveyMonkey in the past
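To make the database-metrics point concrete, here is a minimal sketch of that kind of event logging, using SQLite as a stand-in for whatever remote database you actually run. The table and event names are hypothetical:

```python
import sqlite3
from datetime import datetime, timezone

# Stand-in for a remote database; the schema is a hypothetical example of
# modeling a table with post-launch analysis in mind.
conn = sqlite3.connect("app_metrics.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        user_id TEXT NOT NULL,
        event_name TEXT NOT NULL,      -- e.g. 'signup_completed', 'report_shared'
        occurred_at TEXT NOT NULL      -- ISO 8601 timestamp for easy export
    )
""")

def log_event(user_id: str, event_name: str) -> None:
    """Record a data point that can later corroborate the Flurry or GA numbers."""
    conn.execute(
        "INSERT INTO events (user_id, event_name, occurred_at) VALUES (?, ?, ?)",
        (user_id, event_name, datetime.now(timezone.utc).isoformat()),
    )
    conn.commit()

# log_event("user-123", "signup_completed")
```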

4. Presentation

We usually go over the general responses, then pick out a few odd results and dive into them with our full arsenal of data. Many times it’s uncovering an issue or realizing our assumptions were false. This is the most interesting part, and usually the biggest learning. The findings can be anything from general user latency issues to functional issues, design issues, and so on. The next release can then pivot slightly to better define the issue, or we can go in an entirely new direction. It’s critical to be honest and clear in these sessions, to make sure there’s no defensiveness and we’re all on the same page about what happened. Setting up the logging frameworks early in the cycle and preparing the client for the results, even during the launch, helps manage expectations about how apps perform.
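For the drill-down itself, the pattern is usually just slicing the same metrics by another dimension until the odd result has an explanation. A hypothetical pandas example (the file and column names are placeholders):

```python
import pandas as pd

# Hypothetical combined dataset built from the sources above,
# e.g. columns: platform, screen, load_time_ms, satisfaction_score
data = pd.read_csv("combined_metrics.csv")

# Drill down on an odd result: slice the metrics by platform and screen to see
# whether latency on one screen explains a dip in satisfaction.
drilldown = (data.groupby(["platform", "screen"])
                 .agg(avg_load_ms=("load_time_ms", "mean"),
                      avg_satisfaction=("satisfaction_score", "mean"),
                      sessions=("screen", "count"))
                 .sort_values("avg_load_ms", ascending=False))
print(drilldown.head(10))
```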