Making Sense of Data
Thank you to everyone who has pre-ordered my book, The Adaptability Paradox: Be the Architect of Your Success! There is still time to pre-order! Please support my journey as an author by joining my author community. Your support means so much to me!
My four-year-old wasn’t feeling well last week. He woke up with a bad cough and a runny nose. I grabbed the forehead thermometer and took three measurements in short succession.
102.2
99.4
100.2
Hmm. I was 96.3.
I took out another thermometer and got a 99 reading. Then, I used my mom’s thermometer and got three 98-degree temperatures.
With three thermometers, we obtained three different results.
This is what performance marketing measurement was like, too: three different sources, three different numbers.
In many of our roles, we’re surrounded by data. The problem is that a lot of that data isn’t good. A study by MIT Sloan found that bad data costs most companies 15 to 25% of revenue. An IBM study found that businesses lose $3 trillion annually due to bad data.
This means that in the same way a simple question like “What’s my child’s temperature?” can be hard to answer, so can other seemingly simple questions: How many registrations did an app have on a particular day? How many sessions did a user have?
How do we make sense of all of the data?
Choose the source you trust
The source you trust should be consistent and valid. The second thermometer we used is our source of truth. It is highly reliable (test-retest consistency), unlike our first thermometer, whose readings vary, and it has high validity (it measures the thing it’s supposed to measure).
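To make that distinction concrete, here is a minimal sketch in Python using the readings from the story (my mom’s three 98-degree readings are treated as exactly 98.0 each). The spread of repeated readings captures reliability; validity is a separate question that repetition alone can’t answer.

```python
from statistics import mean, stdev

# Readings from the story: the forehead thermometer jumped around,
# while my mom's thermometer gave three ~98-degree readings.
forehead = [102.2, 99.4, 100.2]
moms = [98.0, 98.0, 98.0]

for name, readings in [("forehead", forehead), ("mom's", moms)]:
    print(f"{name}: mean = {mean(readings):.1f}, spread = {stdev(readings):.2f}")

# A small spread means good test-retest reliability (consistency).
# Whether the mean is close to the true temperature is validity,
# and repeated readings alone can't tell you that.
```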
In performance app marketing campaigns, Apple and Google had the “real” install numbers. As the app stores themselves, they were the most valid source. However, the app stores lacked granular source attribution information and downstream events. Though they were “truth,” we had to use something more actionable.
We used third-party Mobile Measurement Partners (such as Adjust, Singular, Kochava, or AppsFlyer) to compare apples to apples across our campaigns. The MMPs’ recorded installs were always slightly lower than the official app stores’ because they technically tracked “first open” rather than the install itself. To a mobile marketer, though, an install is useless if the person never opens the app, so the MMPs were valid at tracking the thing that mattered: the first open.
The MMP numbers were the main source for comparing campaigns against each other. They were actionable, our partners had access to them to optimize campaigns (unlike internal numbers), and they were straightforward. They had the necessary validity and consistency, but we had to be transparent with other stakeholders about why we used the MMPs as our primary source.
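Here is a minimal sketch of the kind of reconciliation this implies, with all figures invented for illustration: compare store-reported installs with MMP-recorded first opens and check that the gap stays stable, which is what makes the lower MMP numbers safe to act on.

```python
# Hypothetical daily totals -- the numbers are invented for illustration.
store_installs = {"2024-05-01": 12_400, "2024-05-02": 11_900}   # Apple/Google consoles
mmp_first_opens = {"2024-05-01": 11_800, "2024-05-02": 11_350}  # MMP-recorded "installs"

for day in store_installs:
    ratio = mmp_first_opens[day] / store_installs[day]
    print(f"{day}: MMP captured {ratio:.1%} of store-reported installs")

# If the ratio hovers around the same value every day, the MMP is "wrong"
# in a consistent way -- consistent enough to serve as the working source
# of truth for comparing campaigns against each other.
```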
Does it matter if the source is wrong?
From Emily Oster, I know that the exact temperature of a child’s fever doesn’t matter. For the most part, thermometer accuracy does not matter… except in the case of school.
According to the school, a temperature of 100.4 is a fever, and students must be fever-free for 24 hours before returning to school. So, if we used a thermometer that consistently read our son’s temperature too hot, he’d miss a lot of school. Conversely, if we used a thermometer that was inaccurate in the other direction, we could send him to school when he should not be there.
When looking at data, it’s essential to decide whether the source is inaccurate and when that inaccuracy could matter.
When calculating LTV, I frequently based my ad revenue estimates on previous performance. That meant the ad revenue piece of the formula was never going to be exact, but it didn’t particularly matter if it was a little off. With revenue coming from in-app purchases, subscriptions, and advertising, the ad revenue component wouldn’t move the answer unless eCPMs were wildly different than usual.
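Here is a minimal, hypothetical sketch of that reasoning. The shape of the LTV formula and every number below are made up for illustration; the point is that when ad revenue is one component among several, a moderately wrong eCPM barely moves the total.

```python
def ltv(iap_rev, sub_rev, impressions_per_user, ecpm):
    """Per-user LTV: purchases + subscriptions + estimated ad revenue.
    Ad revenue = impressions per user * eCPM / 1000 (eCPM = revenue per 1,000 impressions)."""
    return iap_rev + sub_rev + impressions_per_user * ecpm / 1000.0

# Hypothetical per-user figures -- not real numbers from any app.
base = ltv(iap_rev=3.50, sub_rev=2.00, impressions_per_user=400, ecpm=8.00)  # assumed $8 eCPM
off  = ltv(iap_rev=3.50, sub_rev=2.00, impressions_per_user=400, ecpm=9.00)  # estimate off by $1

print(f"LTV at $8 eCPM: ${base:.2f}, at $9 eCPM: ${off:.2f}")
# $8.70 vs $9.10 -- a 12.5% error in eCPM shifts LTV by under 5%.
# Only a wild swing in eCPMs would change the decision the LTV feeds into.
```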
It’s important to be aware of when inaccurate data matters, because sometimes completely accurate, perfect data is not worth the effort it takes to collect and maintain.
Is the direction right? (Is it wrong, but in the same way each time?)
One of the tools I used frequently was an app intelligence product that gave me estimated figures for competitors’ DAU (daily active users), revenue, engagement, and more. It was a useful tool, but it was also incredibly inaccurate.
The tool I used when I was at TMG, and others like it, all used similar methods of estimating and extrapolating the metrics they showed. Whenever it was time for us to renew our contract, I would look into all the leading competitors and pull data for all the TMG apps to compare against. None of these third-party intelligence tools were accurate.
However, we would use one tool instead of the others because 1) it was the closest to the “real” numbers and 2) it was directionally correct. We could use it as a sort of screening tool for trends. If a rival’s DAU fell, we could be fairly confident that their DAU did indeed decrease, whether or not the exact reported value was correct.
This is why we keep the first thermometer around. We think it always overestimates the kids’ temperatures, and its readings can be all over the place. But it’s directionally correct and super easy and fast to use. If it says no fever, it’s pretty definite there isn’t a fever. It’s a screening tool. If it says a fever, that warrants a second look with a more accurate device.
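If it helps to see the screening idea as code, here is a minimal sketch (the DAU series and the threshold are invented): treat a known-inaccurate estimate purely as a directional signal and only flag large moves for a closer look.

```python
# Hypothetical weekly DAU estimates from a third-party intelligence tool.
# The levels are assumed to be inaccurate; we only act on the direction of change.
estimated_dau = [410_000, 405_000, 362_000, 355_000]

def screen_for_drops(series, threshold=0.05):
    """Flag week-over-week declines larger than the threshold for investigation."""
    flags = []
    for prev, curr in zip(series, series[1:]):
        change = (curr - prev) / prev
        if change < -threshold:
            flags.append(round(change, 3))
    return flags

print(screen_for_drops(estimated_dau))  # [-0.106]: one ~11% drop worth a second look
```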
Many of us are surrounded by data in our roles, and it can be difficult to make sense of it. To make the best of a sea of data, find your source of truth, be aware of when inaccurate data matters or doesn’t matter, and allow data to be a screening guide.
Even if you know the data is “bad,” it can still have value if you believe it is an accurate signal (and treat it as such).