B-to-B marketers have a strange relationship with data.

We generate mountains of data and have all sorts of tools that let us easily digest and use that data. And yet, most of that data isn’t germane to what matters most for a profitable business: revenue and costs.

Take the marketing qualified lead (MQL), for example. MQLs are an important milestone. Understanding which channels generate the most MQLs is useful data. But if the marketing team optimizes the funnel and doubles the number of MQLs by doubling the lead-to-MQL conversion rate, you aren’t likely to see new revenue double.

The reason for that is simple: the MQL is a leading indicator that is correlated with sales. Leading indicators are fine for comparing how campaigns perform relative to each other. But you can’t optimize revenue by pushing on a metric that is merely correlated with it.

Adding to the problem, the MQL represents only two states: qualified and not qualified. But to make good optimization decisions, we marketers need to understand four states:

  • Qualified, should be qualified
  • Qualified, should not be qualified
  • Not qualified, should be qualified
  • Not qualified, should not be qualified

Blindly increasing the MQL conversion rate (or virtually any other metric in a B-to-B setting) doesn’t tell you whether you are improving lead quality or just increasing the number of bad leads that take up your sales reps’ time.
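Those four states are really just a confusion matrix. As a rough sketch (the lead data here is invented, and in practice the “should be qualified” label would come from sales feedback or closed-deal data), here is how you might tally them:

```python
from collections import Counter

# Hypothetical leads as (was_qualified, should_be_qualified) pairs.
# The second flag is ground truth you'd get from sales, not from the funnel.
leads = [
    (True, True), (True, False), (True, True), (True, False),
    (False, True), (False, False), (False, False), (True, True),
]

counts = Counter(leads)
tp = counts[(True, True)]    # qualified, should be qualified
fp = counts[(True, False)]   # qualified, should not be qualified
fn = counts[(False, True)]   # not qualified, should be qualified
tn = counts[(False, False)]  # not qualified, should not be qualified

precision = tp / (tp + fp)  # share of sales reps' time spent on good leads
recall = tp / (tp + fn)     # share of good leads marketing actually surfaces

print(f"qualified & good: {tp}, qualified & bad: {fp}")
print(f"missed good leads: {fn}, correctly rejected: {tn}")
print(f"precision: {precision:.2f}, recall: {recall:.2f}")
```

Doubling the MQL count only helps if precision holds steady; the MQL metric alone can’t tell you whether it did.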

This is why the average tenure of a CMO was 42 months in 2016. We have lots of data to show that we are doing good things. But when you start reviewing revenue performance after year two and year three, all of those impressive-looking improvements to the top of the funnel aren’t translating into improvements in revenue on nearly the same scale as you would expect.

We marketers have embraced data to the point where we’ve basically fetishized it. But our zeal to use data to improve our decisions has blinded us to the weaknesses of that data.

Marketing data is not scientific data that represents the real world in an unbiased way. It is social science data that relies on assumptions and inferences by the user.

Presenting the data as if it is unbiased and without assumption and inference inhibits our ability to understand the data for what it is and use it effectively.

For example, a bounce rate on a web page doesn’t tell you whether the visitor came, puked, and left or whether the visitor came, found what they needed, and left. Those are both inferences marketers make based on the bounce rate.

What bounce rate is truly measuring is interactions between client machines and a server machine and whether or not each interaction is followed by a subsequent interaction before a cookie times out. Any conclusions about human behaviour are inferences.
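As a minimal sketch of that machine-level definition (assuming sessions are simply lists of recorded page hits before the cookie times out), bounce rate reduces to counting single-interaction sessions:

```python
# Each session is a list of page hits recorded before the cookie times out.
# A "bounce" is a session with exactly one interaction; the number says
# nothing about *why* the visitor left.
sessions = [
    ["/pricing"],                       # one hit: counts as a bounce
    ["/blog/post", "/pricing"],         # two hits: not a bounce
    ["/docs"],                          # bounce
    ["/home", "/features", "/signup"],  # not a bounce
]

bounces = sum(1 for hits in sessions if len(hits) == 1)
bounce_rate = bounces / len(sessions)
print(f"bounce rate: {bounce_rate:.0%}")
```

Whether the bouncing visitor was satisfied or disgusted is nowhere in that calculation.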

Your audience is composed of real people. The data they leave is like a shadow cast on your marketing stack.

If you know the people well, then you can get a lot of useful insight from their shadow.

There’s nothing wrong with being data-driven. But those of us marketing in B-to-B can’t limit ourselves to quantitative data alone, or we will struggle to get value from that data even as our metrics show us doing just fine.

The best way to get to know people is to listen to them and find out what causes them to interact with your brand.

Does your marketing team pick up the phone and listen?

Absent the knowledge of what causes interactions, improvements in most B-to-B marketing metrics are meaningless until you have validated the changes with a significant sample of new revenue.

If your marketing team is listening and basing their decisions on cause rather than correlation, then you can feel confident that a doubling in lead-to-MQL rate (or other similar metric) represents an equal improvement in the experience of buyers. Your leading indicators suddenly become a lot more predictive because the team is optimizing for causes that don’t appear in your metrics.

Another challenge with quantitative data is reaching a significant sample size. B-to-B marketers have fewer opportunities to test because it takes longer to gather a significant sample.

Sample size isn’t an issue with the insights you get from interviews, which means marketing can make better decisions in situations where reaching a significant sample is impractical. If you can run a test in a reasonable time, then by all means test. But where you can’t, relying on insights from interviews is better than relying on intuition.
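To see why sample size bites so hard in B-to-B, here is a back-of-the-envelope estimate using the standard two-proportion z-test approximation (the baseline rate, lift, and traffic numbers are invented for illustration):

```python
from math import sqrt
from statistics import NormalDist

def sample_size_per_arm(p1, p2, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a shift from
    conversion rate p1 to p2 with a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

# Illustrative numbers: a 2% baseline lead rate lifted to 3%.
n = sample_size_per_arm(0.02, 0.03)
print(f"~{n:.0f} visitors per variant")
```

With a few hundred relevant visitors a month, a test like that runs for a year or more before it reaches significance.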

Caret Juice combines two things: insights gained from optimizing marketing resources until they well exceeded top-one-percent performance benchmarks, and Jobs-to-be-Done innovation theory, which orients product development around what the customer hopes to accomplish. It applies both to optimizing complex B-to-B marketing systems.

Learn more about applying Jobs-to-be-Done innovation theory to B-to-B marketing systems.