The Value Of Information

Unknown unknowns are always a bear we have to wrestle in life. "The value of information" is the name this bear takes in the small-scale research world.

Let's think about a simple research question: "who are a company's competitors and what do they do?"

Next, let's try to apply the ol' "begin with the end in mind" advice. What's our research plan? How will we gather the information needed to answer this question? Well, that depends! It depends on how valuable each piece of new information would be: how much would it contribute toward addressing our question?

To illustrate, let's look at two extremes on the spectrum of approaches to answering the question "who are a company's competitors and what do they do?"

The "Underachiever" Approach

The underachiever on our spectrum, "Mister U", is satisfied with building a list of competitors using a LinkedIn query, visiting each competitor's website, reading what they say about themselves there, and building his understanding of what these competitors do solely from that information.

The "Overachiever" Approach

The overachiever is a careful, neurotic person who suspects that whatever the company's competitors say about themselves on their website is at best partially true. Our overachiever, "Mister O", suspects that the competitors under-deliver in some areas, deliver forms of value they don't really understand and therefore don't charge properly for, and might have an overall customer satisfaction rate that is ever so slightly lower than what their website portrays. Mister O thinks that a thorough, high-resolution, trustworthy answer to the research question requires much more information than Mister U thinks is necessary. Mister O wants to:

  • Know every client that every competitor has ever worked with.

  • Have honest conversations with a large-enough subset of this pool of clients to build an accurate-enough picture of where the competitors succeed, fail, and so forth.

  • Gather facts that the competitors might find inconvenient or embarrassing and therefore would never publish on their website.

Mister U vs. Mister O

Who is right? Mister O's approach is obviously more expensive, difficult, and time-consuming. Is he right that the information he plans to gather is needed in order to properly answer the research question? Or is Mister U right? Is a much cheaper information-gathering approach sufficient?

Of course I've caused this ambiguity by not defining what a "sufficient" or "proper" answer to the research question is!

Here's the thing, though: nobody can define that for you. You, the small-scale researcher, have to decide where that threshold of "good enough" is.

Here's the other thing: you probably won't know where that threshold is when you start any given small-scale research initiative. In light of this, some recommendations:

1: If, at the outset of a small-scale research initiative, you find yourself making a lengthy "wishlist" of information to gather (you find yourself adopting that "Mister Overachiever" mindset), try to stop yourself, take a breath, and think about the cost and value of each element on your wishlist. If the wishlist feels like a big undifferentiated blob, try to decompose it into discrete elements. Consider rating each item on your list with 2 scores:

  1. How much uncertainty-reduction that kind of information could yield

  2. How costly it will be to gather that information

The bigger wins are those pieces of information that give a lot of uncertainty reduction at relatively low cost. Conversely, if a piece of information is going to be costly, difficult, or time-consuming to gather, then really, really ask yourself "what value would this particular piece of information have?"
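
To make that concrete, here's a minimal sketch in Python of the kind of scoring I mean. The wishlist items and the 1-to-5 scores below are invented for illustration; the point is the ranking by uncertainty-reduction per unit of cost, not the particular numbers.

    # Illustrative sketch: invented items and 1-5 scores, not a prescribed scale.
    # Each entry: (item, uncertainty_reduction, cost), both scored 1 (low) to 5 (high).
    wishlist = [
        ("What competitors say about themselves on their websites", 2, 1),
        ("A list of every client each competitor has ever worked with", 3, 5),
        ("Honest conversations with a sample of those clients", 5, 4),
    ]

    # Biggest wins first: lots of uncertainty reduction at relatively low cost.
    ranked = sorted(wishlist, key=lambda item: item[1] / item[2], reverse=True)

    for name, value, cost in ranked:
        print(f"{name}: value={value}, cost={cost}, ratio={value / cost:.2f}")

Even if you never write the scores down this formally, sorting your wishlist by that rough value-to-cost ratio tells you where to start.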

2: The mindset you want to cultivate here is the iterative, lean mindset.

Sometimes the end result of a small-scale research (SSR) initiative is a thing with a well-defined shape and edges, rather like the well-defined form of an academic/scientific paper. You probably won't be using SSR to get an actual paper written and published, but maybe your end-point is nevertheless quite well-defined. As a goal, this is 100% fine (if unusual for SSR), but it creates a dangerous mindset because it encourages "waterfall thinking", and waterfall thinking encourages wishlist-making, and wishlist-making encourages you to ignore the cost vs. value of the elements of the wishlist.

I'm going to reach a bit here -- and borrow from my friend Ian Crafford -- when I say the following:

What if, in each of 3 iterations, you learn 26% more than you did before you started the iteration? The math goes like this:

  • Iteration 1: you know 126% of what you knew before Iteration 1.

  • Iteration 2: you know 158.76% of what you knew before Iteration 1 (assuming the knowledge from each iteration compounds).

  • Iteration 3: you know roughly 200% of what you knew before Iteration 1.
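
If you want to check that arithmetic yourself, here it is as a few lines of Python (assuming, as above, a flat 26% gain that compounds each iteration):

    knowledge = 1.0  # baseline: 100% of what you knew before Iteration 1
    for i in range(1, 4):
        knowledge *= 1.26  # assume a 26% gain that compounds each iteration
        print(f"After iteration {i}: {knowledge:.2%} of baseline")
    # Prints 126.00%, 158.76%, 200.04% -- hence "roughly 200%" after 3 iterations.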

Knowledge doesn't really compound in a simple geometric way, and each iteration won't teach you the same amount, so don't think of this as proof that knowledge doubles over 3 iterations. Rather, think of it as a creative way to remember the power of small-but-generally-compounding gains over time in the face of the temptation to pursue a waterfall approach. Even if this 26% * 3 iterations = 2x thing is flatly unrealistic, I think we can depend on the ability of an iterative approach to help us course-correct and fine-tune more readily than a waterfall approach allows for.

So! To sorta conclude:

  • If information is likely to be costly or difficult to gather, assume it has low information value unless you can build a pretty strong case that it actually does have high value.

  • Use an iterative approach that starts with the most essential information or, if you don't even know what that is, with the information that promises to yield relatively high uncertainty-reduction at relatively low cost, and iterate towards a more robust answer to your research question. This iterative approach may force you to gather costly/difficult information somewhere in the project, but at least you will have proven to yourself in earlier iterations that the cost/difficulty is justified. This approach helps you avoid losing momentum early on in the project, which I've found is critical.

Finally, this value-of-information thing is an interesting lens through which to view the potential impact of large language models (LLMs) like ChatGPT on marketing.

My word of the week is "Bayescraft" (it was used in a delightfully snarky way here). My Bayescraft tells me that LLMs have changed the "zero point" for the value of information. Lots of information that would have previously been worth someone trading their email address, or looking through pages of SEOscaped product reviews, or suffering through a webinar/podcast is now going to be valued at zero because the experience of getting it from an LLM is better. Some people will still assign value to that kind of information, but ever more will consider it to have less than zero value.

I don't pretend that the implications will be simple to predict or easy to deal with. As ever in the face of commoditization, we need to move up the value chain.

What would the next step up the value chain look like when transposed to how you do marketing?

Last week's special on POV coaching went well. 4 sales closed and 2 more might close in the coming weeks.

Having a clear, incisive, defensible POV is definitely somewhere up the value chain from what LLMs do well today. I think it's fair to say that ChatGPT works quite hard to avoid having an identifiable POV at all. I have a book and coaching that can help with the POV thing if you'd like yours to be stronger or more clearly defined.

And as ever, I'd love to help some of you turn an ambition for small-scale research into a successful outcome. Reply with interest.

-P