Train both your LLM and intuition
Even in a time when data is devouring the world, we should treat intuition as a skill to be developed.
It’s midday at the office and I’m arguing with co-workers over how the product should work. I lead with my intuition–that hard-to-explain feeling in my bones, born from years of compressed experience: the thousands of micro-lessons that weave together into something that feels like certainty.
The room drops into that specific flavor of silence that follows when someone disrupts a seemingly settled decision. A manager leans back, fingers laced behind his head, wearing that expression I’ve come to recognize. I can practically hear the words before they come: “What data do you have to support this?”
And there it is. The awkward question I’ve wrestled with throughout my time in startups crystallizes in this moment. The trouble with answering it is that the outputs of my intuition, like those of the Large Language Models (LLMs) devouring the world today, are difficult for others to comprehend and trust. Just as researchers struggle to explain an LLM's output, I often find myself unable to articulate the precise mental steps behind my own conclusions, even when I’m right. It’s that professional sixth sense–my internal neural network–that whispers this is the right way before my conscious mind can build the argument.
Having been involved in enough startups, I find comfort in knowing that I’m not alone in this struggle. I’ve begun to see, broadly, two distinct species of decision-makers coexisting in the same team while processing information very differently–much like traditional rule-based systems and neural networks living side by side in the same application.
The first type–let’s call them the Validators–treats every decision like a rule-based system would. They require explicit frameworks and constraints to guide them toward an output. When faced with a problem, they want structure and reassurance that they’re making the right decision before taking any action. They schedule voice-of-customer sessions, analyze competitor strategies, constantly seek additional perspectives, and insist on A/B testing even minor changes. I don’t have to look far to find high-ranking Validators in the wild. Consider New York City Mayor Eric Adams, who once spent $1.6 million on McKinsey’s best PowerPointers for a 20-week study–one that validated that having trash bins in NYC is actually a good thing.
The second type of decision-maker–let’s call these the Intuitors–functions like a neural network. Just as LLMs compress their original training data into weights, Intuitors compress a lifetime of observations into weighted intuitions. This makes their inference queries–their gut reactions–fast and reasonably accurate, even if they can’t always explain the underlying reasoning. Back in the meeting room, my suggestion was dismissed for lack of explicit supporting data. Nevertheless, weeks later, a user voiced the exact concern I had raised. Only then was action taken.
Putting aside my feelings about how that situation played out, I understand why being a Validator is attractive to many. After all, you’re taking little risk. It doesn’t require sticking your neck out with a strong opinion. Moreover, a strong opinion requires good taste, and taste is difficult to both acquire and measure. And if something goes wrong, well, you’re abstracted away from the actual decision, since it was the set of procedures that pointed in that direction…
Maybe the analytical approach of the Validators has a place in enterprises, where data is abundant, where clear justification for decisions serves as bureaucratic cover, where accountability is diluted and not genuinely desired, and, ultimately, where personal risk is rarely rewarded. But I find this approach usually undesirable and, more importantly, crippling to a startup.
Consider Howard Schultz, the former CEO of Starbucks, who, after a couple of trips to Europe and Asia, had narrowed his focus to Japan as the company’s first international market. The board was resistant and asked Schultz to hire an outside firm to do a study. In time, the hired consultant came back with a big book and presented it to the board: Japan, it declared, was a non-starter, a market in which Starbucks could not succeed. In an interview, Schultz describes how his blood boiled with every disapproving statement from the consultant. Today there are around 2,000 Starbucks stores in Japan–but getting there required ignoring the data and intuiting what the spreadsheets could not yet prove.
I find that many problems fall into this fuzzy category, where the data can easily point you in the wrong direction. Data captures only what is quantifiable; by definition, it ignores everything that isn’t. In other words, data is important, but not everything valuable can be quantified. The situation is worse still when you’re operating in uncharted territory, where there often isn’t enough data to analyze–or any at all.
It's in these uncharted territories that the risk is highest of Validators needlessly slowing iteration cycles and prioritizing activity over impact. I once saw a 5-minute decision turn into a 2-week planning and scoping exercise. The obsession with data-driven decision-making often leads startups down the classic paralysis-by-analysis path, where over-intellectualizing problems actually makes decisions harder, not easier, to make.
But just as LLMs come with probability scores for their predictions, we have varying levels of confidence in our own intuitions. We can be overconfident, we can hallucinate connections that aren’t there, and we can simply be wrong. But that’s precisely why–even in a time when data is devouring the world–we should treat intuition as a skill to be developed rather than a weakness to be suppressed until data can validate it.
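To make that analogy concrete, here’s a minimal Python sketch–my own illustration, with made-up tokens and logits, not any particular model’s API–of how an LLM turns raw preferences into the probability scores I’m alluding to:

```python
# A minimal sketch of the "probability scores" analogy: a model's raw
# preferences (logits) become confidence-like probabilities via softmax.
# Illustrative only; real LLMs do this over tens of thousands of tokens.
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits.values())
    exps = {token: math.exp(score - m) for token, score in logits.items()}
    total = sum(exps.values())
    return {token: e / total for token, e in exps.items()}

# Hypothetical next-token logits for "the product should ___"
logits = {"ship": 3.1, "wait": 1.4, "pivot": 0.2}
for token, p in softmax(logits).items():
    print(f"{token}: {p:.2f}")  # ship: 0.81, wait: 0.15, pivot: 0.04

# The model commits to "ship" with ~80% confidence. But, like intuition,
# a high score is not an explanation, and it can still be confidently wrong.
```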
Just as we train LLMs on quality data to improve their outputs, we can develop our intuition. Developing your taste for something–training your internal neural network–is scary: it requires accepting some failed predictions. It also requires consuming a lot of content to learn what you like and don’t like, and to develop strong, reasoned opinions that people can trust–because after a few attempts, if you’re not right, you’re just wrong.