How to Be Wrong

“What matters for most people is not how much they know, but how realistically they define what they don’t know” — Warren Buffett.

What impact are we having on the world?

That is, what is the cumulative effect of the work we do? What are the outputs or outcomes from the actions we take? When we bump into something how does it move?

It seems like a simple question, but it’s actually complicated and difficult (sometimes impossible) to answer.

There are actually only three ways to be wrong about your impact: neglect, error and malice.

Neglect

The first (and I think most common) way to be wrong about the impact we’re having is to simply avoid asking the question.

We do some work and observe some changes happening, but how do we know that the things we do are the cause of the changes that we see? If we don’t take time to try and answer this then we won’t be able to say with any conviction that there is a connection.

Behaving this way is the grown-up equivalent of the toddler hiding behind their hands in a game of hide-and-seek.

The world is never static, so it can take a lot of effort to isolate the impact of a specific action.

Let’s say we want to know if a new activity works (it could be anything from a new cancer drug to a new marketing campaign for your startup)…

The gold standard in terms of scientific rigour is a “double blind” study: that is, randomly split the population we’re trying to change into two groups: the test group, who receive the intervention, and the “control” group, who don’t. Importantly, neither the test subjects nor those running the test know which group any individual is in until after the test is completed and the results are collected.

If we can show that those who got the intervention have a different set of outcomes from those who didn’t, then we’ve demonstrated a connection between the action we took and the impact it had. And if not, then we’ve still potentially learned something useful.

It’s not always possible to be so clean in our testing. But even in those cases there are still lots of ways that we can observe results and collect data to try to establish this connection, if we’re so inclined. We can do a “single blind” test - where the subjects don’t know which group they are in but we do. Or even just a survey - although it’s amazing how different what people actually do can be from what they say they would do.
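The randomised split described above can be sketched in a few lines of Python. This is a toy simulation, not a real study: the group size, the baseline success rate (12%) and the intervention’s success rate (18%) are all invented purely for illustration.

```python
import math
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Randomly assign 1,000 hypothetical subjects to a test group
# (receives the intervention) and a control group (doesn't).
subjects = list(range(1000))
random.shuffle(subjects)
test_group = set(subjects[:500])
control_group = set(subjects[500:])

# Simulated outcomes: made-up 12% baseline rate, 18% with the intervention.
def outcome(subject):
    rate = 0.18 if subject in test_group else 0.12
    return random.random() < rate

successes_test = sum(outcome(s) for s in test_group)
successes_control = sum(outcome(s) for s in control_group)
p1 = successes_test / len(test_group)
p2 = successes_control / len(control_group)

# Two-proportion z-test: is the observed difference bigger than
# chance alone would plausibly produce?
p_pool = (successes_test + successes_control) / len(subjects)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / 500 + 1 / 500))
z = (p1 - p2) / se
print(f"test rate={p1:.3f}, control rate={p2:.3f}, z={z:.2f}")
```

A |z| much above 2 suggests the difference is unlikely to be chance alone; anything less and we haven’t demonstrated a connection - which, as noted above, is still useful to know.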

One common, but often overlooked, form of neglect is a kind of groupthink: when enough people are involved in an activity, everybody assumes that somebody else is doing the work to test the connection, but actually nobody is.

So, we can easily be wrong just by neglecting to try and find out what our impact actually is.

Error

The second way to be wrong about the impact we’re having is to make a mistake.

Testing for impact to a high standard is often difficult, expensive and time consuming. So there are almost infinite opportunities to introduce error, as we attempt to make it easier, cheaper or faster.

Any measurement is inherently inaccurate1.

In scientific testing, researchers typically make the distinction between accuracy (the difference between the measured value and the actual value) and precision (how similar multiple measurements of the same thing are to each other).

(Sometimes also called “validity” and “reliability”)

If we imagine throwing a bunch of darts at a dart board trying to hit the bullseye the accuracy of our throwing is measured by how close to the bullseye our darts are, while the precision is measured by how close to each other our darts are. If, for example, we throw five darts and they all hit double nineteen when we were aiming for the bullseye, then we could say our throwing is precise but inaccurate.

[Figure: Accuracy vs Precision. Precise, but inaccurate (left) vs. accurate, but imprecise (right)]
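The dart-board analogy translates directly into two numbers. A minimal sketch, with hypothetical throw coordinates (a tight cluster around the wrong spot, like the double-nineteen example above):

```python
import math

# Hypothetical throws as (x, y) coordinates, with the bullseye at the
# origin (0, 0). These five land in a tight cluster around (5, 5).
throws = [(4.9, 5.1), (5.0, 4.8), (5.2, 5.0), (4.8, 5.2), (5.1, 4.9)]

mean_x = sum(x for x, _ in throws) / len(throws)
mean_y = sum(y for _, y in throws) / len(throws)

# Accuracy: how far the average landing point is from the target.
accuracy_error = math.hypot(mean_x, mean_y)

# Precision: how tightly the throws cluster around their own centre.
spread = sum(math.hypot(x - mean_x, y - mean_y) for x, y in throws) / len(throws)

print(f"distance from bullseye: {accuracy_error:.2f}")
print(f"average spread: {spread:.2f}")
```

These throws come out precise (small spread) but inaccurate (the centre of the cluster is far from the bullseye).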

There are lots of different types of errors that we can make when we’re trying to test for impact:

  • Human error - we just messed up the calculation.
  • Systematic error - there is a problem with our test that means every result is incorrect in the same way (i.e. our testing is not accurate). This is sometimes described as bias.
  • Random error - there is variability that means our tests are not repeatable (i.e. our testing is not precise). This can be an indicator that there is only a loose connection between the actions we’re taking and the impact they have, and that other things in the environment are actually a stronger influence.
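The difference between systematic and random error is easy to see in a quick simulation. The bias offset and noise levels below are invented purely for illustration:

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is repeatable
TRUE_VALUE = 100.0

# Two flawed measurement processes, each taking 50 readings:
# Systematic error: every reading is shifted by the same +5 bias.
biased = [TRUE_VALUE + 5.0 + random.gauss(0, 0.1) for _ in range(50)]
# Random error: readings scatter widely around the true value.
noisy = [TRUE_VALUE + random.gauss(0, 5.0) for _ in range(50)]

print(f"biased: mean={statistics.mean(biased):.1f}, stdev={statistics.stdev(biased):.2f}")
print(f"noisy:  mean={statistics.mean(noisy):.1f}, stdev={statistics.stdev(noisy):.2f}")
```

The biased series is precise but inaccurate (its mean sits near 105, not 100); the noisy series is roughly accurate on average but imprecise. Taking more measurements helps with the second problem, but not the first.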

So, even when we are testing for impact, it’s still possible to be wrong when errors are introduced.

Malice

Last, but not least, the third way to be wrong about the impact we’re having is just to be dishonest.

Often the underlying (and unspoken) motive for doing this comes down to personal interest - perhaps being honest would have a negative impact on our career or reputation, perhaps it would be politically expensive in our organisation or community, or perhaps too much has been invested already on the assumption that an action will have an impact for it to be comfortable for us to admit otherwise.

Whatever the reason, the con just requires us to know that there is no link between the things we do and the results we see. At that point we can either say nothing, and hope that nobody else notices, or (worse!) we can straight-out lie and pretend that the opposite is true.

How to be right!

The good news is that those are the only three ways we can be wrong, and there are relatively easy things we can do to avoid all of them.

Firstly, the most important thing is to stay skeptical and curious until we have evidence. Don’t neglect the question. As Elon Musk said in an interview2:

“You should take the approach that you’re wrong…Your goal is to be less wrong.”

In other words, always assume that what we’re doing isn’t working and then challenge ourselves to prove otherwise.

Secondly, try to switch from thinking in absolutes to thinking about how confident we are in our assumptions. Rather than saying “I know…” say “I’m X% sure that…” and base the X on the actual measurements we’ve done.

This forces us to think about what corners we might be cutting in our testing and what impact that might be having on the results. Listen to what others are saying, as that helps us to identify any bias we may have. And, it helps to create feedback loops in advance - especially when the results don’t go the way we’d hoped. Always debrief and try to understand why so that our results improve over time.
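One simple way to put a number on “I’m X% sure” is a confidence interval around a measured rate. A minimal sketch using the normal approximation, with made-up counts:

```python
import math

# Hypothetical measurement: 120 successes out of 400 trials.
successes, trials = 120, 400
p = successes / trials

# Normal-approximation 95% confidence interval for the underlying rate.
se = math.sqrt(p * (1 - p) / trials)
low, high = p - 1.96 * se, p + 1.96 * se
print(f"measured rate: {p:.2f}, 95% CI: [{low:.2f}, {high:.2f}]")
```

So instead of “I know the rate is 30%”, we can say “I’m 95% sure it’s between roughly 25% and 35%” - and the more data we collect, the tighter that range becomes.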

Lastly, be honest about the results. This is either very easy or very hard, depending on our character. It helps to set up external checks in advance, so that others who don’t have such a vested interest in the specific outcome can be the final judge. And, work hard to not take it personally if and when the testing shows that the actions we took didn’t have the result we were hoping for - the more important question should be what we learned and how we can apply that to future actions to make them better.

Remember, all we need to do to be right is to avoid being wrong.

Easy!


  1. Experimental Errors and Uncertainty, G A Carlson
  2. The full quote is even better:
“Constantly seek out criticism. A well thought-out critique of whatever you’re doing is as valuable as gold. You should seek that from everybody you can, but particularly from your friends. Usually your friends know what is wrong, but they don’t want to tell you because they don’t want to hurt your feelings. And listen very carefully to what they say. You should take the approach that you’re wrong. Your goal is to be less wrong.”