We’ve all laughed at detailed renderings of constellations overlaying a paltry set of stars that are in fact quasi-random. Like Fornax, which is just three stars:
But which humans have no trouble rendering as an intricate piece of machinery:
It would be funny if this natural compulsion didn’t also cause us to make bad decisions, all the time.
There’s a scientifically-measurable mechanism that causes us to be incorrectly confident in our knowledge and decision-making. We construct narratives of comprehension unconsciously, even when our command of the facts is feeble. Which all leads to this:
Once you realize this, you know that when you’re tackling a decision, especially one in which facts are few, you need to spend more time gathering as many facts as possible, then objectively analyze the totality, rather than forming a plausible narrative with 10% of the facts and falsely believing your theory is sound.
But this honorable behavior tends to produce the opposite of your intention, which was clarity and confidence.
Rather, like clouding a stream by turning over rocks, your noble activity causes things to get murkier. Why?
Things don’t add up exactly. One “fact” contradicts another “fact,” or at least doesn’t fit into a clean narrative. This, too, is natural, because our “facts” are often crude proxies for ground truth, or opinions disguised as wisdom. Errors of uncertain magnitude are multiplied until the result is more noise than signal. The world is inconveniently complex and ambiguous.
In the face of increasing contradiction and murkiness, do you acknowledge that the situation is more complex than expected, perhaps so inscrutable as to defy analysis and explanation?
No, you fall back to your initial favorite theory, selecting the facts consistent with your convenient world-view, reinforcing your presumed wisdom, and dismissing the rest as less relevant, less impactful, less important, or less reliable.
I can prove it with a demonstration. Stop checking Twitter and dwell on the following phrase, re-reading it five times slowly and really thinking about what it means and how it makes you feel:
Now answer this question: Are school uniforms a good thing?
Most people say “no,” because they’ve just been primed to think about conformity as a negative. More specifically, many more people will say “no” after being primed with that phrase than a control group that wasn’t primed.
This is one of the more robust results of experimental psychology, with consistent results over a time-span of decades. It’s also why brand-recognition advertising works (even though most people claim “advertising doesn’t work on me”).
This is why additional fact-finding doesn’t lead to improved narratives. You accidentally self-prime with your hypothesis, your initial findings, your basket of biases, and the understandable preference to be “proven right” rather than admitting a difficult truth.
In fact, uniforms are often liberating. Uniforms obscure a student’s socio-economic position, which removes one of the dimensions used for segregation, bullying, and measuring self-worth. But, being primed, and in possession of a convenient narrative, the natural response is to quickly settle on the nearest available perspective.
It’s even worse, because this effect also biases your ability to reason about numbers.
In a famous experiment by Amos Tversky and Nobel laureate Daniel Kahneman, each subject was asked to spin a “Wheel of Fortune” that displayed numbers from 1 to 100. The number they landed on was obviously random. Then the subject was asked to estimate the percentage of African countries in the United Nations. This was compared to a control group who were asked the same question but never shown The Wheel. Subjects who received low numbers from The Wheel guessed lower than the control, and those who saw high numbers guessed higher!
If even our numerical sensibility can be biased by such a meaningless trick, how can we work consciously to avoid the trap?
Here are three things you can do.
Decide how to decide, ahead of time
Determine ahead of time how you are going to analyze the question, and perhaps (if the problem space is well-understood), how the analysis will determine the answer.
To determine whether a person might be a good manager, you don’t want to start with your first impression. The right way is to ask yourself “what are the attributes of a great manager?” (this list, for example), then seek to understand whether the person exhibits those attributes. Perhaps you’d decide the answer is “yes” if the person scores highly on at least 70% of those attributes and doesn’t score extremely poorly on any one attribute. (It’s best to hire based on the value of someone’s strengths, and veto only if a weakness is excessive and relevant.)
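That “70% plus no vetoes” rule is concrete enough to write down. Here is a minimal sketch in Python; the attribute names, the 1–5 scale, and the exact thresholds are illustrative assumptions, not part of the original advice:

```python
def hire_decision(scores, pass_threshold=4, pass_ratio=0.70, veto_floor=2):
    """Return True if enough attributes score highly and none score
    extremely poorly. Scores run 1 (poor) to 5 (excellent) — an
    assumed scale for illustration."""
    values = list(scores.values())
    if any(v < veto_floor for v in values):
        return False  # any extreme weakness is a veto
    high = sum(1 for v in values if v >= pass_threshold)
    return high / len(values) >= pass_ratio  # "highly on at least 70%"

# Hypothetical candidate scored against a pre-agreed attribute list.
candidate = {
    "coaching": 5, "communication": 4, "delegation": 4,
    "prioritization": 3, "hiring": 5,
}
print(hire_decision(candidate))  # 4 of 5 attributes score >= 4 -> True
```

The point isn’t the code itself — it’s that the decision procedure is written down before you meet the candidate, so a strong first impression can’t quietly rewrite the criteria.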
To run a quality discussion in a meeting, you don’t want to start in the usual way, setting context and framing the debate. This generates snap judgements instead of contemplative thoughts. Worse, initial ideas will excessively influence the remainder of the debate. Instead, send the background information ahead of time, so people can digest it and form independent opinions. It is that diversity of opinions that is valuable to a discussion. Changing your mind through debate is healthy and valuable; having it changed for you unwittingly is not.
The “pre-mortem” is a technique that not only has a body of research supporting the claim that it’s effective, but is also fun to do, especially as a group exercise.
Continue at: https://blog.asmartbear.com/low-fact-trap.html