The Signal and the Noise
The author, Nate Silver, is a statistician famous for developing a system for forecasting the performance and career development of baseball players in the US, and for correctly predicting the outcomes in 49 of the 50 states in the 2008 U.S. presidential election and in all 50 states in 2012. In the 2016 election, however, he was wrong, like every other forecaster, but perhaps a little less wrong.
The following are the points I found most interesting in the book.
On prediction.
Prediction is indispensable to our lives. Every time we choose a route to work, decide whether to go on a second date, or set money aside for a rainy day, we are making a forecast about how the future will proceed—and how our plans will affect the odds for a favorable outcome. We can never make perfectly objective predictions. They will always be tainted by our subjective point of view. Prediction is important because it connects subjective and objective reality.
On information overload.
The amount of information was increasing much more rapidly than our understanding of what to do with it, or our ability to differentiate the useful information from the mistruths. We face danger whenever information growth outpaces our understanding of how to process it. The instinctual shortcut that we take when we have “too much information” is to engage with it selectively, picking out the parts we like and ignoring the remainder, making allies with those who have made the same choices and enemies of the rest.
This is, essentially, confirmation bias, which is driving political divides around the world. We should be aware that we are by no means arbiters of truth. We all suffer, to different degrees, from all kinds of biases.
On the accuracy of political expert predictions.
The experts in the survey—regardless of their occupation, experience, or subfield—had done barely any better than random chance, and they had done worse than even rudimentary statistical methods at predicting future political events. They were grossly overconfident and terrible at calculating probabilities. Roughly 15 percent of the events they said had no chance of happening did happen, and about 25 percent of the events they said were certain to happen did not. It didn’t matter whether the experts were making predictions about economics, domestic politics, or international affairs; their judgment was equally bad across the board. On the losing side were those experts whose predictions were cited most frequently in the media. The more interviews an expert had done with the press, the worse his predictions tended to be.
Evolutionary instincts sometimes lead us to see patterns when there are none; we tend to find patterns in random noise. This part of the book agrees with the evidence-based-practice principle that we should be very wary of expert opinion.
On your daily predictions.
You will need to learn how to express—and quantify—the uncertainty in your predictions. You will need to update your forecast as facts and circumstances change. You will need to recognise that there is wisdom in seeing the world from a different viewpoint. The more you are willing to do these things, the more capable you will be of evaluating a wide variety of information without abusing it.
This, to me, sounds very much like Bayesian statistics. Scientific findings are always provisional and always subject to further refinement. Perhaps, to get closer to the truth, we need to intentionally seek out information that doesn’t agree with our world views. We need to stop relying on social media to serve us news and start reading and watching news from newspapers and TV channels that lean in the opposite political direction from our own. This may be how we drag ourselves out of the confirmation-bias trap.
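As a rough sketch of what this kind of updating looks like, here is a toy Bayesian update in Python. The prior and the two likelihoods are made-up numbers purely for illustration; the point is only that the belief shifts step by step as new evidence arrives.

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after one new piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# Start from an uncertain 50/50 belief and revise it as evidence arrives.
# The likelihoods (0.8 vs 0.3) are invented values for the sake of the example.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, p_evidence_if_true=0.8, p_evidence_if_false=0.3)
    print(round(belief, 3))  # 0.727, 0.877, 0.95
```

Each pass through the loop plays the role of a new fact or a changed circumstance: the forecast is never final, only progressively revised.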
On difficulties of acting upon probabilistic information.
We look at probabilistic information and we’ve got to translate it into a decision. A go or no-go. A yes-or-no decision. We have to take probabilistic information and turn it into something deterministic. Like weather forecasts derived from statistical records alone (it rains 35 percent of the time in London in March), probabilities don’t always translate into actionable intelligence (should I carry an umbrella?).
This is very relevant to us, especially in the field of diagnostic tests. When we look at a PA radiograph and see no radiolucency, we have to consider the negative predictive value of the PA radiograph, which ranges from about 53% (Brynolf’67) to 74% (Green’97). In other words, when there is no radiolucency, there is only a 53-74% chance that there really is no disease, far lower than the 100% we usually assume. The interesting take-home message is that most of the information we gather from research papers is in fact probabilistic, so the impact of this concept is more far-reaching than we realise. As users of scientific findings, we need to translate probabilistic information into actionable intelligence, which is far from straightforward. Patients find this concept very difficult to grasp: why is a diagnosis sometimes not black and white? It is because the information on which our diagnosis is based is probabilistic, so we cannot always give a clear-cut diagnosis.
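To make the arithmetic concrete, here is a minimal sketch of how a negative predictive value falls out of Bayes’ theorem. The sensitivity, specificity, and prevalence below are invented for illustration only; they are not the figures from Brynolf’67 or Green’97.

```python
def negative_predictive_value(sensitivity, specificity, prevalence):
    """P(no disease | negative test), computed with Bayes' theorem."""
    true_negatives = specificity * (1 - prevalence)
    false_negatives = (1 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

# Invented inputs for illustration: a test that misses a fair amount of disease,
# used in a group where disease is common, gives an NPV well below the 100%
# we instinctively assume when a radiograph "looks clean".
print(round(negative_predictive_value(sensitivity=0.55, specificity=0.95, prevalence=0.40), 2))  # 0.76
```

The NPV depends as much on how common the disease is in the group being tested as on the test itself, which is exactly why the same radiographic finding can mean different things in different patients.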
On models and our attempts to understand nature.
Box famously remarked that all models are wrong, but some are useful. What he meant is that all models are simplifications of the universe, as they must necessarily be. “The best model of a cat is a cat,” said another mathematician. Everything else is leaving out some sort of detail. How pertinent that detail might be will depend on exactly what problem we’re trying to solve and on how precise an answer we require. The key is in remembering that a model is a tool to help us understand the complexities of the universe, and never a substitute for the universe itself. We learn about it through approximation, getting closer and closer to the truth as we gather more evidence.
On how the value of screening depends on the risk level of different groups of people.
When our priors are strong, they can be surprisingly resilient in the face of new evidence. One classic example of this is the presence of breast cancer among women in their forties. The chance that a woman will develop breast cancer in her forties is fortunately quite low—about 1.4 percent. But what is the probability if she has a positive mammogram? Studies show that if a woman does not have cancer, a mammogram will incorrectly claim that she does only about 10 percent of the time. If she does have cancer, on the other hand, they will detect it about 75 percent of the time. When you see those statistics, a positive mammogram seems like very bad news indeed. But if you apply Bayes’s theorem to these numbers, you’ll come to a different conclusion: the chance that a woman in her forties has breast cancer given that she’s had a positive mammogram is still only about 10 percent. These false positives dominate the equation because very few young women have breast cancer to begin with. For this reason, many doctors recommend that women do not begin getting regular mammograms until they are in their fifties and the prior probability of having breast cancer is higher.
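A quick sketch of the Bayes’ theorem arithmetic behind that conclusion, using the numbers quoted above (Python used here purely for illustration):

```python
# Bayes' theorem with the numbers quoted in the paragraph above.
prior = 0.014          # P(cancer) for a woman in her forties
sensitivity = 0.75     # P(positive mammogram | cancer)
false_positive = 0.10  # P(positive mammogram | no cancer)

posterior = (sensitivity * prior) / (sensitivity * prior + false_positive * (1 - prior))
print(round(posterior, 3))  # 0.096 — roughly a 10% chance of cancer despite the positive test
```

The tiny prior (1.4%) is what keeps the posterior low: the 10% of healthy women who test positive simply outnumber the women who actually have the disease.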
This is counter-intuitive and goes against many popular media personalities who advocate early testing of all kinds regardless of signs or symptoms. The thinking goes that the quicker we can detect abnormalities, the better. The paragraph above highlights that this is not necessarily true, and that early detection may do harm by subjecting patients to unnecessary treatment. That’s the topic of the next book, Overdiagnosed.