As usual, smart readers knocked the cover off the ball almost immediately. Some day I’ll stump you.
First question taken from Randomness by Deborah J Bennett:
“If a test to detect a disease whose prevalence is one in a thousand has a false positive rate of 5%, what is the chance that a person found to have a positive result actually has the disease, assuming you know nothing about the person’s symptoms or signs?”
First to answer correctly was Matt Crawford:
“Assume that the test is performed on everyone regardless of symptoms of the disease. Then out of every thousand people who receive the test, one has the disease and 999 do not. Further, assume that the test has no false negatives: anyone who actually has the disease gets a positive result. Then 1 out of every thousand tests are true positives. The remaining 999 should be negative results, but the 5% false positive rate means that 49.95 (so round to 50) of these people will receive false positive results. Then out of our 1000 tests, 51 return positive results. But only one of these is a true positive, so the chance that a positive test identified someone who actually has the disease is 1/51 or about 2%.”
You might quibble that a 5% false positive rate means 50 false positives out of a population of 1,000 (plus the one true positive), but this is close enough. It’s fair to assume there are no false negatives, since the question doesn’t say otherwise (and you couldn’t answer it without that assumption), but Aswath is right to point out that this should have been specified.
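Matt’s arithmetic is just Bayes’ theorem in disguise. A minimal sketch of the calculation, assuming (as he did) that the test has no false negatives:

```python
# Chance of actually having the disease given a positive test result.
# Assumes the test catches every real case (no false negatives).
prevalence = 1 / 1000        # one person in a thousand has the disease
false_positive_rate = 0.05   # 5% of healthy people test positive anyway

true_positives = prevalence * 1.0                       # every real case flagged
false_positives = (1 - prevalence) * false_positive_rate

p_disease_given_positive = true_positives / (true_positives + false_positives)
print(f"{p_disease_given_positive:.1%}")  # about 2%, nowhere near 95%
```

Swap in exactly 50 false positives per 1,000 and you get Matt’s 1/51; either way the answer rounds to about 2%.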
Second question: “what percentage of the physicians, residents, and fourth year medical students at a prominent medical school who were asked this question got it right?”
jb guessed that 80% of those tested would give the tempting wrong answer of 95%. Actually, only 19% gave the right answer, and only 50% said 95%. jb, you would’ve nailed it if you hadn’t given more detail in your answer than was called for. Rob’s an optimist and hoped that 80% would get it right, because their care is so important and getting into medical school requires critical thinking. He’s dead right that it’s scary that so many get it wrong.
Interesting answers to the third question: “why is it critically important that doctors be able to get this one right? Give one example.” Most weren’t about doctors, though. This type of bad thinking covers a lot of ground.
Matt Crawford cites the Red Cross using an HIV test on donated blood which is known to have a high incidence of false positives and speculates that many donors are probably panicked by the result. “However, the Red Cross continues to use the same test, probably because it combines low cost with very low false negative rate. In this case it may be justified to trade a high false positive rate for a low false negative rate, because a false positive merely requires a second test but a false negative would spread HIV through transfusions.”
Curtis Carmack says: “the medical profession as a whole has given insufficient thought to how to address the false positive issue with patients, leading to much more angst than is necessary when patients receive a positive test result -- invariably late on Friday -- and have to wait at least a couple of days to ask questions about it. ;-)”
Dennis Shanley posts: “This directly affects the overall cost of health care in a huge way. Assume that it costs $10,000 to cure a patient who presents positive. Not an unlikely assumption. Assume further that the 50 false positive patients do not exhibit negative effects as a result of their treatment that require further medical treatment and they do not litigate as a result of the unnecessary treatment. This is a highly improbable assumption made for the sake of simplicity.
“The true cost to cure 1 patient is $10,000.
“The cost to cure that one patient and treat the 50 false positives is $510,000.”
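Dennis’s arithmetic, spelled out (his $10,000 per treatment is a hypothetical round figure, and the 50 false positives are the rounded count from the first question):

```python
# Cost of treating everyone who tests positive, per 1,000 people screened.
cost_per_treatment = 10_000   # hypothetical cost to treat one positive result
true_positives = 1            # the one person who actually has the disease
false_positives = 50          # rounded from 49.95 in a population of 1,000

total_cost = cost_per_treatment * (true_positives + false_positives)
print(total_cost)  # 510000 -- $510,000 spent to cure one actual patient
```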
Aswath writes: “Suppose now we are told that the false positive predominantly affects a biological group - gender or a racial group. Will that decision stand reason? Let us assume that the situation is internment during WWII in US. A nation has to live with the effects of a callous operation decision to accept a large false positive.”
Otmar: “There is another interesting application for this kind of statistics: The beloved war on terror. The chance of a random person to be a terrorist is hopefully less than 1/1000. Imagine you manage to build some automated system which somehow claims to spot suspicious behavior, known faces, or miscreants by some other clever scheme.
“These systems all have a non-negligible error-rate. If you're really lucky, you might push that one down to less than 1%.
“Now do the math again, assuming a 1/100000 terrorist-rate and 1% false positives. No wonder I read that one trial for such a system got terminated.”
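Doing Otmar’s math: with a 1-in-100,000 base rate and a 1% false positive rate (charitably assuming the system never misses a real terrorist), a flagged person is almost certainly innocent.

```python
# Otmar's screening scenario: how likely is a flagged person a real terrorist?
base_rate = 1 / 100_000      # chance a random person is a terrorist
false_positive_rate = 0.01   # 1% of innocent people get flagged anyway

flagged_and_guilty = base_rate * 1.0                    # assume no misses
flagged_and_innocent = (1 - base_rate) * false_positive_rate

p_guilty_given_flag = flagged_and_guilty / (flagged_and_guilty + flagged_and_innocent)
print(f"{p_guilty_given_flag:.3%}")  # about 0.1% -- ~1,000 false alarms per real hit
```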
The point is that you must weigh the costs of being right and the costs of being wrong, both for the positive and the negative case. Back to medicine: suppose your doctor is one of the benighted 81%. He or she tests you using the test in the first question and you come up positive. Let’s suppose that the disease is always fatal if not treated, and that there’s a treatment available but it has a 25% chance of killing you itself. If the doctor believes that there’s a 95% chance you have the disease, the dangerous treatment is clearly justified; but, since the true likelihood is less than 2%, the treatment is more dangerous than your untreated prognosis. Always a good idea to get a second opinion AND check your doctor’s math.
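A minimal sketch of that decision, assuming the treatment always cures the disease in the 75% of patients it doesn’t kill:

```python
# Compare death risk with and without the dangerous treatment,
# given a positive result on the test from the first question.
p_disease = 1 / 51       # true chance of disease given a positive test
p_believed = 0.95        # what the benighted doctor believes

death_if_treated = 0.25          # the treatment itself kills 25% of patients
death_if_untreated = p_disease   # untreated, you die only if you have it

print(death_if_untreated < death_if_treated)  # True: skipping treatment is safer
print(p_believed > death_if_treated)          # True: the doctor's bad math says treat
```

On the doctor’s mistaken 95% belief, a 25% treatment risk looks like an easy call; on the true ~2% figure, the treatment is more than ten times as likely to kill you as the disease.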