[personal profile] gusl
MR's short post "Risk Analysis using Roulette Wheels" reads:
A PSA test can reveal the presence of prostate cancer. But not all such cancers are fatal and treatment involves the risk of impotence. Do you really want the test? It's difficult for patients to evaluate these kinds of risks. Mahalanobis points us to an article advocating visual tools such as roulette wheels to help patients understand relative risks and chance. Even better than the diagrams is this impressive video; the video may be of independent interest to the older men in the audience.


The basic problem is that the screening doesn't distinguish non-fatal cancer (presumably harmless) from fatal cancer. They say that, in case of a positive test result, one "must" pursue treatment, since the probability of death is pretty high without it. They don't mention that one could just as well refuse treatment, since with treatment there is also a high probability of suffering bad side-effects unnecessarily (in the cases of non-fatal cancer).

But if the screening is free, shouldn't we *always* have it done? Isn't this an axiom of rationality?
A rationalist would argue that one should choose not to do the screening ONLY IF one's decision would be the same in either case, i.e. no treatment regardless of the result of the screening.

My interpretation:


They say that one could choose not to be screened because people who get screened have a higher probability of bad side-effects. This is true, but only because people who get screened are more likely to find something, and therefore more likely to get treatment. A rationalist (like me) would argue that if you have the balls to accept a higher risk of death in exchange for a smaller chance of side-effects when those probabilities are small, then you should have the balls to make the same choice when they are high (e.g. once a tumor strikes). But in practice, one might not trust oneself to.
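The trade-off above can be sketched as a toy expected-utility comparison of "treat" vs. "refuse" after a positive test. Every probability and utility below is a made-up placeholder for illustration, not a figure from the linked article:

```python
# Toy expected-utility comparison for "treat vs. refuse" after a positive
# PSA test. Every number below is an invented placeholder, not clinical data.

P_FATAL = 0.3            # P(fatal-type cancer | positive test) -- assumed
P_DEATH_UNTREATED = 0.8  # P(death | fatal-type cancer, no treatment)
P_DEATH_TREATED = 0.2    # P(death | fatal-type cancer, treatment)
P_SIDE_EFFECTS = 0.5     # P(impotence/incontinence | treatment)
U_DEATH = -20.0          # utility of death (~20 life-years lost)
U_SIDE = -2.0            # utility of the side-effects

def eu_treat():
    """Expected utility of pursuing treatment after a positive test."""
    p_death = P_FATAL * P_DEATH_TREATED
    # Side-effects hit treated survivors regardless of cancer type.
    return p_death * U_DEATH + (1 - p_death) * P_SIDE_EFFECTS * U_SIDE

def eu_refuse():
    """Expected utility of refusing treatment: only fatal-type cancer kills."""
    return P_FATAL * P_DEATH_UNTREATED * U_DEATH

print(f"treat:  {eu_treat():.2f}")
print(f"refuse: {eu_refuse():.2f}")
```

Under these particular placeholders, treating comes out ahead; with a lower P_FATAL or milder side-effect utilities, refusing can win, which is exactly why the numbers matter.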



Graphviz code:

digraph prostate_cancer
{
    /* graph [fontsize=8];
       edge [fontsize=8]; */
    node [fontsize=10];

    /* either type of cancer can trigger a positive test */
    "fatal-type \n cancer" -> "test positive";
    "non-fatal-type \n cancer" -> "test positive";

    "treatment" [shape=box];
    "fatal-type \n cancer" -> "death";
    "treatment" -> "death" [style=dashed];  /* dashed: treatment's effect on death risk */
    "treatment" -> "side-effects: \n impotence or\n incontinence";
}

(no subject)

Date: 2006-06-16 03:47 pm (UTC)
From: [identity profile] selfishgene.livejournal.com
How do they know the relative risk of treatment vs non-treatment? Treatment is assumed to be better but there is no actual evidence.

(no subject)

Date: 2006-06-16 04:04 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I'm not sure I understand the question.

The linked website suggests that there *is* evidence. I imagine that their numbers, e.g. 2% for non-African non-family-history no-screenings men, are just sample frequencies.

Another issue is the frequency of fatal vs non-fatal cancers, which is a piece of data that they don't give.

(no subject)

Date: 2006-06-16 04:53 pm (UTC)
From: [identity profile] selfishgene.livejournal.com
The only (semi-)reliable way to know if treatment works is to allocate patients randomly to groups. You could have a treatment, a placebo, and a non-treatment group. However, for diseases which already have an accepted treatment, you can't force anyone to forgo that treatment. This means your research sample is biased from the start.
Maybe the people who would get better anyway, because they are more sensible and eat healthily, are the same people who prefer to get treatment. The careless types who refuse treatment, or are slack about following the protocol, might have inherently higher or lower death rates.
My point is that there is no real way to be sure what the treatment risk ratio is. This is true of all difficult/complex diseases.
The medical profession of course favors the course of action which puts money in its pockets, i.e. treating rather than letting nature take its course. I'm not saying they are knowingly dishonest, but it is very easy to believe something when that belief is profitable for you.

(no subject)

Date: 2006-06-16 05:03 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Oh yes, of course. Correlation is not causation, but I don't know whether modern ethical protocols get in the way of performing controlled experiments... I consider it quite likely that this data is in fact experimental.

But in any case, we always need to make assumptions. In this case, we could assume that the bias is small.

It would be cool if the decision-guide asked you what course of action you would have taken without it, and then took that as a parameter to account for this exact bias.

(no subject)

Date: 2006-06-16 05:57 pm (UTC)
From: [identity profile] selfishgene.livejournal.com
1. We assume the bias is small because it is convenient to think that. It aligns with the interests of the medical profession and all decent right-thinking people. However, I always ask myself how a system could go wrong. How can you 'game' the system? Do people have an incentive to lie or hide evidence?
2. Choosing a response to a hypothetical question on a form is very simple. Anxiety about cancer and then undergoing months of expensive, risky, painful treatment is very complex. I don't think a decision-guide can really simulate what people will do in a high-stress situation extending over months.

(no subject)

Date: 2006-06-16 09:23 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
We assume the bias is small because it is convenient to think that.

It may also be true. We'd have to look at meta-analyses.


However, I always ask myself how a system could go wrong. How can you 'game' the system? Do people have an incentive to lie or hide evidence?

I have the same tendency to ask such questions. However, I think that the majority of doctors are honest, and that self-deception is not quite as rampant as you suggest. I do believe that there may be subtle "dishonesties" at some levels, though: wherever you find this bottom-line optimization mentality.

The purpose of decision-guides is not to simulate behaviour. Rather, it is a normative tool: it's meant to help you make the right choice.

(no subject)

Date: 2006-06-19 02:56 pm (UTC)
From: [identity profile] frauhedgehog.livejournal.com
Same goes for breast lumps. Most are benign. But why not remove them anyway? Because of risk during surgery, and aesthetic considerations. But what exactly are those risks, and how do they compare? Risk evaluation is rarely quantified for the patient and most often left to doctor judgment.

(no subject)

Date: 2006-06-19 03:13 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Yes! And this is bad for 2 reasons:

* doctors don't know patients' preferences (i.e. how they feel about the negative consequences of treatment).
* doctors are more risk-averse than patients: they will tend to always play on the "safe" side, because they wouldn't want to be responsible for a death (I suspect that they would rather be responsible for >100 unnecessarily mutilated breasts than for 1 death (i.e. the shortening of 1 life by about 20 years), but I'd like to see some figures... maybe Mr. Levitt can help me here).

Moreover, even if doctors were aligned with patients' preferences and overcame their risk aversion, they would still tend to be bad Bayesians in their decision-making (like most experts in non-mathematical fields). You never see a doctor grabbing a calculator to work out how to maximize utility for the patient.
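The calculation in question is the standard Bayes'-rule update from a test result to a posterior probability. A minimal sketch, with all numbers invented placeholders rather than real clinical data:

```python
# Back-of-the-envelope Bayes'-rule update of the kind discussed above.
# All numbers are invented placeholders, not clinical data.

P_CANCER = 0.02        # prior: P(fatal-type cancer) for this patient
SENSITIVITY = 0.90     # P(test positive | fatal-type cancer)
FALSE_POSITIVE = 0.15  # P(test positive | no fatal-type cancer)

# Total probability of a positive test, over both hypotheses.
p_positive = SENSITIVITY * P_CANCER + FALSE_POSITIVE * (1 - P_CANCER)

# Bayes' rule: P(fatal-type cancer | positive test).
p_cancer_given_positive = SENSITIVITY * P_CANCER / p_positive

print(f"P(fatal-type cancer | positive test) = {p_cancer_given_positive:.3f}")
```

With a low prior, even a fairly accurate test leaves the posterior around 10% here, which is why eyeballing the sensitivity alone badly overstates the risk.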
