Hello everyone,
I wondered if I could ask your opinion on a situation where we are building a model whose goal is to obtain a valid measure of "effect", e.g. a valid estimate of the exposure-disease relationship via an odds ratio (rather than a good predictive model), but where we have a large group of potential confounders to consider. I describe a few techniques that I have seen (listed below) and I would appreciate your views.
I will take a hypothetical example where we have a binary outcome (say presence or absence of breast cancer) and a primary independent (exposure) variable (say "whether or not a woman has ever given birth" ["ever_birth"]).
The definition of a confounder is a covariate that is associated with both the outcome of interest and the primary independent variable (risk factor), such that adjusting for it changes the primary effect estimate (the odds ratio in this case). I know that, as a rule of thumb, if the odds ratio for "ever_birth" changes by 10% upon addition of a potential confounder, then that variable is considered a confounder and should be retained in the model (https://online.stat.psu.edu/stat507/node/34/ ; Modern Epidemiology (3rd edition), Greenland and Rothman 2008, Ch 15). Also, if we think that a variable is a confounder, we do not statistically test whether it is associated with both the primary independent variable and the outcome.
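To put numbers on that rule of thumb, here is a minimal sketch in Python; the function name and the odds ratios are purely my own illustration, not from any of the texts cited:

```python
def change_in_estimate(or_without, or_with):
    """Relative change in the exposure OR when a covariate is added,
    measured against the estimate without the covariate."""
    return abs(or_with - or_without) / or_without

# Made-up numbers: adding the covariate moves the OR for "ever_birth"
# from 2.0 to 1.7, a 15% change, so it would be retained as a confounder.
is_confounder = change_in_estimate(2.0, 1.7) > 0.10
```

(Some authors measure the change relative to the adjusted estimate instead; the choice of denominator is a convention.)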
Most examples that we see in texts consider only a few potential confounders. In real life there are many potential confounders that are identified at the design stage of the study by taking into account findings from previous epidemiological research and what is known about the mechanisms of the disease.
Controlling for all confounders may produce the "gold standard" odds ratio estimate for "ever_birth", but entering many variables into a model can make the confidence interval around the "ever_birth" odds ratio estimate wide (less precise).
I have read one excellent text which deals with the issue of modelling with confounders: Kleinbaum and Klein's "Logistic Regression: A Self-Learning Text" (2010) (Chapters 6 & 7). For the situation where we are considering a model containing only potential confounders and no interaction terms (of the type exposure x potential confounder), they suggest:
1) Creating the "gold standard" odds ratio estimate of "ever_birth" by entering *all* potential confounders into the model. Observe the point estimate of the odds ratio for "ever_birth" and the 95% confidence interval around this odds ratio estimate.
2) Look at different subsets of the potential confounders. For each subset, observe the point estimate of the odds ratio for "ever_birth".
3) a) Select the subsets whose odds ratio estimate for "ever_birth" is approximately the same as the gold standard odds ratio estimate (each of these subsets "controls for confounding"), and hence
b) Control for the subset of potential confounders that produces the narrowest 95% confidence interval around the odds ratio estimate for "ever_birth" (provided it is narrower than the "gold standard" 95% confidence interval).
4) For all subsets whose odds ratio estimate for "ever_birth" is approximately the same as that of the gold standard odds ratio estimate, if none produces a narrower 95% confidence interval around the odds ratio estimate for "ever_birth" (compared to that of the "gold standard"), it is scientifically better to control for all potential confounders (i.e. use the gold standard odds ratio estimate).
Using steps 2 and 3 above, we will have identified a specific subset of potential confounders which, when controlled for, gains a meaningful amount of precision (i.e. narrows the 95% confidence interval around the odds ratio estimate for "ever_birth" compared to the "gold standard" interval) without sacrificing validity (i.e. without changing the point estimate of the odds ratio for "ever_birth").
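To make the all-subsets idea concrete, here is a self-contained sketch in Python. The data are simulated, the helper functions are my own (a hand-rolled Newton-Raphson logistic fit so nothing beyond numpy is needed), and none of it is taken from Kleinbaum and Klein:

```python
import itertools
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Logistic regression via Newton-Raphson; returns coefficient
    estimates and standard errors (intercept added internally)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        XtWX = X.T @ (X * (p * (1.0 - p))[:, None])
        beta = beta + np.linalg.solve(XtWX, X.T @ (y - p))
    return beta, np.sqrt(np.diag(np.linalg.inv(XtWX)))

def exposure_or_ci(X, y):
    """OR and 95% CI for column 0 of X (the exposure)."""
    beta, se = fit_logistic(X, y)
    return tuple(np.exp([beta[1], beta[1] - 1.96 * se[1], beta[1] + 1.96 * se[1]]))

# --- purely illustrative simulated data ---------------------------------
rng = np.random.default_rng(42)
n = 2000
c1, c2 = rng.normal(size=n), rng.normal(size=n)
exposure = rng.binomial(1, 1 / (1 + np.exp(-0.8 * c1)))            # c1 confounds
y = rng.binomial(1, 1 / (1 + np.exp(-(-1 + 0.5 * exposure + 0.7 * c1 + 0.1 * c2))))

confounders = {"c1": c1, "c2": c2}
gold_or, gold_lo, gold_hi = exposure_or_ci(np.column_stack([exposure, c1, c2]), y)

# Steps 2-3: fit every subset, keep those within 10% of the gold-standard
# OR, and among them choose the narrowest 95% CI (falling back to "all
# confounders", as in step 4, if nothing beats the gold standard's width).
best, best_width = "all", gold_hi - gold_lo
for k in range(len(confounders) + 1):
    for subset in itertools.combinations(sorted(confounders), k):
        cols = np.column_stack([exposure] + [confounders[v] for v in subset])
        orr, lo, hi = exposure_or_ci(cols, y)
        if abs(orr - gold_or) / gold_or <= 0.10 and hi - lo < best_width:
            best, best_width = subset, hi - lo
print("gold standard OR for exposure:", round(gold_or, 2))
print("chosen confounder set:", best)
```

With many confounders the number of subsets grows as 2^p, so in practice one would only examine subsets of the candidates still in play after the a priori (design-stage) screen.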
Additionally, as I understand it, Greenland and Rothman (Modern Epidemiology (3rd edition), 2008, Ch 15), who refer to Kleinbaum et al., "Epidemiologic Research" (1984), imply that the following is also appropriate:
1) Adjust for all potential confounders and observe the odds ratio estimate for "ever_birth" (the "gold standard").
2) Delete potential confounders one by one and observe the resulting odds ratio estimate for "ever_birth" each time.
3) The potential confounder having the smallest change in estimated odds ratio for "ever_birth" (say < 10% change) is removed from the potential confounder set.
4) Delete potential confounders one by one from the reduced confounder set, observing the resulting estimate of the odds ratio for "ever_birth" each time.
5) The potential confounder having the smallest change in estimated odds ratio for "ever_birth" (where "change" is evaluated by comparing to the estimated odds ratio for "ever_birth" obtained after adjusting for the reduced confounder set) is removed from the reduced confounder set.
6) Steps 4 & 5 are repeated until the total change in the estimated odds ratio for "ever_birth", accrued from the start of the process (when all confounders were included), exceeds the chosen limit of importance (say 10%).
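A sketch of that backward deletion, with the model refits replaced by a lookup of made-up odds ratios (the `fitted_or` helper and every number in the table are purely illustrative, not from Greenland and Rothman):

```python
ILLUSTRATIVE_ORS = {  # confounder set -> made-up OR for "ever_birth"
    frozenset({"c1", "c2", "c3"}): 2.00,  # the "gold standard"
    frozenset({"c1", "c2"}): 2.02,
    frozenset({"c1", "c3"}): 2.40,
    frozenset({"c2", "c3"}): 1.50,
    frozenset({"c1"}): 2.05,
    frozenset({"c2"}): 1.40,
    frozenset({"c3"}): 2.80,
    frozenset(): 3.00,
}

def fitted_or(conf_set):
    """Stands in for refitting the logistic model adjusted for conf_set."""
    return ILLUSTRATIVE_ORS[frozenset(conf_set)]

def backward_delete(all_confounders, limit=0.10):
    gold = fitted_or(all_confounders)
    current = set(all_confounders)
    while current:
        ref = fitted_or(current)
        # Steps 2/4: try deleting each confounder in turn and find the
        # one whose removal changes the current OR least (steps 3/5).
        cand = min(current, key=lambda c: abs(fitted_or(current - {c}) - ref) / ref)
        # Step 6: stop before the total drift from the gold standard
        # exceeds the chosen limit of importance.
        if abs(fitted_or(current - {cand}) - gold) / gold > limit:
            break
        current = current - {cand}
    return current

print(backward_delete({"c1", "c2", "c3"}))  # the retained confounder set
```

With these invented numbers, c3 and then c2 are dropped (each deletion barely moves the OR), but dropping c1 would move the estimate 50% away from the gold standard, so c1 is retained.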
The only other model building technique involving confounders that I have seen uses a "hierarchical" approach i.e. all known confounders are entered as a "first block" and either one or a group of predictor variables are entered as a "second block". The additional contribution of the predictors in the second block is assessed via statistical testing (e.g. F change test for multiple regression https://www.youtube.com/watch?v=xgA8qY63dX0 , likelihood ratio test for logistic regression). However, again, I have only seen examples where there are very few confounders entered into the "first block".
I assume that, if there were no previous research regarding the effects of the block 2 predictor variables on the outcome, we could employ variable selection on the block 2 variables rather than "forced entry"?
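For the block 2 assessment in logistic regression, the likelihood ratio test is just -2 times the difference in log-likelihoods referred to a chi-square distribution. A small sketch (the log-likelihood values are invented; the closed-form chi-square tails for df 1 and 2 keep it to the standard library):

```python
import math

def lr_test(loglik_block1, loglik_block1_plus_2, df):
    """Likelihood ratio test for adding the block 2 variables:
    -2*(ll_reduced - ll_full) ~ chi-square(df) under the null."""
    stat = -2.0 * (loglik_block1 - loglik_block1_plus_2)
    # Chi-square survival function in closed form for df = 1 or 2;
    # for general df one would use scipy.stats.chi2.sf instead.
    if df == 1:
        p = math.erfc(math.sqrt(stat / 2.0))
    elif df == 2:
        p = math.exp(-stat / 2.0)
    else:
        raise NotImplementedError("use scipy.stats.chi2.sf for df > 2")
    return stat, p

# Invented log-likelihoods: block 2 adds two predictors, so df = 2.
stat, p = lr_test(loglik_block1=-612.4, loglik_block1_plus_2=-605.1, df=2)
```

The F-change test plays the same role for multiple (linear) regression; the LR test is its analogue for models fitted by maximum likelihood.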
Modelling with confounders is not easy, as it is a marriage between clinical and statistical approaches. In recent times, I have seen (unpublished) examples where clinicians have stipulated that groups of potential confounders (i.e. where each group comprises a clinically related set of variables) should be considered *in order* of clinical importance/interest. So, for example:
1) The unadjusted odds ratio estimate for "ever_birth" is observed.
2) The odds ratio estimate for "ever_birth" is observed when base variables (which a clinician states should always be controlled for e.g. "age", "study site" etc) are included in the model. The odds ratio estimate for "ever_birth" in 2) is compared to 1).
3) The odds ratio estimate for "ever_birth" is observed when base variables and group A variables (e.g. group A variables appertaining to reproduction) are included in the model. The odds ratio estimate for "ever_birth" in 3) is compared to 2) i.e. here you would be evaluating group A confounding with background adjustment for the base variables.
4) The odds ratio estimate for "ever_birth" is observed when base variables, group A variables and group B variables (e.g. where group B variables appertain to socio economic class) are included in the model. The odds ratio estimate for "ever_birth" in 4) is compared to 3) i.e. here you would be evaluating group B confounding with background adjustment for the base variables and group A.
5) Etc. for subsequent (ordered) groups of stipulated variables.
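The ordered-group procedure amounts to a sequence of pairwise comparisons, each stage against the previous one. A sketch with invented stage ORs (the group labels mirror the example above; none of the numbers come from real data):

```python
# Invented ORs for "ever_birth" at each stage of the ordered procedure.
stages = [
    ("unadjusted", 1.80),
    ("+ base (age, study site)", 1.55),
    ("+ group A (reproduction)", 1.42),
    ("+ group B (socio-economic)", 1.41),
]

# Each stage's OR is compared with the previous stage's, so a >10% shift
# flags confounding by the group just added, given the earlier adjustments.
for (_, prev_or), (name, orr) in zip(stages, stages[1:]):
    change = abs(orr - prev_or) / prev_or
    verdict = "group confounds" if change > 0.10 else "little further change"
    print(f"{name}: OR = {orr:.2f} ({change:.0%} vs previous stage; {verdict})")
```

Note that this only ever evaluates each group against the background of the groups entered before it, so the stipulated clinical ordering matters: a different ordering could attribute the same confounding to a different group.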
What are your views on the above approach?
This is not a simple area of statistics and I appreciate any views that you may have - especially as regards entering confounders which have been "grouped" into related sets.
Many thanks in advance,
Kim
Dr Kim Pearce PhD, CStat, Fellow HEA
Senior Statistician
Faculty of Medical Sciences Graduate School Room 3.14 3rd Floor Ridley Building 1 Newcastle University Queen Victoria Road Newcastle Upon Tyne
NE1 7RU
Tel: (0044) (0)191 208 8142