Hello everyone,
Many thanks to those who replied to the question I posed last Friday as regards the consideration of confounding variables when modelling.
I provide edited highlights of the replies I received (below) for those who are interested.
As I had expected, views were wide ranging.
Just a few comments from me:
1) One view was that, to assess which "other factors" (from a subset of F1,...,Fm) should be included alongside the primary independent (exposure) variable, E, in the model, we could force E into the model and then select one (or more) of this set via an automated variable selection procedure (forward, backward, stepwise, or best-subsets). Automated variable selection procedures obviously make use of significance testing, but I've seen quite a few authors who argue against using statistical testing to select confounders (Miettinen, 1976; Breslow and Day, 1980; Greenland and Neutra, 1980; Greenland, 1989; Kleinbaum and Klein, 2010). Greenland and Rothman (2008) do, however, admit that "one often sees statistical tests used to select confounders (as in stepwise regression), rather than change in estimate criterion....it has been argued that these testing approaches will perform adequately if the tests have high enough power to detect any important confounder effects. One way to ensure adequate power is to raise the alpha level for rejecting the null (of no confounding) to 0.2 or even more instead of the traditional 0.05 level (Dales and Ury, 1978)". Also, a few texts state that automated variable selection is not recommended when model building, except when no previous research exists and when causality is not of interest and you merely wish to find a model that fits your data (e.g. Field, 2013; Agresti & Finlay, 1986; Menard, 1995).
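(To make the liberal-alpha idea concrete, here is a rough Python sketch of forward selection with E forced into the model; the factor names and p-values are invented placeholders, not output from any real fit:)

```python
# Sketch of forward selection with the exposure E forced in, using the
# liberal alpha of 0.2 suggested by Dales and Ury (1978). In a real
# analysis each p-value would come from refitting the model; here
# `p_for_adding` is a hypothetical stand-in for those fits.

ALPHA = 0.20  # liberal threshold, to avoid missing important confounders

def forward_select(candidates, p_for_adding, alpha=ALPHA):
    """Greedily add the factor with the smallest p-value until none pass alpha."""
    model = ["E"]                      # exposure is forced in, never removed
    remaining = list(candidates)
    while remaining:
        # p-value for adding each remaining factor to the current model
        pvals = {f: p_for_adding(model, f) for f in remaining}
        best = min(pvals, key=pvals.get)
        if pvals[best] >= alpha:
            break                      # no remaining factor passes the threshold
        model.append(best)
        remaining.remove(best)
    return model

# Invented p-values (a real analysis would recompute these at each step).
fake_p = {"F1": 0.03, "F2": 0.15, "F3": 0.40}
selected = forward_select(["F1", "F2", "F3"], lambda model, f: fake_p[f])
print(selected)  # ['E', 'F1', 'F2'] - F3 fails even the liberal 0.2 cut-off
```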
2) Causal diagrams are a natural first step before attempting to construct the model.... Thus we would be considering the possible causal relationships between the exposure, outcome, potential confounders and other relevant variables.
3) Comparing adjusted and unadjusted estimates of the exposure effect using the "10% change" criterion is, I agree, a very rough rule of thumb.....its arbitrary nature is discussed at length in Greenland & Rothman (2008).
4) I accept that entering *many* confounders in a model is problematic and, in such circumstances, forward selection of confounders (as opposed to backward elimination - of the type I spoke about in my email) is, instead, recommended.....this point is also mentioned in Greenland & Rothman (2008).
5) Yes, risk ratios are easier to understand (compared to the odds ratio I spoke about in my email last week) and I have read quite a few papers which talk about the comparison of the values of relative risk and odds ratio - in particular, the circumstances in which these values are similar, making it valid to interpret the odds ratio as a relative risk.
Here are a few which I found particularly informative:
~When can odds ratios mislead? Davies HT, Crombie IK, Tavakoli M https://www.ncbi.nlm.nih.gov/pubmed/9550961
~When to use the odds ratio or the relative risk? Schmidt, Kohlmann
~Understanding relative risk, odds ratio, and related terms: as simple as it can get. Andrade https://www.ncbi.nlm.nih.gov/pubmed/26231012
Finally, here are the full details of some of the informative texts which detail the complex area of model building with confounders:
Modern Epidemiology (3rd edition). Greenland and Rothman, 2008 (Chapter 15)
Logistic Regression: a self learning text. Kleinbaum and Klein, 2010 (Chapter 6 & 7)
____________________________________________________________________
ALLSTAT REPLIES:
Dear Kim
I think this is a big topic and I have many criticisms of some of the proposals below.
I don't immediately recognise the legitimacy of some of what is proposed.
I just make a few logical remarks.
1.Expert Medical Opinion for example.
The definition of a medical expert is a medical doctor 5 miles from home. So we could be forgiven for dismissing that approach.
(I once worked with a group of nephrologists and had their data for a month - I knew more about what was going on in survival than they did, and they had 40 years of treating patients between them. Similarly with bio-engineers - what they regard as scientific fact was mere hypothesis in my statistician's book. The maxim "Let the data decide" is a good one.)
2.Independent Effect
Logistic model: logit Pr(Y=1) = a + b.E (unadjusted model)
I presume that one is dealing with covariates selected on Bradford Hill criteria -i.e. other factors which could reasonably be competing explanations for the Primary Exposure covariate (E).
In order to test whether there is a true effect of E on the outcome (Y) [ordinarily called the "independent" effect of E on Y, for now obvious reasons], we must adjust for the other factors (F1, etc.).
If a combination of the other factors abolishes the effect of E, or attenuates it (e.g. renders it non-significant), then it is unlikely that E explains Y. For example, one cannot abolish the effect of cigarette smoking (E) on the incidence of lung cancer (Y). We are always looking for independent effects - this is the main goal; all else is nugatory. [cf. confounders]
3. Confounders
Logistic model: logit Pr(Y=1) = a* + b*.E + c.F1
Let F1 be a second covariate and let it contribute significantly to the model - such that it should be included.
Now F1 may contribute independently (of E) to Y. Then we have learned that F1 is also important. In this case b is not similar to b*
Alternatively the inclusion of F1 might modify the effect of E on Y . If so, irrespective of the amount of modification, provided F1 is a significant contributor to the model, it should be included.
Now, going back one step, immediately after baseline we have F1,...,Fm (i.e. m other factors).
So we see that any selection method is reasonable in our pursuit of independent effects.
We can force E into the model and then select one (or more) of this set (forward, backward, stepwise, best-subsets).
Better may be to bootstrap Y 1000 times (sampling separately from the 1s and 0s in the correct proportions), fit 1000 models, run 1000 analyses, and discover which of F1-Fm appear most frequently in those analyses.
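(A rough Python sketch of that stratified bootstrap idea, on invented toy data - the `looks_associated` step is only a crude stand-in for fitting a model and selecting factors in a real analysis:)

```python
import random
from collections import Counter

random.seed(1)

# Toy data: each record has a binary outcome y and candidate factors F1..F3.
data = [{"y": random.random() < 0.4,
         "F1": random.random(), "F2": random.random(), "F3": random.random()}
        for _ in range(200)]
factors = ["F1", "F2", "F3"]

cases    = [r for r in data if r["y"]]
controls = [r for r in data if not r["y"]]

def stratified_bootstrap():
    """Resample cases and controls separately, preserving their proportions."""
    return ([random.choice(cases) for _ in cases] +
            [random.choice(controls) for _ in controls])

def looks_associated(sample, f, cutoff=0.05):
    # Stand-in for fitting a model and selecting f: crude difference in
    # the factor's mean between cases and controls in this resample.
    mean = lambda rows: sum(r[f] for r in rows) / len(rows)
    yes = [r for r in sample if r["y"]]
    no  = [r for r in sample if not r["y"]]
    return abs(mean(yes) - mean(no)) > cutoff

counts = Counter()
for _ in range(1000):
    sample = stratified_bootstrap()
    counts.update(f for f in factors if looks_associated(sample, f))

# Inclusion frequency out of 1000 resamples, per factor.
print({f: counts[f] for f in factors})
```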
4. m very Large
There are many problems with m large.
If m is large you will use forward selection - nothing else routine will work [the search space is enormous].
Or perhaps a penalised likelihood Logistic Model (with Lasso, or SCAD, or H-likelihood penalty).
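(For a sense of what an L1 penalty like the lasso actually does to a weak factor's coefficient, here is a minimal sketch of the soft-thresholding step at the heart of coordinate-descent fitting - a generic illustration, not a full penalised-likelihood fit:)

```python
def soft_threshold(b, lam):
    """Lasso proximal step: shrink coefficient b toward zero by lam,
    setting it exactly to zero when |b| <= lam. This is how the L1
    penalty drops weak factors from the model entirely."""
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

# A weak factor's coefficient is zeroed out; a strong one is only shrunk.
print(soft_threshold(0.08, 0.2))  # 0.0 -> factor dropped from the model
print(soft_threshold(0.90, 0.2))  # ~0.7 -> factor kept, but shrunk
```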
Breaking the factors into sets is ad-hoc - I know of no general theory to guide one.
Also, typically, the interactions cannot be considered. This latter point is critical and brings much published work into disrepute - especially in genetics.
Remember in all of this you are trying to assess the independent effect of E, not simply build a model.
__________________________________________________________________
Hi Kim,
I think you need to look at causal inference. The approaches you have outlined are both outdated and troublesome, as putting in lots of variables that are confounded and causal can create spurious relationships. You need to think about the causal relationships before fitting the model in code; that will help determine what should go into the model.
The 10% rule is a bit of a worrying rule of thumb when it comes to model building.
Look at Judea Pearl's book "The Book of Why"; Miguel Hernan is another relevant author.
Also, as an aside, odds ratios are being shunned in favour of measures of risk, as few people seem to understand them and risk ratios are not much harder to calculate.
______________________________________________________________________________
My response is to be very wary of any completely automated process for modelling with "large numbers" of potential covariates. It's the classic fishing exercise leading to "X causes cancer" [small study of badly-selected sample shows] Daily Mail headline. Plus, in a medical context, many of your covariates will be self-reported based on memory.
___________________________________________________________________________
-----Original Message-----
From: Kim Pearce
Sent: 21 February 2020 15:25
To: [log in to unmask] ([log in to unmask]) <[log in to unmask]>
Subject: Modelling with a large number of confounders: your views
Hello everyone,
I just wondered if I could ask your opinion on the situation where we are building a model and our goal is to obtain a valid measure of "effect", e.g. a valid estimate of the exposure-disease relationship via an odds ratio (rather than a good predictive model), but we have a large group of potential confounders to consider. I report a few techniques that I have seen (listed below) and I would appreciate your views.
I will take a hypothetical example where we have a binary outcome (say presence or absence of breast cancer) and a primary independent (exposure) variable (say "whether or not a woman has ever given birth" ["ever_birth"]).
The definition of a confounder is a covariate that is associated with both the outcome of interest and a primary independent variable or risk factor, where adjusting for the confounder will change the primary effect estimate (the odds ratio in this case). I know that, as a rule of thumb, if the odds ratio of "ever_birth" changes by 10% upon addition of a potential confounder then that potential confounder is considered to be a confounder and should be retained in the model (https://online.stat.psu.edu/stat507/node/34/ ; Modern Epidemiology (3rd edition), Greenland and Rothman, 2008, Ch 15). Also, if we think that a variable is a confounder, we do not statistically test whether it is associated with both the primary independent variable and the outcome.
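(The 10% rule of thumb is easy to write down explicitly; the odds ratios below are invented purely for illustration:)

```python
def is_confounder_by_10pct(or_unadjusted, or_adjusted, threshold=0.10):
    """Flag a covariate as a confounder if adjusting for it moves the
    exposure odds ratio by more than `threshold` (relative change)."""
    return abs(or_adjusted - or_unadjusted) / or_unadjusted > threshold

# Hypothetical odds ratios for "ever_birth":
print(is_confounder_by_10pct(2.00, 2.05))  # False: 2.5% change, can drop
print(is_confounder_by_10pct(2.00, 1.70))  # True: 15% change, retain
```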
Most examples that we see in texts consider only a few potential confounders. In real life there are many potential confounders that are identified at the design stage of the study by taking into account findings from previous epidemiological research and what is known about the mechanisms of the disease.
Controlling for all confounders may produce the "gold standard" odds ratio estimate for "ever_birth", but entering many variables into a model can make the confidence interval around the "ever_birth" odds ratio estimate wide (less precise).
I have read one excellent text which deals with the issue of modelling with confounders: Kleinbaum and Klein's "Logistic Regression: a self learning text" (2010) (Chapter 6 & 7). For the situation when we are considering a model containing only potential confounders and not interaction terms (of the type exposure x potential confounder) they suggest:
1) Creating the "gold standard" odds ratio estimate of "ever_birth" by entering *all* potential confounders into the model. Observe the point estimate of the odds ratio for "ever_birth" and the 95% confidence interval around this odds ratio estimate.
2) Look at different subsets of the potential confounders. For each subset, observe the point estimate of the odds ratio for "ever_birth".
3) a) Select the subsets whose odds ratio estimate for "ever_birth" is approximately the same as that of the gold standard odds ratio estimate (each of these subsets "controls for confounding"), and hence
b) Control for that subset of potential confounders that produces the narrowest 95% confidence interval around the odds ratio estimate for "ever_birth" (provided it is narrower than the "gold standard" 95% confidence interval around the odds ratio estimate for "ever_birth").
4) For all subsets whose odds ratio estimate for "ever_birth" is approximately the same as that of the gold standard odds ratio estimate, if none produces a narrower 95% confidence interval around the odds ratio estimate for "ever_birth" (compared to that of the "gold standard"), it is scientifically better to control for all potential confounders (i.e. use the gold standard odds ratio estimate).
Using steps 2 and 3 above , we will have identified a specific subset of potential confounders which, when controlled for, has gained a meaningful amount of precision (i.e. narrowed the 95% confidence interval around the odds ratio estimate for "ever_birth" compared to the "gold standard" 95% confidence interval) without sacrificing validity (i.e. without changing the point estimate of the odds ratio for "ever_birth").
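(A small Python sketch of steps 1-4 above, using invented fit results - in practice each subset's odds ratio estimate and confidence interval width would come from a fitted model:)

```python
# Hypothetical fit results per confounder subset: each entry is
# (odds ratio estimate for "ever_birth", width of its 95% CI).
gold_subset = frozenset({"C1", "C2", "C3"})
fits = {
    gold_subset:             (2.00, 1.10),  # the "gold standard"
    frozenset({"C1", "C2"}): (2.04, 0.80),  # similar OR, narrower CI
    frozenset({"C1"}):       (2.50, 0.60),  # narrower CI but OR drifts
    frozenset({"C2", "C3"}): (1.98, 1.30),  # similar OR, wider CI
}

gold_or, gold_width = fits[gold_subset]

# Step 3a: keep subsets whose OR is within 10% of the gold standard.
valid = {s: (or_, w) for s, (or_, w) in fits.items()
         if abs(or_ - gold_or) / gold_or <= 0.10}

# Steps 3b/4: among valid subsets, pick the narrowest CI; fall back to
# the gold standard if nothing improves on its precision.
best = min(valid, key=lambda s: valid[s][1])
if valid[best][1] >= gold_width:
    best = gold_subset

print(sorted(best))  # ['C1', 'C2'] - same validity, better precision
```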
Additionally, as I understand it, Greenland and Rothman (Modern Epidemiology (3rd edition), 2008, Ch 15), who refer to Kleinbaum et al.'s "Epidemiologic Research" (1984), imply the following is also appropriate:
1) Adjust for all potential confounders and observe the odds ratio estimate for "ever_birth" (the "gold standard").
2) Delete potential confounders one by one and observe the resulting estimate of the odds ratio estimate for "ever_birth" each time.
3) The potential confounder having the smallest change in estimated odds ratio for "ever_birth" (say < 10% change) is removed from the potential confounder set.
4) Delete potential confounders one by one from the reduced confounder set, observing the resulting estimate of the odds ratio for "ever_birth" each time.
5) The potential confounder having the smallest change in estimated odds ratio for "ever_birth" (where "change" is evaluated by comparing to the estimated odds ratio for "ever_birth" obtained after adjusting for the reduced confounder set) is removed from the reduced confounder set.
6) Steps 4 & 5 are repeated until the total change in the estimated odds ratio for "ever_birth" accrued from the start of the process (when all confounders were included) exceeds the chosen limit of importance (say 10%).
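(A sketch of this backward-deletion loop in Python; the multiplicative effect of removing each confounder is invented so the example runs without real data - in practice each OR would come from refitting the model:)

```python
# Invented effects: removing confounder c multiplies the estimated OR
# for "ever_birth" by removal_effect[c].
removal_effect = {"C1": 1.02, "C2": 1.05, "C3": 1.30}

def or_given(retained):
    """Hypothetical OR for 'ever_birth' adjusting only for `retained`."""
    or_ = 2.00  # gold-standard OR with all confounders in the model
    for c in removal_effect:
        if c not in retained:
            or_ *= removal_effect[c]
    return or_

LIMIT = 0.10
retained = set(removal_effect)
gold = or_given(retained)

while retained:
    # Try deleting each remaining confounder; find the least-change one
    # relative to the current (reduced-set) estimate - steps 4 & 5.
    trial = {c: or_given(retained - {c}) for c in retained}
    best = min(trial, key=lambda c: abs(trial[c] - or_given(retained)))
    # Step 6: stop before total drift from the gold standard exceeds 10%.
    if abs(trial[best] - gold) / gold > LIMIT:
        break
    retained.remove(best)

print(sorted(retained))  # ['C3'] - the one confounder that really matters
```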
The only other model building technique involving confounders that I have seen uses a "hierarchical" approach i.e. all known confounders are entered as a "first block" and either one or a group of predictor variables are entered as a "second block". The additional contribution of the predictors in the second block is assessed via statistical testing (e.g. F change test for multiple regression https://www.youtube.com/watch?v=xgA8qY63dX0 , likelihood ratio test for logistic regression). However, again, I have only seen examples where there are very few confounders entered into the "first block".
I assume that, if there were no previous research regarding the effects of the predictor variables in block 2 on the outcome, we could employ variable selection on the block 2 variables rather than "forced entry"(?).
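(The likelihood ratio test for the block-2 contribution is just twice the difference in log-likelihoods, compared against a chi-square critical value; the log-likelihoods below are invented for illustration:)

```python
# Invented log-likelihoods; a real analysis would take them from the
# fitted logistic models with and without the block-2 predictors.
ll_block1_only = -412.6   # model: confounders (block 1) only
ll_with_block2 = -405.1   # model: block 1 + two block-2 predictors

lr_stat = 2 * (ll_with_block2 - ll_block1_only)  # ~ chi-square, df = 2
critical_95 = 5.99                               # chi-square(2) at alpha = 0.05

# If the statistic exceeds the critical value, block 2 adds significantly.
print(round(lr_stat, 1), lr_stat > critical_95)
```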
Modelling with confounders is not easy as it is a marriage between clinical and statistical approaches. In recent times, I have seen (unpublished) examples where clinicians have stipulated that groups of potential confounders (i.e. where each group comprises clinically related sets of variables) should be considered *in order* of clinical importance/interest...so, for example,
1) The unadjusted odds ratio estimate for "ever_birth" is observed.
2) The odds ratio estimate for "ever_birth" is observed when base variables (which a clinician states should always be controlled for e.g. "age", "study site" etc) are included in the model. The odds ratio estimate for "ever_birth" in 2) is compared to 1).
3) The odds ratio estimate for "ever_birth" is observed when base variables and group A variables (e.g. group A variables appertaining to reproduction) are included in the model. The odds ratio estimate for "ever_birth" in 3) is compared to 2) i.e. here you would be evaluating group A confounding with background adjustment for the base variables.
4) The odds ratio estimate for "ever_birth" is observed when base variables, group A variables and group B variables (e.g. where group B variables appertain to socio economic class) are included in the model. The odds ratio estimate for "ever_birth" in 4) is compared to 3) i.e. here you would be evaluating group B confounding with background adjustment for the base variables and group A.
5) Etc. for subsequent (ordered) groups of stipulated variables.
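(A minimal sketch of this ordered-group comparison, with invented odds ratios at each nested stage of adjustment:)

```python
# Invented ORs for "ever_birth" at each nested stage of adjustment,
# entered in the clinician-stipulated order.
stages = [
    ("unadjusted",                 2.40),
    ("+ base (age, site)",         2.10),
    ("+ group A (reproduction)",   1.95),
    ("+ group B (socio-economic)", 1.93),
]

changes = []
for (prev_name, prev_or), (name, or_) in zip(stages, stages[1:]):
    # Percentage change in the OR relative to the previous stage:
    # this is what each step of the ordered comparison evaluates.
    pct = 100 * (or_ - prev_or) / prev_or
    changes.append((name, round(pct, 1)))
    print(f"{name}: OR = {or_} ({pct:+.1f}% vs previous stage)")
```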
What are your views on the above approach?
This is not a simple area of statistics and I appreciate any views that you may have - especially as regards entering confounders which have been "grouped" into related sets.
Many thanks in advance,
Kim
Dr Kim Pearce PhD, CStat, Fellow HEA
Senior Statistician
Faculty of Medical Sciences Graduate School Room 3.14 3rd Floor Ridley Building 1 Newcastle University Queen Victoria Road Newcastle Upon Tyne
NE1 7RU
Tel: (0044) (0)191 208 8142
You may leave the list at any time by sending the command
SIGNOFF allstat
to [log in to unmask], leaving the subject line blank.