JiscMail: email discussion lists for the UK Education and Research communities

ALLSTAT Archives
allstat@JISCMAIL.AC.UK


ALLSTAT February 2020
Subject: Summary of Replies: Modelling with a large number of confounders
From: Kim Pearce <[log in to unmask]>
Reply-To: Kim Pearce <[log in to unmask]>
Date: Fri, 28 Feb 2020 09:05:22 +0000

Hello everyone,

Many thanks to those who replied to the question I posed last Friday regarding the consideration of confounding variables when modelling.

I provide edited highlights of the replies I received (below) for those who are interested. 

As I had expected, views were wide ranging.

Just a few comments from me:

1) One view was that, to assess which "other factors" (from a subset of F1,...,Fm) should be included alongside the primary independent (exposure) variable, E, in the model, we could force E into the model and then select one or more of this set via an automated variable selection procedure, i.e. forward, backward or stepwise selection (or "best subsets"). Automated variable selection procedures obviously make use of significance testing, but I have seen quite a few authors who argue against using statistical testing to select confounders (Miettinen, 1976; Breslow and Day, 1980; Greenland and Neutra, 1980; Greenland, 1989; Kleinbaum and Klein, 2010). Greenland and Rothman (2008) do admit that "one often sees statistical tests used to select confounders (as in stepwise regression), rather than change in estimate criterion....it has been argued that these testing approaches will perform adequately if the tests have high enough power to detect any important confounder effects. One way to ensure adequate power is to raise the alpha level for rejecting the null (of no confounding) to 0.2 or even more instead of the traditional 0.05 level (Dales and Ury, 1978)". Also, a few texts state that automated variable selection is not recommended when model building except where no previous research exists, or where causality is not of interest and you merely wish to find a model that fits your data (e.g. Field, 2013; Agresti & Finlay, 1986; Menard, 1995).
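To make the raised-alpha idea concrete, here is a minimal pure-Python sketch of the likelihood-ratio test such a procedure would apply to a single candidate confounder; the deviance numbers are invented purely for illustration:

```python
import math

def lr_test_pvalue(deviance_without, deviance_with):
    """P-value of a likelihood-ratio test on 1 degree of freedom.

    The LR statistic is the drop in deviance when the candidate
    confounder enters the model; for a chi-square distribution with
    1 df the survival function is exactly erfc(sqrt(x / 2))."""
    lr_statistic = deviance_without - deviance_with
    return math.erfc(math.sqrt(lr_statistic / 2.0))

# Invented deviances: adding the candidate drops the deviance from
# 210.0 to 207.5, an LR statistic of 2.5, giving p of about 0.11.
p = lr_test_pvalue(210.0, 207.5)
keep_at_020 = p < 0.20   # retained under Dales and Ury's raised alpha
keep_at_005 = p < 0.05   # but discarded at the traditional 0.05 level
```

The point of the sketch is only that the same candidate survives at alpha = 0.2 but not at 0.05, which is exactly the power argument quoted above.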

2) Causal diagrams are a natural first step before attempting to construct the model: we would consider the possible causal relationships between the exposure, outcome, potential confounders and other relevant variables.

3) Comparing adjusted and unadjusted estimates of the exposure effect using the "10% change" criterion is, I agree, a very rough rule of thumb; its arbitrary nature is discussed at length in Greenland & Rothman (2008).

4) I accept that entering *many* confounders in a model is problematic and, in such circumstances, forward selection of confounders (as opposed to backward elimination, of the type I spoke about in my email) is recommended instead; this point is also mentioned in Greenland & Rothman (2008).

5) Yes, risk ratios are easier to understand (compared to the odds ratio I spoke about in my email last week) and I have read quite a few papers which discuss the comparison of relative risk and odds ratio values, in particular the circumstances in which these values are similar and hence it is valid to interpret the odds ratio as a relative risk.

Here are a few which I found particularly informative:

~ When can odds ratios mislead? Davies HT, Crombie IK, Tavakoli M. https://www.ncbi.nlm.nih.gov/pubmed/9550961

~ When to use the odds ratio or the relative risk? Schmidt, Kohlmann

~ Understanding relative risk, odds ratio, and related terms: as simple as it can get. Andrade. https://www.ncbi.nlm.nih.gov/pubmed/26231012

Finally, here are the full details of some of the informative texts which detail the complex area of model building with confounders:

Modern Epidemiology (3rd edition). Greenland and Rothman, 2008 (Chapter 15)

Logistic Regression: A Self-Learning Text. Kleinbaum and Klein, 2010 (Chapters 6 & 7)

____________________________________________________________________
ALLSTAT REPLIES:

Dear Kim 

I think this is a big topic and I have many criticisms of some of the proposals below.

I don't immediately recognise the legitimacy of some of what is proposed.

I will just make a few logical remarks.

1. Expert medical opinion

The definition of a medical expert is a medical doctor 5 miles from home, so we could be forgiven for dismissing that approach.
(I once worked with a group of nephrologists and had their data for a month; I knew more about what was going on in survival than they did, and they had 40 years of treating patients between them. Similarly with bio-engineers: what they regard as scientific fact was mere hypothesis in my statistician's book. The maxim "let the data decide" is a good one.)

2. Independent effect

Logistic model: logit Pr(Y=1) = a + b·E   (unadjusted model)

I presume that one is dealing with covariates selected on Bradford Hill criteria, i.e. other factors which could reasonably be competing explanations for the primary exposure covariate (E).
In order to test whether there is a true effect of E on the outcome (Y), ordinarily called the "independent" effect of E on Y for now-obvious reasons, we must adjust for the other factors (F1, etc.).
If a combination of the other factors abolishes or attenuates the effect of E (e.g. renders it non-significant), then it is unlikely that E explains Y. For example, one cannot abolish the effect of cigarette smoking (E) on the incidence of lung cancer (Y). We are always looking for independent effects; this is the main goal, and all else is nugatory. [cf. confounders]

3. Confounders

Logistic model: logit Pr(Y=1) = a* + b*·E + c·F1

Let F1 be a second covariate, and let it contribute significantly to the model, such that it should be included.
Now F1 may contribute independently (of E) to Y. Then we have learned that F1 is also important; in this case b is not similar to b*.
Alternatively, the inclusion of F1 might modify the effect of E on Y. If so, irrespective of the amount of modification, provided F1 is a significant contributor to the model, it should be included.
Now, going back one step, immediately after baseline we have F1,...,Fm (i.e. m other factors).
So we see that any selection method is reasonable in our pursuit of independent effects.
We can force E into the model and then select one (or more) of this set (forward, backward, stepwise, best subsets).
Better may be to bootstrap Y 1000 times (sampling separately from the 1s and 0s in the correct proportions), fit 1000 models, do 1000 analyses, and discover which of F1-Fm appear most frequently in the analyses.
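The stratified bootstrap the reply describes might be sketched as follows (pure Python); the per-replicate model fit and variable selection step is deliberately left as a commented placeholder, since only the resampling scheme is being illustrated:

```python
import random

def stratified_bootstrap_indices(y, rng):
    """Resample cases (y == 1) and controls (y == 0) separately, so each
    bootstrap replicate preserves the original outcome proportions."""
    cases = [i for i, v in enumerate(y) if v == 1]
    controls = [i for i, v in enumerate(y) if v == 0]
    return ([rng.choice(cases) for _ in cases] +
            [rng.choice(controls) for _ in controls])

rng = random.Random(1)
y = [1] * 30 + [0] * 70    # toy outcome vector: 30 cases, 70 controls
selection_counts = {}       # factor name -> times selected over replicates

for _ in range(1000):
    idx = stratified_bootstrap_indices(y, rng)
    # Refit the model on rows idx and record which of F1..Fm the chosen
    # selection procedure keeps, e.g. (selected_factors is a
    # hypothetical placeholder for that whole fit-and-select step):
    #   for f in selected_factors(idx):
    #       selection_counts[f] = selection_counts.get(f, 0) + 1

# Factors appearing in most of the 1000 replicates are the stable ones.
```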

4. m very large

There are many problems with m large.

If m is large you will use forward selection; nothing else routine will work [the search space is enormous].
Or perhaps a penalised-likelihood logistic model (with a lasso, SCAD, or h-likelihood penalty).

Breaking the factors into sets is ad hoc; I know of no general theory to guide one.

Also, typically, the interactions cannot be considered. This latter point is critical and brings much published work into disrepute, especially in genetics.

Remember in all of this you are trying to assess the independent effect of E, not simply build a model.
__________________________________________________________________
Hi Kim,

I think you need to look at causal inference. The approaches you have outlined are both outdated and troublesome, as putting in lots of variables that are confounded and causal can create spurious relationships. You need to think about the causal relationships before fitting the model, and that will help determine what should go into it.

The 10% rule is a bit of a worrying misnomer when it comes to model building. 

Look at Judea Pearl's book "The Book of Why"; Miguel Hernán is another relevant author.

Also, as an aside, odds ratios are being shunned in favour of measures of risk, as few people seem to understand them and risk ratios are not much harder to calculate.
______________________________________________________________________________

My response is to be very wary of any completely automated process for modelling with "large numbers" of potential covariates. It's the classic fishing exercise leading to the "X causes cancer" [small study of a badly-selected sample shows] Daily Mail headline. Plus, in a medical context, many of your covariates will be self-reported based on memory.
___________________________________________________________________________

-----Original Message-----
From: Kim Pearce 
Sent: 21 February 2020 15:25
To: [log in to unmask] ([log in to unmask]) <[log in to unmask]>
Subject: Modelling with a large number of confounders: your views


Hello everyone,

I just wondered if I could ask your opinion on the situation where we are building a model and our goal is to obtain a valid measure of "effect", e.g. to obtain a valid estimate of the exposure-disease relationship via an odds ratio (rather than obtaining a good predictive model), but we have a large group of potential confounders to consider. I report a few techniques that I have seen (listed below) and I would appreciate your views.

I will take a hypothetical example where we have a binary outcome (say presence or absence of breast cancer) and a primary independent (exposure) variable (say "whether or not a woman has ever given birth" ["ever_birth"]).  
The definition of a confounder is a covariate that is associated with both the outcome of interest and a primary independent variable or risk factor, and adjusting for the confounder will change the primary effect estimate (the odds ratio in this case). I know that, as a rule of thumb, if the odds ratio of "ever_birth" changes by 10% upon addition of a potential confounder, then that potential confounder is considered to be a confounder and should be retained in the model (https://online.stat.psu.edu/stat507/node/34/; Modern Epidemiology (3rd edition), Greenland and Rothman, 2008, Ch 15). Also, if we think that a variable is a confounder, we do not statistically test whether it is associated with both the primary independent variable and the outcome.
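Read literally, the 10% rule of thumb is just the following arithmetic (the odds ratios here are invented for illustration):

```python
def exceeds_change_criterion(or_crude, or_adjusted, threshold_pct=10.0):
    """Change-in-estimate check: flag the covariate as a confounder worth
    retaining if adjustment moves the exposure OR by > threshold_pct %."""
    return abs(or_adjusted - or_crude) / or_crude * 100.0 > threshold_pct

# Invented numbers: a crude "ever_birth" OR of 2.0 moving to 2.3 after
# adjustment is a 15% change, so the covariate would be retained; a move
# to 2.1 is only a 5% change, so it would not be flagged.
assert exceeds_change_criterion(2.0, 2.3)
assert not exceeds_change_criterion(2.0, 2.1)
```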

Most examples that we see in texts consider only a few potential confounders.  In real life there are many potential confounders that are identified at the design stage of the study by taking into account findings from previous epidemiological research and what is known about the mechanisms of the disease.

Controlling for all confounders may produce the "gold standard" odds ratio estimate for "ever_birth", but entering many variables into a model can make the confidence interval around the "ever_birth" odds ratio estimate wide (less precise).

I have read one excellent text which deals with the issue of modelling with confounders: Kleinbaum and Klein's "Logistic Regression: A Self-Learning Text" (2010) (Chapters 6 & 7). For the situation when we are considering a model containing only potential confounders and not interaction terms (of the type exposure x potential confounder), they suggest:

1) Create the "gold standard" odds ratio estimate of "ever_birth" by entering *all* potential confounders into the model. Observe the point estimate of the odds ratio for "ever_birth" and the 95% confidence interval around this odds ratio estimate.

2) Look at different subsets of the potential confounders. For each subset, observe the point estimate of the odds ratio for "ever_birth".

3) a) Select the subsets whose odds ratio estimate for "ever_birth" is approximately the same as the gold standard estimate (each of these subsets "controls for confounding"), and hence
b) Control for that subset of potential confounders that produces the narrowest 95% confidence interval around the odds ratio estimate for "ever_birth" (provided it is narrower than the "gold standard" 95% confidence interval).

4) For all subsets whose odds ratio estimate for "ever_birth" is approximately the same as the gold standard estimate, if none produces a narrower 95% confidence interval around the odds ratio estimate for "ever_birth" (compared to that of the "gold standard"), it is scientifically better to control for all potential confounders (i.e. use the gold standard odds ratio estimate).

Using steps 2 and 3 above, we will have identified a specific subset of potential confounders which, when controlled for, has gained a meaningful amount of precision (i.e. narrowed the 95% confidence interval around the odds ratio estimate for "ever_birth" compared to the "gold standard" 95% confidence interval) without sacrificing validity (i.e. without changing the point estimate of the odds ratio for "ever_birth").
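Steps 1) to 4) amount to a small search over per-subset fits. Here is a sketch, with every odds ratio and confidence interval invented for illustration (in practice each entry would come from refitting the logistic model on that confounder subset), and "approximately the same" read, as an assumption, as within 10% of the gold standard:

```python
# Invented (OR, CI low, CI high) for the "ever_birth" estimate under
# each confounder subset; the full set is the gold standard.
ESTIMATES = {
    frozenset({"age", "smoking", "parity"}): (1.80, 1.10, 2.95),
    frozenset({"age", "smoking"}):           (1.82, 1.20, 2.76),
    frozenset({"age", "parity"}):            (1.79, 1.15, 2.79),
    frozenset({"age"}):                      (1.50, 1.05, 2.14),
    frozenset({"smoking"}):                  (1.78, 1.02, 3.11),
}
GOLD = frozenset({"age", "smoking", "parity"})

def pick_subset(estimates, gold_key, tol_pct=10.0):
    gold_or, gold_lo, gold_hi = estimates[gold_key]
    # Step 3a: subsets whose OR stays within tol_pct of the gold standard.
    valid = {k: v for k, v in estimates.items()
             if abs(v[0] - gold_or) / gold_or * 100.0 <= tol_pct}
    # Steps 3b/4: take the narrowest CI among them, but fall back to the
    # gold standard if nothing beats its precision.
    best = min(valid, key=lambda k: valid[k][2] - valid[k][1])
    if valid[best][2] - valid[best][1] < gold_hi - gold_lo:
        return best
    return gold_key

chosen = pick_subset(ESTIMATES, GOLD)
# Here {"age", "smoking"} wins: OR 1.82 is within 10% of 1.80 and its
# CI (1.20, 2.76) is narrower than the gold standard's (1.10, 2.95).
```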

Additionally, as I understand it, Greenland and Rothman (Modern Epidemiology (3rd edition), 2008, Ch 15), who refer to Kleinbaum et al., "Epidemiologic Research", 1984, imply the following is also appropriate:

1) Adjust for all potential confounders and observe the odds ratio estimate for "ever_birth" (the "gold standard").
2) Delete potential confounders one by one and observe the resulting estimate of the odds ratio estimate for "ever_birth" each time.
3) The potential confounder having the smallest change in estimated odds ratio for "ever_birth" (say < 10% change) is removed from the potential confounder set.
4) Delete potential confounders one by one from the reduced confounder set, observing the resulting estimate of the odds ratio for "ever_birth" each time.
5) The potential confounder having the smallest change in estimated odds ratio for "ever_birth" (where "change" is evaluated by comparing to the estimated odds ratio for "ever_birth" obtained after adjusting for the reduced confounder set) is removed from the reduced confounder set.
6) Steps 4 & 5 are repeated until the total change in the estimated odds ratio for "ever_birth", accrued from the start of the process (when all confounders were included), exceeds the chosen limit of importance (say 10%).
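The backward-deletion recipe above can be sketched as follows. The fit_or() lookup table is a toy stand-in for refitting the logistic model on each confounder subset, and every odds ratio is invented for illustration:

```python
# Invented "ever_birth" OR estimates for every subset of three toy
# confounders; in practice each would come from a model refit.
TOY_OR = {
    frozenset({"age", "smoking", "parity"}): 1.80,   # gold standard
    frozenset({"age", "smoking"}):    1.82,
    frozenset({"age", "parity"}):     1.60,
    frozenset({"smoking", "parity"}): 1.50,
    frozenset({"age"}):     1.85,
    frozenset({"smoking"}): 1.40,
    frozenset({"parity"}):  1.30,
    frozenset():            1.20,
}

def fit_or(confounders):
    return TOY_OR[frozenset(confounders)]

def backward_delete(confounders, limit_pct=10.0):
    """Steps 1)-6): repeatedly drop the confounder whose removal changes
    the OR least (relative to the current reduced set), stopping when the
    total change from the gold standard would exceed limit_pct."""
    gold = fit_or(confounders)
    current = set(confounders)
    while current:
        ref = fit_or(current)
        weakest = min(current,
                      key=lambda c: abs(fit_or(current - {c}) - ref) / ref)
        candidate = current - {weakest}
        if abs(fit_or(candidate) - gold) / gold * 100.0 > limit_pct:
            break   # removing this one would drift too far from gold
        current = candidate
    return current

kept = backward_delete({"age", "smoking", "parity"})
# With these invented numbers, "parity" then "smoking" are dropped and
# only "age" survives as a confounder that must be retained.
```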

The only other model building technique involving confounders that I have seen uses a "hierarchical" approach, i.e. all known confounders are entered as a "first block" and either one or a group of predictor variables is entered as a "second block". The additional contribution of the predictors in the second block is assessed via statistical testing (e.g. the F-change test for multiple regression, https://www.youtube.com/watch?v=xgA8qY63dX0, or the likelihood ratio test for logistic regression). However, again, I have only seen examples where very few confounders are entered into the "first block".

I assume that, if there were no previous research regarding the effects of the block 2 predictor variables on the outcome, we could employ variable selection on the block 2 variables rather than "forced entry"(?).

Modelling with confounders is not easy, as it is a marriage between clinical and statistical approaches. In recent times, I have seen (unpublished) examples where clinicians have stipulated that groups of potential confounders (i.e. where each group comprises a clinically related set of variables) should be considered *in order* of clinical importance/interest. So, for example,

1) The unadjusted odds ratio estimate for "ever_birth" is observed.

2) The odds ratio estimate for "ever_birth" is observed when base variables (which a clinician states should always be controlled for, e.g. "age", "study site", etc.) are included in the model. The odds ratio estimate for "ever_birth" in 2) is compared to 1).

3) The odds ratio estimate for "ever_birth" is observed when base variables and group A variables (e.g. group A variables appertaining to reproduction) are included in the model. The odds ratio estimate for "ever_birth" in 3) is compared to 2), i.e. here you would be evaluating group A confounding with background adjustment for the base variables.

4) The odds ratio estimate for "ever_birth" is observed when base variables, group A variables and group B variables (e.g. where group B variables appertain to socio-economic class) are included in the model. The odds ratio estimate for "ever_birth" in 4) is compared to 3), i.e. here you would be evaluating group B confounding with background adjustment for the base variables and group A.

5) Etc. for subsequent (ordered) groups of stipulated variables.
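The staged comparison in steps 1) to 5) reduces to tracking the "ever_birth" OR as each stipulated group enters the model; all numbers below are hypothetical:

```python
# Hypothetical OR estimates for "ever_birth" at each adjustment stage,
# in the clinician's stipulated order.
STAGES = [
    ("unadjusted", 2.10),
    ("+ base variables (age, study site)", 1.90),
    ("+ group A (reproduction)", 1.72),
    ("+ group B (socio-economic)", 1.70),
]

# Percent change in the OR at each stage relative to the previous one;
# a large shift when a group enters signals confounding by that group.
changes = [
    (name, round((or_now - or_prev) / or_prev * 100.0, 1))
    for (_, or_prev), (name, or_now) in zip(STAGES, STAGES[1:])
]
# With these numbers, the base variables and group A each shift the OR
# by roughly 9.5%, while group B barely moves it.
```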

What are your views on the above approach?

This is not a simple area of statistics and I appreciate any views that you may have, especially as regards entering confounders which have been "grouped" into related sets.

Many thanks in advance,
Kim

Dr Kim Pearce PhD, CStat, Fellow HEA
Senior Statistician
Faculty of Medical Sciences Graduate School Room 3.14 3rd Floor Ridley Building 1  Newcastle University Queen Victoria Road Newcastle Upon Tyne
NE1 7RU

Tel: (0044) (0)191 208 8142

