Marilyn, you are right: regression to the mean would affect both groups equally, provided no bias was introduced by selection or analysis. However, regression to the mean adds a little more (unnecessary) noise to the signal, so this is only a small red flag.

More questions arise each time I look at the paper.

“At each site, we invited 120–140 participants with the highest risk of cardiovascular disease (based on the ratio of total cholesterol to HDL; mean: 5.18; range: 1.8–14.8).”

 

Table 1 shows “Baseline characteristics of participants at high risk of cardiovascular disease”

 

But a ratio of total cholesterol to HDL cholesterol of 1.8 (the bottom of the range above) would be quite low, which sits oddly with selecting participants at the "highest risk".

Also, there were 124 participants in the naturopathic group and 122 in the control, but only 105 and 112 of these contributed to the average total-C to HDL-C ratio, and fewer still contributed to the averages of the individual components of the lipid profile. This doesn't make sense and is incompatible with the description of the selection procedure.

Michael

 

 

From: Marilyn Mann [mailto:[log in to unmask]]
Sent: 14 May 2013 13:24
To: Michael Power; [log in to unmask]
Subject: Re: why did CMAJ publish this study?

 

Dear Michael and Christie

I share your skepticism with respect to this study.

Michael, can you explain why regression to the mean would produce a bias in favor of the naturopathy group? Why wouldn't it affect both groups equally?

I am particularly skeptical about any intervention involving the use of plant sterol supplements or functional foods containing plant sterols because, while plant sterols lower LDL, they have never been shown to prevent cardiovascular events (i.e., there have been no RCTs testing whether plant sterols lower the risk of clinical endpoints). Moreover, there is uncertainty as to whether plant sterols are safe given that the use of plant sterol supplements increases serum plant sterols. We know that very high plant sterol levels, as in sitosterolemia, cause xanthomas and premature cardiovascular disease. Whether more moderate elevations in plant sterols cause harm is not known.

Regards
Marilyn Mann

Twitter:  @MarilynMann
Blog:  http://marilynmann.wordpress.com

Sent from my Verizon Wireless Phone



-----Original message-----

From: Michael Power <[log in to unmask]>
To: [log in to unmask]
Sent: Tue, May 14, 2013 08:50:35 GMT+00:00
Subject: Re: why did CMAJ publish this study?

Christie

I shall resist the temptation to speculate why the CMAJ published the trial, or with what expertise the peer reviewers reviewed the paper. But I do wonder whether they would have published the trial if the results had been "negative".

I have not seen the press release, but it would have been irresponsible not to comment on the size of the effect, the precision of the results, the risks of bias and error, the applicability of the results, the implications for practice, and how the results fit with similar studies.

If I had been asked, I would have recommended against funding the study, because the prior probability of it producing a useful result was very low if you are a skeptic and very high if you are a believer, and thus the study would have been unlikely to change anything. I wonder if this is the reason no sample size calculation was reported?
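For comparison, a conventional sample-size calculation for a two-group trial takes only a few lines. A minimal sketch, using the standard normal-approximation formula; the SD and minimal important difference below are hypothetical, since the paper reports neither:

```python
import math

# Standard two-sample sample-size formula (normal approximation):
#   n per group = 2 * (z_alpha + z_beta)^2 * (sd / delta)^2
z_alpha = 1.96   # two-sided alpha = 0.05
z_beta = 0.84    # power = 0.80
sd = 1.2         # assumed SD of the outcome (hypothetical)
delta = 0.3      # assumed minimal important difference (hypothetical)

n_per_group = 2 * (z_alpha + z_beta) ** 2 * (sd / delta) ** 2
print(math.ceil(n_per_group))  # -> 251 per group under these assumptions
```

The point is not these particular numbers but that the calculation is routine, so its absence from the report is conspicuous.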


There are a number of significant risks of bias, including:

Randomization was conducted by the Canadian College of Naturopathic Medicine

Participants were selected on the basis of higher ratios of total cholesterol to HDL cholesterol. Because cholesterol measurements are quite variable, and selection seems to have been done on only one measurement, this would have introduced a risk of bias from "regression to the mean".
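The selection point can be shown with a toy simulation (all numbers hypothetical, not taken from the paper): select people on a single noisy cholesterol reading, measure them again, and the group mean falls with no intervention at all.

```python
import random

random.seed(0)

# Hypothetical values for illustration only: true total-C/HDL-C ratios
# centred at 4.0, with independent measurement noise on each reading.
n = 100_000
true_ratio = [random.gauss(4.0, 1.0) for _ in range(n)]
first_reading = [t + random.gauss(0.0, 0.8) for t in true_ratio]

# Select "high risk" participants on one noisy reading, as the study did.
selected = [i for i in range(n) if first_reading[i] > 5.0]

# A second, independent reading of the same people regresses toward the mean.
second_reading = [true_ratio[i] + random.gauss(0.0, 0.8) for i in selected]

mean_first = sum(first_reading[i] for i in selected) / len(selected)
mean_second = sum(second_reading) / len(selected)

print(f"mean at selection: {mean_first:.2f}")   # inflated by selection on noise
print(f"mean on re-test:   {mean_second:.2f}")  # lower, with no treatment given
```

Because both arms were selected the same way, the fall affects both equally; the harm is the extra noise and the inflated appearance of baseline risk.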

"The naturopathic doctors collected all biometric and validated questionnaire measures", and were not blinded to study group.

Tables 1 (baseline) and 2 (results) are not comparable. First, table 1 shows data plus/minus standard deviations, while table 2 shows data plus/minus standard errors of the means. This is more a problem of spin than substance, as using the SE makes the data in table 2 look more precise than the data with SD in table 1. Secondly, and most problematically, table 1 shows real data, i.e. data with real numerators and denominators, while table 2 shows imagined, or at least engineered, data - the real numbers have been adjusted for baseline measures of the outcome variables, and missing data have been created by "a multiple imputation". This data engineering is particularly problematic given the opacity of the imputation process and the 30% drop-out rate. With a 30% drop-out rate the missing data rate would be > 30%. I wonder why a table 3 with real outcome data was not published? If it had been, we would be able to see whether or not the data engineering had created the statistically significant results.
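The SD-versus-SE point is simple arithmetic: SE = SD / sqrt(n), so at the group sizes reported (105 and 112 contributing to the lipid averages) the same spread looks roughly ten times tighter when presented as a standard error. A small sketch, with an illustrative SD:

```python
import math

sd = 1.2  # hypothetical baseline SD for a lipid measure

# Group sizes from the paper's lipid-ratio denominators.
for n in (105, 112):
    se = sd / math.sqrt(n)
    print(f"n={n}: SD = {sd:.2f}, SE = {se:.3f}")  # SE about a tenth of the SD
```

Nothing about the data changes between the two presentations; only the apparent precision does.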

This seems to be the first trial of its kind - and we know that first publications often turn out to have results that are more extreme than subsequent trials.

These issues are a forest of flapping red flags for bias.

The response to these issues, especially the data engineering, should have been to adjust the confidence intervals (i.e. widen them). But I can only speculate that the reason this was not done is that all significant differences would have evaporated faster than n-butyl glycol (i.e. 163 times faster than ether http://www.siegwerk.com/fileadmin/user_upload/cc/Data_Sheets/TM/Verdunstungsgeschwindigkeit_e.pdf).

Even if the results are real, the implication for practice would be to do nothing different: the intervention is so vaguely specified that it cannot be repeated, and the outcomes are not patient-important, and thus should not be the basis for offering treatment.

The funders were poor stewards of the money that was entrusted to them to manage.

Michael