A small but necessary addition to Tom's third point to Ahmed:

We also need to specify how soon after a study's completion its results must be made publicly available. I suppose the idea for the study and the study design automatically become public once it is registered as a protocol-driven trial. But what about studies that are exploratory (or even go under the guise of exploratory pilots)? How much time should be allowed for exploring ideas with small pilots before they must be shared?

regards,

rakesh

PS: I am not sure whether the comment about Ben's English was pointing to 'satisficing'. It is an established portmanteau, combining 'satisfy' with 'suffice', and seems to be an important decision-making tool in itself. :-)

On Thu, Aug 18, 2011 at 1:05 AM, Tom Jefferson <[log in to unmask]> wrote:
Hilda, I am not sure whether it applies to other drugs in other areas (although the bottom line for Vioxx, reboxetine, paroxetine, rosiglitazone etc. is similar). The only thing I know is that access to the clinical study reports of the whole oseltamivir trial programme has overturned our understanding of how the drug works. I do not read trials in journals any longer, as I am not sure I can trust anything that's written down. Where does that leave research synthesis?

Ben, can you translate your latest into English please?

Ahmed, we could start by pushing for any doctor or nurse involved in human experiments, such as trials, that are not (fully) published (in a repository, as you suggest) to face disciplinary proceedings - in the UK, before the General Medical Council, a fate generally considered worse than death.


On 17 August 2011 21:09, Bastian, Hilda (NIH/NLM/NCBI) [C] <[log in to unmask]> wrote:
G'day! This will be like everything else, I think: sometimes it will make a big difference, and sometimes it won't. Whether or not something is worth doing for every single review is a big question. There are many issues like this: what is the impact of doing systematic reviews without including and translating all non-English literature? Should all systematic reviews also always include non-randomised studies? Sometimes these things will change the results – and people can show even dramatic cases of that – but other times they will dramatically increase the resources necessary without any real improvement in the results. Every time the resources needed for what should be done in a systematic review increase, there is an opportunity cost too – in the time taken to do the review and, therefore, in the number of questions that will have a systematic review to answer them.

It's like a methodological arms race, really, isn't it? There is no end to what could be done. And yet, when methodological reviews are done, as with any question, the compelling cases derived from a few examples do not necessarily translate into overall benefits from doing it routinely. And the opportunity cost remains – the questions left unanswered because the task of finding a better answer than we have now keeps getting harder*.

Maybe considering whether something less could be done is also an important question, if we want to achieve the best possible outcomes for human health. Or at least we should require a good evidence-based look before deciding we need ever bigger, more expensive arms. (In communities that want good health information – but also want good schools, good roads, good justice/public safety, pollution control, safe food…)

Hilda

* Self-citation sorry – it was just quicker: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2943439/?tool=pubmed



From: "Djulbegovic, Benjamin" <[log in to unmask]<mailto:[log in to unmask]>>
Reply-To: "Djulbegovic, Benjamin" <[log in to unmask]<mailto:[log in to unmask]>>
Date: Wed, 17 Aug 2011 14:12:46 -0400
To: "[log in to unmask]<mailto:[log in to unmask]>" <[log in to unmask]<mailto:[log in to unmask]>>
Subject: Re: Something else does indeed need to be done

This is a poignant reminder of the fact that cognitive processes in science and medicine are similar to cognitive processes in other areas of human activity, and rely both on intuitive (system I) and deliberative inferential processes (system II), as has often been debated on this list… when faced with a deluge of data, people have to “satisfice”… what to select is at the heart of all our activities (as in this case, which pages/data to select…)
It is actually a wonder we ever agree on anything (but somehow we often do…)
Ben Djulbegovic
From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Tom Jefferson
Sent: Wednesday, August 17, 2011 8:07 AM
To: [log in to unmask]
Subject: Re: Something else does indeed need to be done

Amy, you did not miss anything. Most researchers think they can do reviews based on publications. So did I. No more. The published Tamiflu iceberg is around 40% of the randomised data, and what is published is a spun, extreme synthesis of the source clinical study report. It took us 2 years to assemble a reasonably incomplete reconstruction of the oseltamivir trial programmes.

Tom

Sent from my iPhone

On 17 Aug 2011, at 13:34, "Dr. Amy Price" <[log in to unmask]> wrote:
“The average length of an oseltamivir/Tamiflu trial's clinical study report is 8,500 pages, divided into 4 to 5 modules. We also realised that looking at single trials can be highly misleading and that, when reviewing and synthesising data, we should look at a whole trial programme (i.e. the prophylaxis indication), not single trials. This point was made about a year ago by John Ioannidis and our experience supports it.”


I understand the concept of making data available upon request, but even this is met with "the data is proprietary", etc. The thinking seems to be that if a product was developed without public money there is no accountability or responsibility to share unpublished data, even though the product will be purchased by government and other publicly funded entities. I am not sure why I missed the blindingly obvious: 8,500 pages of data crunched into an acceptable journal-length submission means something is going to be excluded... I would be interested in hearing more on these points as well, especially in regard to ways forward...

Amy

Amy Price
http://empower2go.org
Building Brain Potential





From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Tom Jefferson
Sent: 17 August 2011 04:53 AM
To: [log in to unmask]
Subject: Re: Something else does indeed need to be done

The three points made by Ahmed

Dear Ahmed and all, this is an important and intelligent discussion. I would like to sound a word of caution, based on our current experience of updating our Cochrane review on oseltamivir for influenza using only regulatory data (we did not even look at any published items).

Your three points are ethically and theoretically strong, but in points 2 and 3 you call for publication and availability (presumably to a journal or to Johnny Public) of all primary data relating to a trial.

The average length of an oseltamivir/Tamiflu trial's clinical study report is 8,500 pages, divided into 4 to 5 modules. We also realised that looking at single trials can be highly misleading and that, when reviewing and synthesising data, we should look at a whole trial programme (i.e. the prophylaxis indication), not single trials. This point was made about a year ago by John Ioannidis and our experience supports it.

We are reviewing these tens of thousands of pages - something we know at least one of the regulators did not do at the time - but so far we have found no shortcut that would let us (say) review 300 "key" pages of the 8,500 total.

Can you explain to me which journal could do the same?
If they do not have the resources to review "all data" even of single trials, should they ask for the clinical study reports at all?
Is there not a risk of data mining or cherry-picking, as industry points out?


I'd be fascinated by a discussion on these issues.

Tom.



On 16 August 2011 23:35, Dr. Amy Price <[log in to unmask]> wrote:
Dear Ahmed,

This is a thoughtful and important list. I am still new to this field, but I increasingly recognise that implementing change is not a matter of waiting for a more important, experienced, or better-placed person to read an idea and implement it; rather, change comes from each of us acting as a united front. I am happy to be a small part of an answer; please count me in as an agent for positive change.

Amy

From: Evidence based health (EBH) [mailto:[log in to unmask]] On Behalf Of Ahmed Abou-Setta
Sent: 16 August 2011 01:39 PM

To: [log in to unmask]
Subject: Re: Something else does indeed need to be done

In my opinion, what needs to be done is:


1) All prospective trials on humans must be registered in a recognized trial register.

2) All registered trials must be published in full, as per the published study protocol.

3) Upon submission to, or acceptance by, a journal, all primary data should also be made available for public scrutiny, not just the summary trial results.

Accomplishing this will need a unified stance by members of the scientific community to make sure that these steps are taken by all researchers, including and especially those with conflicts of interest. But no law is effective without enforcement, so penalties must be handed down to researchers and organizations that fail to follow these basic rules.

This may sound feasible, or it may not, but even the first step has proven difficult to implement and has been abused by many in the scientific community. The International Committee of Medical Journal Editors made it a requirement to register trials before the enrollment of the first patient. Even so, several studies have shown that researchers have registered trials after the start of recruitment, or even after the completion of the trial, in order to get a registry number. Even those who register a priori have sometimes been found to switch outcomes, or not to report certain outcomes (i.e. selective outcome reporting). Without any teeth, these rules are just ink on paper. What we need is to make sure people comply with accepted standards, just as we do with plagiarism and other forms of deception.

Ahmed




--
Dr Tom Jefferson
Scientific Editor PLoS ONE
Reviewer, Cochrane Acute Respiratory Infections Group
tel 0039 3292025051


