Laurel
 
This is not a new problem, and unfortunately it is one that is increasing.
See http://archinte.jamanetwork.com/article.aspx?articleid=2109855, a review of FDA studies and the subsequent misreporting (or lack of reporting) of their findings.
There is also an editorial in the same issue.
 
Sanjay Gupta has a related commentary on the article:
http://www.medpagetoday.com/PublicHealthPolicy/ClinicalTrials/49958
 
IMHO, you have a couple of responsibilities/actions:
 
If you find discrepancies that you can specifically identify, the authors should be contacted and asked whether your interpretation is correct.
You can then report the request(s) for clarification, any corrections, and the authors' response or non-response.
 
Yes, you are free to re-analyze the reported data, and if such re-analysis would impact the findings, you should report that contention along with the supporting evidence.
 
Any discrepancies you note should be part of, and inform, your discussion and conclusions sections.
 
I would suggest that once you have a draft pretty much in place, you get someone not involved in the data extraction/re-analysis to serve as a fairly independent editor, going through the draft and any data re-analysis to make certain you have not missed something yourselves.
 
With increasing pressures to publish and an increasing number of questionable journals, it is imperative to help each other understand what is real and what is not.
 
Bill
 
PS: There is a reason that sites such as Retraction Watch (http://retractionwatch.com/) exist, and it is sad that they are needed.
 
 

>>> Laurel Graham <[log in to unmask]> 2/12/2015 10:58 AM >>>


Hello,
I have a problem and need your collective advice. I am a member of a systematic review workgroup, and we are in the process of extracting data from the investigations that fulfilled our inclusion criteria. We are running into issues where the results in the tables do not match the results in the text, and where results are reported without including previous failures.

The expert extractors (experts in the field under study, plus one expert with extensive statistics knowledge but no experience doing systematic reviews) have encountered either sloppy reporting or outright lies in the reporting of data. One of the expert extractors has been recalculating results based on the stated methodology (which was reported but not actually followed by the study).


Is it OK if we report our recalculated outcomes, use them in the meta-analysis, and report the bias?


Thanks!


Laurel

 

AAPD

[log in to unmask]