That's a useful observation, Boon. Thank you. I looked through the code
changes yesterday and nothing jumped out at me saying "here's the bug".
It may actually be a bug not in the MFITTREND code itself, but in a
supporting subroutine. A bell is ringing faintly about a fix to do with
the rejection threshold. Armed with this extra information, I should be
able to pinpoint the cause.
Looking at the trendview plots, I noticed that the anomalous fits occur
where all the data are masked, i.e. where the rejection is more severe,
which explains why it preferentially affects the highest and lowest
signals: they're essentially regarded as outliers. If that's what
happened, it ought to have filled the baseline fits with bad values.
Using method=single should be better when you have many spectral pixels;
the global method was added for low-S/N data. In Jamie's data, the
single method looks fine and rejects around noise spikes as well as
spectral lines.
Sorry if an unreported bug is going to cause lots of extra work. How
much manual interaction was there, and how much can be repeated using
scripts or pipelines? Having discovered the best approach and recorded
the data-processing history, it should surely be much easier to repeat
the reduction than to start from scratch.
Malcolm