On 24 August 2017 at 19:14, NATHAN BRUNETTI <[log in to unmask]> wrote:
> Hi David
>
> Thanks for taking a look into this and for the explanation of what is
> happening. Just to make sure I understand, you wrote that "the
> ClumpFind.Tlow parameter causes the low-valued edges of the clump to be
> excluded from the size calculation." Are you referring to the pixels below
> the Tlow contour?
Yes.
> And this changes what is accepted/rejected because the
> size calculated as the RMS deviation from the centroid is reduced because it
> does not include all pixels far from the center that are at very low data
> values (below Tlow)?
That's right.
> I'm wondering if your suggestion of setting FWHMBeam=0 would actually be
> enough for what I need. I could use minpix (as calculated in ClumpFind)
> along with the FWHM set to zero so that the size criterion is ignored but
> clumps are still rejected if they are too small in their total number of
> pixels. I've tried this on the test I sent and it does not reject the
> source, and I've tried it on a few of my maps with real mm sources and it
> doesn't appear to do anything nasty. Nothing super narrow and long is picked
> up as a clump (the simplest thing I expected to go wrong). Since I have only
> a handful of fields, each with less than ~20 sources, it is pretty easy for
> me to carefully inspect the clump finding results. Am I missing anything
> that would make this an unsafe approach?
Not that I can think of. The danger of long, thin sources is the only
issue that comes to mind.
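For anyone following the thread, the approach being discussed would look something like the following CUPID configuration fragment (the FwhmBeam, MinPix and Tlow parameters are the ones mentioned above; the particular values are only illustrative, not recommendations):

```
# Sketch of a ClumpFind configuration for "size test off, pixel-count test on".
ClumpFind.FwhmBeam = 0     # beam FWHM of zero disables the beam-based size rejection
ClumpFind.MinPix   = 16    # still reject clumps containing too few pixels (illustrative value)
ClumpFind.Tlow     = 2*RMS # lowest contour level (illustrative value)
```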
> On the other hand, applying the same threshold to the beam sounds like a
> good idea. Without doing so it seems like the beam and the clumps are being
> treated differently. Having the size check take that truncation into account
> sounds analogous to the way the minpix value is calculated with the log(
> minhgt/thresh ) factor applied to it in cupiddefminpix.c at line 102. This
> would probably make the completeness testing I'm doing work and might help
> with some of the real source finding I've been doing where fairly obvious
> features were being thrown out because of the same size rejection criteria.
> And as a last resort I could fall back to setting the beam to zero and using
> the minpix parameter. This has been very informative.
I've been thinking further about this. There are some extra problems,
such as what to do about the deconvolution controlled by the DECONV
parameter - should it use the original or the corrected beam size? On
the face of it, the original size should be used, since that is what
determines the degree of smoothing produced by the beam. But if the
size rejection test is then based on the corrected (smaller) beam
size, it becomes possible for deconvolved clumps with negative sizes
to make it into the final catalogue. Which is not nice.
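The negative-size problem can be sketched numerically. Assuming (as a simplification, not CUPID's actual code) that the observed clump size is the intrinsic size and the beam added in quadrature, a clump can be large enough to pass a size test against the truncation-corrected beam, yet still be smaller than the original beam used for deconvolution - giving a negative squared deconvolved size. All values below are hypothetical:

```python
import math

def deconvolved_size_sq(observed, beam):
    """Squared deconvolved size under quadrature smoothing; negative if observed < beam."""
    return observed**2 - beam**2

original_beam = 4.0    # hypothetical beam FWHM, in pixels
corrected_beam = 2.5   # hypothetical beam size after truncation at Tlow
observed = 3.0         # hypothetical measured clump size

# The clump passes a size test based on the corrected beam...
assert observed > corrected_beam
# ...but deconvolving with the original beam yields a negative squared size,
# so no real deconvolved size exists for this clump.
assert deconvolved_size_sq(observed, original_beam) < 0
```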
Because of this and other questions, and because you say you can make
progress by simply setting fwhmbeam to zero, I'm inclined to leave
things as they are for the moment.
David
----
Starlink User Support list
For list configuration, including subscribing to and unsubscribing from the list, see
https://www.jiscmail.ac.uk/cgi-bin/webadmin?A0=STARLINK