The precision must be obtained either from multiple measurements that are representative of the measurements you propose to make, or, if the measurement consists of a count (say of photons), from counting statistics, or from a combination of the two.  This must be done either by prior calibration of the experimental setup (by, say, the manufacturer or by you), or in the course of making the measurements themselves.  Either way there will be an experimental estimate of the standard deviation of the quantity you are trying to measure, against which you can compare individual or averaged measurements for significance using P values, confidence intervals, etc.
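As a minimal sketch of the above (the calibration values and function names here are hypothetical, purely for illustration): estimate the SD from repeated calibration measurements, take the counting-statistics SD as sqrt(N) for a Poisson count, combine independent components in quadrature, and then judge a new measurement against the calibrated mean via a z-score.

```python
import math

def sample_sd(values):
    """Unbiased sample standard deviation of repeated measurements."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return math.sqrt(var)

def counting_sd(count):
    """Counting (Poisson) statistics: the SD of a count N is sqrt(N)."""
    return math.sqrt(count)

def combined_sd(sd_a, sd_b):
    """Combine two independent error components in quadrature."""
    return math.sqrt(sd_a ** 2 + sd_b ** 2)

# Hypothetical calibration: repeated measurements of the same quantity.
calib = [100.2, 99.8, 100.5, 99.9, 100.1]
mean = sum(calib) / len(calib)
sd = sample_sd(calib)

# A single new measurement, compared against the calibrated SD.
new_value = 101.0
z = (new_value - mean) / sd  # compare |z| with, e.g., 1.96 for 95% confidence
```

If the setup changes, the calibration list must be re-measured before the SD is reused, which is the point made below.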

Now of course there may be variances that are not being explored by the current setup, but if the setup is redefined it must be recalibrated so that the new estimates of the SDs are applicable to the new setup.  To answer the question from your email just in: if the experimental setup is changed in any significant way, the experimental precision is likely to change, and recalibration is likely to be required.

So I don't see that there's a question of wilfully choosing to ignore, or of not sampling, certain factors: if the experiment is properly calibrated to get the SD estimate, you can't ignore it.

-- Ian


On 13 March 2013 18:59, Ed Pozharski <[log in to unmask]> wrote:
Kay,

>  the latter is _not_ a systematic error; rather, you are sampling (once!) a statistical error component.

OK.  In other words, the potentially removable error is always
statistical error, whether it is sampled or not.

So is it fair to say that if there are some factors that I either do not
know about, willfully choose to ignore, or just cannot sample, then I am
underestimating the precision of the experiment?

Cheers,

Ed.


--
After much deep and profound brain things inside my head,
I have decided to thank you for bringing peace to our home.
                                    Julian, King of Lemurs