Julie,
this is a common problem which occurs with iron-rich silicates such as
Fe-garnets and olivine. I have personally observed this with different types
of microprobes, no matter which correction procedure I used. I did some
off-line calculations to trace the reason, and I also think that in most
cases improper MACs (mass absorption coefficients) are responsible for the
high totals. There is a good document on the web which addresses this
problem:
http://www.seismo.berkeley.edu/geology/labs/epma/problems.htm
I don't know of an Fe-rich garnet which is really suitable as a calibration
standard. Personally, I always try to avoid using minerals of non-endmember
composition as primary (calibration) standards. This is mostly because it is
easy to introduce serious errors into the analysis of the unknown which are
due solely to uncertainties in the exact composition, and to inhomogeneity,
of such standard material. I think it is almost always a better choice to
use well-characterised endmember silicates, which should also have high
contents of the calibration element.
My suggestions (possibly biased) based on personal experience are:
-use an endmember Fe-silicate as the calibration standard for iron (I use
Smithsonian fayalite with success for silicate analysis; do not use
metallic iron).
-check the set of mass absorption coefficients your correction software
uses and, if necessary, modify them with the help of your local microprobe
guru. Follow the suggestions given in John Donovan's web document as a
starting point. That document discusses the problem using the example of
the MAC for MgKa with Fe as absorber, which is not directly relevant to
your specific problem. However, there are similar inconsistencies between
different sets of MACs for other X-ray absorbers with FeKa as the emitter.
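To get a feel for how much a disagreement between MAC tables can move an
analysis, here is a back-of-the-envelope sketch. It uses a simple
exponential attenuation from an assumed mean X-ray generation depth, not a
real phi-rho-z or ZAF correction, and all numbers (the two MAC values, the
take-off angle, the mean depth) are hypothetical, chosen only to show the
order of magnitude:

```python
import math

def absorption_factor(mac, takeoff_deg, mean_depth_ug_cm2):
    """Crude emitted-intensity attenuation factor: exponential absorption
    along the exit path from a single mean generation depth (a stand-in
    for the full depth-distribution integral in a real correction)."""
    # chi = (mu/rho) * csc(psi): effective MAC along the exit path, cm^2/g
    chi = mac / math.sin(math.radians(takeoff_deg))
    # convert mean depth from ug/cm^2 to g/cm^2 before applying Beer-Lambert
    return math.exp(-chi * mean_depth_ug_cm2 * 1e-6)

# Two hypothetical published MACs for the same emitter/absorber pair,
# disagreeing by 10% -- a realistic spread between MAC tables.
mac_a, mac_b = 3000.0, 3300.0          # cm^2/g
f_a = absorption_factor(mac_a, takeoff_deg=40.0, mean_depth_ug_cm2=60.0)
f_b = absorption_factor(mac_b, takeoff_deg=40.0, mean_depth_ug_cm2=60.0)
print(f"relative intensity change: {(f_b - f_a) / f_a * 100:.1f}%")
```

With these numbers the 10% MAC disagreement shifts the predicted emitted
intensity by a few percent, which is the same order as the excess in the
analytical totals; the sensitivity grows with heavier absorption and lower
take-off angles.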
Hope this helps.
-Peter
--
Peter Appel, [log in to unmask]