Dan,
Yes, but it also tempts you to make an error.
If you have people aged 15 to 20 in your sample, and I have people
aged 15 to 60 in my sample, and the effect of age is the same in both
samples (and it's linear, so the effect doesn't change across age),
then my standardized effect will be much bigger than your standardized
effect, even though the effects are the same. The unstandardized
effects will be the same though.
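To make that concrete, here's a quick simulation sketch in Python (numpy only; the slope of 2, the noise level, and the sample sizes are made-up numbers, just to illustrate the point):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit(age):
    # same data-generating process in every sample: y = 2*age + noise
    y = 2.0 * age + rng.normal(scale=20.0, size=age.size)
    b = np.polyfit(age, y, 1)[0]        # unstandardized slope
    beta = b * age.std() / y.std()      # standardized slope
    return b, beta

narrow = rng.uniform(15, 20, 10_000)    # "your" sample: ages 15-20
wide = rng.uniform(15, 60, 10_000)      # "my" sample: ages 15-60

b1, beta1 = fit(narrow)
b2, beta2 = fit(wide)
print(f"narrow range: b = {b1:.2f}, beta = {beta1:.2f}")
print(f"wide range:   b = {b2:.2f}, beta = {beta2:.2f}")
```

Both unstandardized slopes come out near 2, but the standardized slope is much bigger in the wide-range sample, purely because the spread of age is bigger there.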
And to complicate things, suppose weight also had the same effect in
both samples, but your sample had a much greater range in weight than
mine. Then the standardized effects would swap over - I'd say that age
was much more important, you'd say that weight was much more
important - and even though our effects were exactly the same we'd have
a big fight about our differing conclusions.
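The swap can be demonstrated the same way, with a two-predictor regression (again a sketch with made-up slopes and ranges, not real data):

```python
import numpy as np

rng = np.random.default_rng(1)

def standardized_betas(age, weight):
    # same data-generating process in both samples:
    # y = 2*age + 1*weight + noise
    y = 2.0 * age + 1.0 * weight + rng.normal(scale=10.0, size=age.size)
    X = np.column_stack([age, weight, np.ones_like(age)])
    b_age, b_weight, _ = np.linalg.lstsq(X, y, rcond=None)[0]
    return (b_age * age.std() / y.std(),
            b_weight * weight.std() / y.std())

n = 10_000
# "your" sample: narrow age range, wide weight range
yours = standardized_betas(rng.uniform(15, 20, n), rng.uniform(40, 120, n))
# "my" sample: wide age range, narrow weight range
mine = standardized_betas(rng.uniform(15, 60, n), rng.uniform(70, 90, n))
print(f"your sample: beta_age = {yours[0]:.2f}, beta_weight = {yours[1]:.2f}")
print(f"my sample:   beta_age = {mine[0]:.2f}, beta_weight = {mine[1]:.2f}")
```

The unstandardized slopes are the same in both samples, but in your sample the standardized beta for weight beats the one for age, and in mine the ranking reverses - driven entirely by which variable happens to have the wider range where it was sampled.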
That's the problem with standardizing in order to compare variables.
You can get around this by either making sure you have a random sample
of the population, or by using sampling probability weights to match
your sample to the population of interest (or better, both). Unless
you're using some pretty big national datasets, you're unlikely to
have that - I've only been involved in collecting data for one study
where we did.
J
On 31 January 2010 04:56, Dan <[log in to unmask]> wrote:
> Thanks Thom
>
> It seems to me it should be a basic element of the output, as in OLS
> regression with the standardised beta values. Surely comparing the effect
> of different variables is fundamental for most research.
>
> Dan
>
--
Jeremy Miles
Psychology Research Methods Wiki: www.researchmethodsinpsychology.com