Hi
I am currently working on a project which involves the feasibility of
significance testing the difference between two correlations.
There are a number of methods/approaches I have used, and I would welcome
any comments or suggestions as to which is better to use and in which
context, as well as any other approaches that could be looked into.
The approaches I have used are listed below (rough code sketches of each follow after the list):
1) Run a linear regression on the standardised data of the two measures I am
correlating and use the standard error of the estimate as my standard error
of the correlation.
Treating the correlation as normally distributed, I then create a
confidence interval for each correlation, compare the two confidence intervals
and see how much they overlap.
Pearson's correlation is not normally distributed when the values are near
to 1, but it is more nearly so lower down the scale, so the normal approximation
seems reasonable for our data. The data that we work with in Market Research
rarely has correlations over 0.5 when looking at respondent-level correlations.
(*THIS IS MORE CONSERVATIVE - NOT TAKING INTO CONSIDERATION THE JOINT
DISTRIBUTION OF THE CORRELATIONS)
2) Use bootstrapping to compute the confidence intervals of the correlations
(this came up with very similar answers to the above method).
3) Use bootstrapping to look at the distribution of Correlation 1 -
Correlation 2, i.e. taking the joint distribution of the two correlations into
account. This was to see HOW conservative approach 1 is; approach 1 is the
easier one to run on many studies.
4) Use Fisher's z' transformation and, using the fact that z' has a known
standard error, treat it as a normal variable. This was something I read
about on the internet but had not come across previously in my 28-year life!!
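For concreteness, here is a rough Python sketch of approach 1 (made-up data only; I am reading the "standard error of the estimate" as the standard error of the regression slope on standardised scores, which for standardised data equals the correlation):

    # Sketch of approach 1: regression on standardised data, slope SE as SE of r.
    # x and y stand in for respondent-level scores on the two measures (illustrative data).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = rng.normal(size=200)
    y = 0.4 * x + rng.normal(size=200)            # modest correlation, as in our data

    # Standardise both measures; the regression slope then equals Pearson's r.
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    fit = stats.linregress(zx, zy)

    r, se = fit.slope, fit.stderr                 # slope ~ r, its SE used as SE of r
    ci = (r - 1.96 * se, r + 1.96 * se)           # normal-theory 95% CI for r
    print(f"r = {r:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
    # Repeat for the second correlation and check whether the two intervals overlap.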
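A minimal sketch of approach 2, again on made-up respondent-level data, resampling respondents with replacement and taking a percentile interval:

    # Sketch of approach 2: percentile bootstrap CI for a single correlation.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.normal(size=200)
    y = 0.4 * x + rng.normal(size=200)

    B = 2000
    n = len(x)
    boot_r = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)          # resample respondents with replacement
        boot_r[b] = np.corrcoef(x[idx], y[idx])[0, 1]

    lo, hi = np.percentile(boot_r, [2.5, 97.5])   # 95% percentile interval
    print(f"bootstrap 95% CI = ({lo:.3f}, {hi:.3f})")
    # Done separately for each of the two correlations, then the intervals are compared.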
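A sketch of approach 3, assuming the two correlations are computed on the same respondents (x against y1 and x against y2), so that resampling respondents once per replicate carries the dependence between the two correlations:

    # Sketch of approach 3: bootstrap the distribution of Correlation 1 - Correlation 2.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 200
    x = rng.normal(size=n)
    y1 = 0.4 * x + rng.normal(size=n)             # illustrative measure 1
    y2 = 0.3 * x + rng.normal(size=n)             # illustrative measure 2

    B = 2000
    diff = np.empty(B)
    for b in range(B):
        idx = rng.integers(0, n, size=n)          # one resample used for both correlations
        r1 = np.corrcoef(x[idx], y1[idx])[0, 1]
        r2 = np.corrcoef(x[idx], y2[idx])[0, 1]
        diff[b] = r1 - r2

    lo, hi = np.percentile(diff, [2.5, 97.5])
    print(f"r1 - r2: 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
    # If this interval excludes 0, the difference is significant at roughly the 5% level.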
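And a sketch of approach 4, for two independent correlations, with illustrative values of r and n:

    # Sketch of approach 4: Fisher's z' transformation for two INDEPENDENT correlations.
    # z' = arctanh(r) has standard error 1/sqrt(n - 3), so the difference of the two
    # z' values can be treated as a normal variable.
    import numpy as np
    from scipy import stats

    r1, n1 = 0.45, 300                            # illustrative numbers only
    r2, n2 = 0.30, 250

    z1, z2 = np.arctanh(r1), np.arctanh(r2)       # Fisher's z' transform
    se_diff = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z_stat = (z1 - z2) / se_diff
    p_value = 2 * stats.norm.sf(abs(z_stat))      # two-sided p-value

    print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
    # For correlations computed on the same respondents this simple SE does not apply;
    # the bootstrap in approach 3 handles that case instead.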
Again, I would be interested to read about anyone else's experience and/or
comments on this, and I welcome any other approaches that you think would be
worth looking into.
Many thanks
Natalie
Analytical Specialist
Millward Brown UK Ltd
Warwick