I would second the idea that Michail needs a confidence interval rather 
than a test, and needs to specify a parameter for which the confidence 
interval must be estimated. If Michail has access to the Stata 
statistical language, then the parameter could be either Somers' D of 
the outcome with respect to membership of Group A instead of Group B, 
or the Hodges-Lehmann median difference in outcome between an 
individual in Group A and an individual in Group B. Both parameters can 
be estimated, with the option of confidence intervals adjusted for 
clustered sampling (e.g. paired sampling), using the somersd add-on 
package, which Stata users can download either from my website or from 
the Statistical Software Components (SSC) website.
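
For example, here is a minimal sketch in Stata, assuming a dataset with 
one row per individual and illustrative variable names outcome, group 
(1 for Group A, 0 for Group B) and pairid (identifying the matched 
pair); the exact option names should be checked against the help files 
(help somersd, help cendif) after installation:

* One-off installation of the somersd package from SSC:
ssc install somersd

* Somers' D of outcome with respect to group membership, using the
* normalizing z-transform, with confidence limits clustered on pairid
* to allow for the dependence within pairs:
somersd group outcome, transf(z) cluster(pairid)

* Hodges-Lehmann median difference in outcome between Group A and
* Group B, again with pair-clustered confidence limits:
cendif outcome, by(group) cluster(pairid)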

The case of Somers' D is discussed at length in Newson (2006)[1], and 
the case of the Hodges-Lehmann median difference is discussed at length 
in Newson (2006)[2]. A more readable and less equation-intensive 
introduction to the parameters behind "non-parametric" statistics is 
Newson (2002)[3].
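
Briefly, if Y_A and Y_B denote the outcomes of an individual sampled 
from Group A and an individual sampled from Group B, then Somers' D of 
the outcome with respect to group membership is 
D = Pr(Y_A > Y_B) - Pr(Y_A < Y_B), and the Hodges-Lehmann median 
difference is the median of the differences Y_A - Y_B over such 
between-group pairs.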

I hope this helps.

Best wishes

Roger

References

[1] Newson R. Confidence intervals for rank statistics: Somers' D and 
extensions. The Stata Journal 2006; 6(3): 309-334. Download from
https://www.stata-journal.com/article.html?article=snp15_6

[2] Newson R. Confidence intervals for rank statistics: 
Percentile slopes, differences, and ratios. The Stata Journal 2006; 
6(4): 497-520. Download from
https://www.stata-journal.com/article.html?article=snp15_7

[3] Newson R. Parameters behind "nonparametric" statistics: Kendall's 
tau, Somers' D and median differences. The Stata Journal 2002; 2(1): 
45-64. Download from
https://www.stata-journal.com/article.html?article=st0007

Roger B Newson BSc MSc DPhil
RDS Advisor
NIHR Research Design Service London
Department of Primary Care and Public Health
Imperial College London
351 The Reynolds Building
St Dunstan's Road
London W6 8RP
United Kingdom
Phone number: +44(0)20 7594 2784
Email: [log in to unmask]
Website: http://www.rogernewsonresources.org.uk/
RDS Website: rdslondon.co.uk
Opinions expressed are those of the individual, not of the institution.

On 27/09/2018 11:09, Robert Newcombe wrote:
> An interesting problem! I think you really need something that isn't just a test but rather some way of characterising the difference in distributional form between the two samples, which also enables you to get an associated p-value. I guess that the most helpful approach here ISN'T the usual one for paired data that starts by taking paired differences, as in the paired t / Wilcoxon etc. I guess the best approach would be to calculate the K-S statistic, and to investigate its distribution under the null hypothesis that the two samples have the same distribution, but with the additional assumption that the two samples are not independent - perhaps with some measure of correlation as a nuisance parameter.
>
> Robert Newcombe
> Cardiff
>
>
> -----Original Message-----
> From: A UK-based worldwide e-mail broadcast system mailing list [mailto:[log in to unmask]] On Behalf Of Michail Tsagris
> Sent: 27 September 2018 10:59
> To: [log in to unmask]
> Subject: Comparing distributions from 2 dependent samples
>
> Hello.
>
> Does anyone know of a test for comparing 2 distributions from dependent samples?
> Kolmogorov-Smirnov and Anderson-Darling work with independent samples. Is there a modification for dependent samples?
> Any suggestion could prove useful.
>
> With kind regards,
> Michail
>


You may leave the list at any time by sending the command

SIGNOFF allstat

to [log in to unmask], leaving the subject line blank.