Thanks, Anderson! One quick follow-up question: is there a way to convert/normalize the input data so that the betas are in units of Cohen's d (or some other standardized measure)? If not, I'll have to think more about how to translate between the standardized effect size and the actual effect size (in original units).
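For a simple one-sample design, the translation in question is just delta = d * sd, where sd is the per-voxel sample standard deviation of the input data. A minimal sketch (all names here are illustrative, not part of PALM):

```python
# Hedged sketch: convert a standardized effect size (Cohen's d) into the
# raw-unit threshold delta that "-effectdelta" expects, assuming a
# one-sample design where d = delta / sd.
import numpy as np

def delta_from_cohens_d(d, data):
    """Return delta = d * sd for each voxel.

    data : array of shape (n_subjects, n_voxels), in original units
    """
    sd = data.std(axis=0, ddof=1)  # sample standard deviation per voxel
    return d * sd

# toy example: 10 subjects, 3 voxels
rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=2.0, size=(10, 3))
delta = delta_from_cohens_d(0.125, data)  # one delta per voxel
```

For other designs the relevant standard deviation differs (e.g., for a two-group contrast it would be a pooled estimate), so the appropriate sd depends on the contrast being tested.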

Cheers,
David



> On Jul 31, 2016, at 3:29 AM, Anderson M. Winkler <[log in to unmask]> wrote:
> 
> Hi David,
> 
> Please see a custom version of PALM with this feature at the link:
> https://dl.dropboxusercontent.com/u/2785709/outbox/palm/palm-alpha98a.tar.gz <https://dl.dropboxusercontent.com/u/2785709/outbox/palm/palm-alpha98a.tar.gz>
> 
> It has the option "-effectdelta <filename>", which allows loading a file containing the "d" for testing hypotheses that C'*beta > d, instead of the usual C'*beta > 0.
> 
> The input file for this option is a Matlab .mat file containing a variable named "effectdelta". This variable should have 3 dimensions: as many rows as inputs (same as the number of -i), as many columns as designs (same as the number of -d) and as many levels as the largest number of t contrasts for any design (i.e., largest number of contrasts across the -t entered). To ensure Octave compatibility, when creating this file, use the option '-v6' in either Matlab or Octave. Note that "d" isn't a standardised effect size: it is the actual effect size, in the original units.
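A file of this shape can be created without Matlab: scipy's `savemat` writes the same uncompressed MAT-file version 5 format that Matlab's `-v6` flag produces. A sketch assuming one input (`-i`), one design (`-d`), and two t contrasts; the delta value of 0.5 is an arbitrary placeholder:

```python
# Hedged sketch: build the 3-D "effectdelta" array described above and
# save it as a v6-compatible .mat file. Dimensions (per the email):
#   rows   = number of inputs  (-i options)
#   cols   = number of designs (-d options)
#   levels = largest number of t contrasts across all -t files
import numpy as np
from scipy.io import savemat

n_inputs, n_designs, n_contrasts = 1, 1, 2  # adjust to your analysis

# e.g., test C'*beta > 0.5, in the original units of the data
effectdelta = np.full((n_inputs, n_designs, n_contrasts), 0.5)

# scipy writes MAT-file version 5 (uncompressed), matching '-v6'
savemat('effectdelta.mat', {'effectdelta': effectdelta})
```

The file would then be passed to PALM as `-effectdelta effectdelta.mat`.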
> 
> As implemented, it works for t and v statistics, but not for F, G, r, or R^2. It doesn't work either for MAN(C)OVA statistics, but it should work for NPC. It should also work with the tail approximation.
> 
> I am reluctant to make this change definitive, as it adds some extra computation at every permutation that isn't otherwise needed. For now it isn't available in the version on the website or on GitHub.
> 
> The complete description of this approach was presented in a poster at OHBM 2013 (Seattle) by Torben Lund and Kim Mouridsen (Aarhus University, Denmark). I can't find the poster online, but you can probably ask Torben via email.
> 
> Hope it helps.
> 
> All the best,
> 
> Anderson
> 
> 
> On 8 July 2016 at 14:38, David V. Smith <[log in to unmask] <mailto:[log in to unmask]>> wrote:
> Thanks, Anderson! I don't mind waiting. 
> 
> Cheers,
> David
> 
>> On Jul 8, 2016, at 2:35 AM, Anderson M. Winkler <[log in to unmask] <mailto:[log in to unmask]>> wrote:
>> 
>> Hi David,
>> 
>> Ok. If you don't mind waiting, I'll try to have this included at some point by the end of the month. Will update here in the thread then.
>> 
>> All the best,
>> 
>> Anderson
>> 
>> 
>> On 7 July 2016 at 18:29, David V. Smith <[log in to unmask] <mailto:[log in to unmask]>> wrote:
>> Hi Anderson,
>> 
>> Thanks a lot -- that would be great! No rush at all, and I appreciate you taking the time to add it as a feature for PALM.
>> 
>> Best wishes,
>> David
>> 
>>> On Jul 7, 2016, at 2:57 AM, Anderson M. Winkler <[log in to unmask] <mailto:[log in to unmask]>> wrote:
>>> 
>>> Hi David,
>>> 
>>> Not that I'm aware of. If it's something you really need, it can be added to PALM quite straightforwardly, such that instead of testing the null that the effect is zero, one would test the null that it is smaller than a certain predefined threshold. If it's something that can be really useful for your analysis, let me know and it can be added (please give me a few weeks, I won't have a chance to look into it now).
>>> 
>>> All the best,
>>> 
>>> Anderson
>>> 
>>> 
>>> On 6 July 2016 at 15:38, David V. Smith <[log in to unmask] <mailto:[log in to unmask]>> wrote:
>>> Hi Anderson,
>>> 
>>> Thanks for the feedback. All of those points make sense. I was thinking about this in the context of the non-ironic appendices of Friston's 2012 Neuroimage commentary. If we go with that and assume that d = 0.125 is "trivial", would there be a straightforward way to conduct a protected inference with any of the FSL tools (e.g., randomise, flameo, palm)? 
>>> 
>>> Cheers,
>>> David
>>> 
>>> 
>>>> On Jul 3, 2016, at 4:26 AM, Anderson M. Winkler <[log in to unmask] <mailto:[log in to unmask]>> wrote:
>>>> 
>>>> Hi David,
>>>> 
>>>> Here are some thoughts:
>>>> 
>>>> - A half-sample doesn't have half-power. How much power is lost depends on the effect size and on the size of the full sample.
>>>> - A conjunction of two half-samples is less powerful than the full sample.
>>>> - Two half-samples aren't completely independent if the strategy used to recruit subjects was the same for both halves and subjects come from the same population.
>>>> - Meta-analyses generally have a joint null hypothesis that an aggregate effect across the separate (partial) tests is zero. This isn't the same as searching for replication across samples -- the latter is a conjunction.
>>>> - There are multiple ways in which a sample could be divided. Which to take? Repeating many times and averaging would seem an option, but that also corresponds to a type of meta-analysis.
>>>> 
>>>> My suggestion is to use the full sample. Trivial effects are indeed a concern, and to get around it one must know what a trivial effect would be, which implies knowing something about the effect size, or about the relevance of that effect. The p-value is only one piece of a larger puzzle.
>>>> 
>>>> Cheers,
>>>> 
>>>> Anderson
>>>> 
>>>> 
>>>> On 28 June 2016 at 14:34, David V. Smith <[log in to unmask] <mailto:[log in to unmask]>> wrote:
>>>> Hello,
>>>> 
>>>> I have a stats/fMRI question that isn't specific to FSL. Let's say I'm unsure about the population effect size for the contrast I'm examining, but I want to avoid reporting effects that are trivially small. If I download 200 subjects worth of data, would it be better to a) focus on results derived from the full sample of 200 subjects; or b) split the sample of 200 into two or more groups and focus on results that replicate across subsamples? Or, is there another approach to consider here?
>>>> 
>>>> Thanks!
>>>> David
>>>> 
>>> 
>>> 
>> 
>> 
> 
>