Hi Mike,
1. If you want to truly estimate the chance level, you have to re-run the
hyper-parameter optimization inside each permutation (otherwise the null
hypothesis in your permutation test is different). To reduce the
computational cost, you can use k-fold cross-validation with k=5. You can
also run the permutations on a cluster.
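A minimal sketch of what I mean, assuming scikit-learn and a toy dataset (the model and grid are illustrative): wrapping the grid search inside permutation_test_score makes every permutation re-run the hyper-parameter optimization.

```python
# Sketch: nest the hyper-parameter search inside the permutation test,
# so each permuted label vector gets its own optimization.
# Dataset, model, and grid below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, permutation_test_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=100, n_features=20, random_state=0)

# The GridSearchCV object *is* the estimator handed to the permutation
# test, so C is re-optimized for every permutation.
search = GridSearchCV(SVC(kernel="linear"), {"C": [0.1, 1, 10]}, cv=3)

score, perm_scores, pvalue = permutation_test_score(
    search, X, y, cv=5, n_permutations=20, random_state=0
)
print(f"accuracy={score:.2f}, p={pvalue:.3f}")
```

With k=5 outer folds and a small grid this stays tractable; the permutations themselves are embarrassingly parallel, which is why a cluster helps.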
2. I would say this approach makes sense, but I am unsure how you can
draw conclusions from negative results. Since you would be using the
same data (although with different labels), you might also have a bias
towards positive results (especially if your scales are correlated).
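For concreteness, here is a rough sketch of the cross-model feature transfer you describe: for each scale, keep the regions whose absolute weight exceeds 10% of the largest weight, then evaluate that region set on the other scale. The sparse model (Lasso), data, and threshold are all illustrative assumptions, not your actual pipeline.

```python
# Sketch of the proposed dissociation check: train a sparse model per
# questionnaire scale, keep its top-weight regions (>10% of max |weight|),
# and test each scale on the *other* scale's region set.
# Data, model (Lasso), and threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 50))  # 80 subjects, 50 brain regions
scales = {"A": X[:, :5].sum(axis=1) + rng.standard_normal(80),
          "B": X[:, 5:10].sum(axis=1) + rng.standard_normal(80)}

# Top-weight regions per scale (>10% of the largest absolute weight).
top = {}
for name, y in scales.items():
    w = np.abs(Lasso(alpha=0.1).fit(X, y).coef_)
    top[name] = np.flatnonzero(w > 0.1 * w.max())

# Evaluate each scale on the other scale's predictive region set.
for name, y in scales.items():
    other = "B" if name == "A" else "A"
    r2 = cross_val_score(Lasso(alpha=0.1), X[:, top[other]], y, cv=5).mean()
    print(f"scale {name} on scale {other}'s regions: R2={r2:.2f}")
```

Note the caveat above still applies: because the same subjects are used throughout, a non-significant cross-model score is weaker evidence than an independent replication would be.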
HTH,
Best,
Jessica
On 19/04/2018 12:05, Mike Myers wrote:
> Dear Jessica
>
> Many thanks for your prompt and helpful answer.
>
> 1. It is quite heavy to do lots of permutations with hyperparameter optimization, as you know :) I was asking myself if it's legitimate to assess the optimal hyperparameter once in the model computation and then run the permutations only with this hyperparameter? This would massively shorten the computation time...
>
> 2. Yes, the sparse approach might be problematic in this case. But: let's assume I have 5 significant questionnaire models (assessed through correlation and MSE) with different region weights and ranks. I then go on and take from each model the best predicting brain areas (let's say >10% of the weights) and use this as a new feature set in the other models. In other words, for each model, I use the predictive brain set of the other model. If the results of these models are not significant, this would add evidence that the models are "dissociable" regarding their neural predictors. Would you agree?
>
> many thanks, mike