Hi Pablo,
It seems some voxels have data for only a very few subjects; at the voxel that triggered the error, just 2 subjects had data. Since the bare minimum for the design is 2 EVs (for the three groups, and even then only after mean-centering the data), this leaves no margin for the analysis.
You can still take a step back and replace the full design (discussed in the other email a few days ago) with subtractions. But before spending time writing a script to subtract the 3 possible pairs of the 3 images for each subject, consider this simple experiment with your data: take just 1 image per subject (say, just timepoint 1), run a simple 3-group comparison, and use setup_masks to mask out the lesions, then see whether randomise works for you. If it does, and if the masks are the same for the three timepoints of each subject, then you can do the subtractions.
Otherwise, you may need to consider other strategies, e.g., completely dropping brain regions that are present in only a few subjects, using instead a single overall mask (option -m in randomise).
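In case it helps, here is a minimal sketch of what the subtraction step and the subsequent randomise call could look like. Everything here is hypothetical: the subject IDs (sub01, ...), the timepoint naming (tp1, tp2, tp3), the "overall_mask" image, and the design files are placeholders for your own, and the FSL commands are echoed rather than executed so they can be inspected first:

```shell
#!/bin/sh
# Hypothetical sketch: file names, subject list, mask and design files are
# made up; commands are echoed, not run, so this is not an actual pipeline.

# Build the fslmaths command that subtracts timepoint B from timepoint A
# for one subject (e.g. sub01_tp1 - sub01_tp2).
diff_cmd() {
  echo "fslmaths ${1}_${2} -sub ${1}_${3} ${1}_${2}_minus_${3}"
}

for subj in sub01 sub02 sub03; do        # replace with your subject list
  diff_cmd "$subj" tp1 tp2               # the 3 possible pairs
  diff_cmd "$subj" tp1 tp3
  diff_cmd "$subj" tp2 tp3
done

# After merging each pair's differences across subjects (fslmerge -t),
# compare the 3 groups on the differences, with a single overall mask (-m)
# in place of the voxelwise masks:
echo "randomise -i all_tp1_minus_tp2 -o tp1_vs_tp2 -d design.mat -t design.con -m overall_mask -T"
```

The design.mat/design.con above would encode the 3-group comparison (on the differences, the timepoint factor is already removed, so no subject EVs are needed).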
Another point to consider is that setup_masks shouldn't be used when it would cause dramatic differences in the degrees of freedom across the brain (that is, some voxels with tiny df while others have comfortable df). Although the test is non-parametric, and the uncorrected p-values do not depend on the df, the statistic still needs to behave similarly across tests (it has to be pivotal). With the df varying too much across space, this property is lost, thus affecting the correction for multiple testing (it can go either way: more conservative or invalid). It can also be harmful to TFCE.
All the best,
Anderson