Dear All,
I am looking to compute the signal-to-noise ratio (SNR) of ECoG signals that
I pre-processed. I performed a Morlet wavelet time-frequency analysis on
the signals, then used the LogR method to rescale each epoch to its
baseline, so I obtain my signals in dB, if I'm correct.
Let's say I have two conditions in my epochs (A and B), and I'd like to
compute the SNR between the A epochs and the B epochs. There are multiple
formulations of the SNR, depending on whether we treat these signals as
power or as amplitude, and whether or not they are in dB.
Can you please help me figure out which formulation of the SNR is suited
for my case?
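To make the question concrete, here is a minimal Python/NumPy sketch of the formulations I'm hesitating between (the data and variable names here are mine, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(0, 2.0, 1000)  # condition A (signal + noise), placeholder data
b = rng.normal(0, 1.0, 1000)  # condition B (noise), placeholder data

# Power-based SNR: ratio of mean powers
snr_power = np.mean(a**2) / np.mean(b**2)

# Amplitude-based SNR: ratio of RMS amplitudes (square root of the power ratio)
snr_amplitude = np.sqrt(snr_power)

# Both expressed in dB; note the factor 10 for power vs 20 for amplitude,
# which makes the two dB values coincide
snr_power_db = 10 * np.log10(snr_power)
snr_amplitude_db = 20 * np.log10(snr_amplitude)
```

My confusion is which of these applies once the data have already been LogR-rescaled, i.e. when the epochs themselves are in dB.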
My goal is to create simulated data with a fixed SNR, i.e. to construct the
signal in condition A from the signal in condition B (acting as noise) and
the desired SNR. What range of SNRs should I test, knowing that I'd like
to go from little discrimination between A and B to good discrimination?
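For instance, one way to fix the SNR exactly would be the following sketch (this assumes a power-based dB definition of the SNR, which is part of my question; the signal template is an arbitrary placeholder):

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0, 1.0, 1000)  # condition B acting as noise
template = np.sin(2 * np.pi * np.arange(1000) / 100)  # arbitrary signal shape

target_snr_db = 3.0
target_snr_power = 10 ** (target_snr_db / 10)

# Scale the template so that mean signal power / mean noise power
# equals the target (power-based) SNR
scale = np.sqrt(target_snr_power * np.mean(noise**2) / np.mean(template**2))
signal = scale * template

# By construction this recovers the target value
achieved_snr_db = 10 * np.log10(np.mean(signal**2) / np.mean(noise**2))
```

But I'm not sure this is the right construction for data that are already baseline-rescaled in dB.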
What I've tested so far:
- use mest = std(B trials), computed either on the averaged trials or on the concatenated trials
- ratio = 0.5:0.5:10
- signal = signal in A epoch + rectangular window of amplitude ratio*mest
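Concretely, what I tested looks roughly like this (a Python/NumPy sketch rather than my actual code; the epoch shapes, the placeholder data, and the window placement are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials, n_times = 50, 500

# Condition B epochs act as noise (placeholder data for illustration)
b_epochs = rng.normal(0, 1.0, (n_trials, n_times))

# Noise level estimate: std over the concatenated B trials
mest = np.std(b_epochs)

# Tested ratios: 0.5:0.5:10
ratios = np.arange(0.5, 10.5, 0.5)

# Rectangular window marking where the "signal" is injected
window = np.zeros(n_times)
window[200:300] = 1.0  # window placement is arbitrary here

# Condition A: B-like noise plus the window scaled by ratio*mest
ratio = ratios[0]  # e.g. the 0.5 case that already gives significant differences
a_epochs = rng.normal(0, 1.0, (n_trials, n_times)) + ratio * mest * window
```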
I tried a few different variants, but I often end up with significant
differences between A and B even at the 0.5 ratio, which confuses me. I
also tried the same thing on z-scored data instead of LogR, and it gave
more meaningful results, which confuses me even more.
Can someone help or point to related references?
Thank you.
Best regards,
Jessica