Sorry for the delayed response -
The situation of a 'normal' drug-lead molecule, with no restriction of
solvent-channel access, no other hindrance to mobility, and rapid on-rates
and/or a low Kd maintaining the concentration gradient, might be
considered almost optimal. But let us assume that it is indeed typical, and
put it in perspective: one of the remarks made previously was that even
seconds can suffice (for ions). Say 5 sec. The time factor between that and
the 'typical' 30 min soak is then 360, while the factor up to 10 hrs (movie
time) is only 20; on that ratio scale, 10 hrs is an 18x more typical soak
time than 5 seconds ;-).
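For the record, the joke is nothing but ratios of soak times converted to
seconds; a minimal sketch:

```python
# Soak-time ratios, everything in seconds.
t_ion = 5                 # 5 sec flash soak
t_typical = 30 * 60       # the 'typical' 30 min soak
t_movie = 10 * 3600       # 10 hrs (movie time)

factor_short = t_typical / t_ion      # 1800 / 5  = 360
factor_long = t_movie / t_typical     # 36000 / 1800 = 20

# On a ratio (log) scale, 10 hrs sits 360/20 = 18x closer to the
# 'typical' soak than the 5 sec flash soak does.
print(factor_short, factor_long, factor_short / factor_long)
```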
But seriously now, why would I beat the dead 5 sec horse again? Because of
the cautionary tales of failed 'typical' 5-sec ligand soaks, where beating
proteolysis by 'flash-soaking' was apparently the motivation to ignore the
prior odds:
http://www.ruppweb.org/cvs/br/rupp_2001_NSB_questions_BotA.pdf
http://www.nature.com/nsmb/journal/v16/n7/full/nsmb0709-795.html
For 8 years this model stayed in the literature, frequently cited and
presumably used...
Its little small-molecule friend did not live as long:
http://pubs.acs.org/doi/pdf/10.1021/ja025109g
While advertising once more the perils of too-short soaking and the
subsequent pressure for optimistic interpretation, I think that Dale's
assessment of faster diffusion vs. slower binding in the lysozyme-methylene
blue case is correct.
Maybe growing a clear crystal first in a counter-diffusion tube so that it
fills the entire tube, then sticking it into the blue dye and documenting
the dye diffusion in solution vs. in the crystal, might work.
Could be a summer student project...
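Before the student starts, a back-of-the-envelope estimate of what to
expect, using the characteristic 1-D diffusion time t ~ L^2 / (2D). The
diffusivities below are assumptions for illustration only (a small dye in
free solution, and a guessed ~100x retardation inside the crystal from
obstruction and binding), not measured values:

```python
def diffusion_time(length_m, D):
    """Characteristic 1-D diffusion time t ~ L^2 / (2 D), in seconds."""
    return length_m ** 2 / (2 * D)

L = 1e-3            # 1 mm of tube / crystal for the dye front to traverse
D_solution = 5e-10  # assumed dye diffusivity in free solution, m^2/s
D_crystal = 5e-12   # assumed ~100x slower inside the crystal (guess)

t_sol = diffusion_time(L, D_solution)   # ~1000 s, i.e. minutes
t_xtal = diffusion_time(L, D_crystal)   # ~1e5 s, i.e. about a day
print(f"solution: {t_sol / 60:.0f} min, crystal: {t_xtal / 3600:.0f} h")
```

Even with these rough numbers, the solution front should move on the
minutes scale while the in-crystal front takes many hours, which is
exactly the contrast worth documenting.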
Soak boldly and stay off the twilight list,
BR