This looks like clever stuff. There's a piece about waiting lists at
http://www.cse.clrc.ac.uk/Publications/1335/nature.pdf
and I saw something in a comic the other day about a new larger study on
waiting lists showing the same sort of thing.
I can follow the idea that small things happen often, big things happen
rarely, and huge things happen once in a...... The normal distribution tells
us that much anyway, so it's hardly groundbreaking science. But as far as I
understand it, the point about big events under power-law scaling is that
they happen more often than a random (Gaussian) model would predict.
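That "fat tail" point can be made concrete with a toy comparison (my own illustrative sketch, not from the original discussion): the chance of an event bigger than x under a standard Gaussian versus under an assumed Pareto-style power-law tail P(X > x) = x^-2. The distributions and the exponent are arbitrary choices, just to show how differently the two tails shrink.

```python
import math

def gaussian_tail(x):
    # P(X > x) for a standard normal, via the complementary error function
    return 0.5 * math.erfc(x / math.sqrt(2))

def power_law_tail(x, alpha=2.0):
    # P(X > x) = x**-alpha for x >= 1 (an assumed Pareto tail)
    return x ** -alpha

# The Gaussian tail collapses super-exponentially; the power-law tail
# shrinks only polynomially, so "huge" events stay far more likely.
for x in (2, 5, 10):
    print(x, gaussian_tail(x), power_law_tail(x))
```

At x = 10 the power-law tail is larger than the Gaussian one by many orders of magnitude, which is the sense in which big events are "more common than random" here.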
Am I right in thinking that it's not just to do with the size of events but
their sequence, so that, for instance:
- if you took a series of values from a complex / SOC system, measured the
intervals [value(n+1) - value(n)], and log-plotted the count of each
interval, you would get a nice straight line showing power-law scaling.
- but would the same thing happen with the values themselves?
- and if you shuffled the readings, so that you kept the actual values but
produced a new set of intervals, what would happen?
- I've tried a couple of simulations using 5000 random numbers: counting the
intervals [value(n+1) - value(n)], I find that the simple linear plot
approximates a straight line. The log plot, on the other hand, is a big
curve.
- I've also done this using 5000 random numbers with a normal distribution
(there's an Excel function for both of these) and I get fairly similar
results.
- It would be nice to do this with a block of raw data from an SOC system,
but I haven't got 5000 readings to play with.
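For what it's worth, the experiment above can be sketched in a few lines of Python (my own assumed translation of the Excel setup: random.random() standing in for RAND(), and a crude 50-bin histogram standing in for the interval counts):

```python
import random
import collections

random.seed(0)

# 5000 uniform random numbers, as in the worksheet
values = [random.random() for _ in range(5000)]

# successive intervals value(n+1) - value(n)
intervals = [values[i + 1] - values[i] for i in range(len(values) - 1)]

def bin_counts(data, nbins=50):
    # crude histogram: count how many intervals fall in each of nbins bins
    lo, hi = min(data), max(data)
    width = (hi - lo) / nbins
    counts = collections.Counter(
        min(int((x - lo) / width), nbins - 1) for x in data
    )
    return [counts.get(i, 0) for i in range(nbins)]

counts = bin_counts(intervals)

# Shuffling keeps exactly the same set of values but scrambles their
# order, so the intervals are regenerated from a new sequence:
shuffled = values[:]
random.shuffle(shuffled)
shuffled_intervals = [shuffled[i + 1] - shuffled[i]
                      for i in range(len(shuffled) - 1)]
```

Plotting `counts` against bin centre (linearly and on log axes) reproduces the comparison described above; for uniform inputs the interval distribution is triangular around zero, which is why the log plot curves rather than falling on a power-law straight line.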
Can anyone take this further / correct my misunderstandings?
If anyone wants to see the worksheets they're welcome to.
Chris