On 11/02/13 14:37, James Wilson wrote:
> Here at the Damaro Project we’ve started fretting about the increased risk of
> anonymized medical (& social sciences) data being de-anonymized as more and more
> datasets become available and the opportunities for cross-searching increase.
> We’ll be preparing some RDM training for medical researchers shortly, and it
> would be good if we knew a bit more about the issues involved. Is this even
> something worth worrying about (I’m not very familiar with medical data)? Could
> any of you point us in the direction of any advice?
I think you are right to be concerned, although I would also hope that
researchers in the relevant fields are even better informed about the risks.
Indeed, I sometimes feel that researchers have been excessively cautious about
disclosure in the past.
There's no doubt that growing volumes of data, combined with the increasing
processing power available to mine them, lead to greater disclosure
risks. It's one thing to be aware of the risks - it's another to decide how to
manage them. Refusing to disclose *any* data except under very carefully
controlled circumstances is one approach, and it's probably valid for data
where the reuse potential is likely to be limited to a few instances at most.
For data with greater reuse potential, techniques adopted for some government
datasets may be appropriate. These include perturbing some of the values, or
suppressing values in cases that might lead to disclosure even in
aggregated data. Both approaches need expert statistical advice to ensure that
the resulting data can still be used to do something useful but isn't disclosive.
Examples of perturbation include varying a subject's age by a few years in
either direction. An example of suppression I am aware of comes from the Schools
Census: in any school where the number of pupils receiving free school meals
is below 5, the exact total is redacted from the published data.
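To make the two techniques concrete, here is a minimal sketch in Python. The field names, the jitter range, and the threshold of 5 are illustrative assumptions (the threshold echoes the Schools Census example above); a real release would need the expert statistical advice mentioned earlier to choose these parameters.

```python
import random

# Illustrative only - field names, jitter range, and threshold are
# assumptions for this sketch, not taken from any real dataset.

def perturb_age(age, max_shift=3, rng=random):
    """Perturbation: shift an age by up to max_shift years either way."""
    return max(0, age + rng.randint(-max_shift, max_shift))

def suppress_small_count(count, threshold=5):
    """Suppression: redact counts below the threshold, as in the
    Schools Census free-school-meals example."""
    return count if count >= threshold else "suppressed"

# A toy record set and the version that would be released.
records = [{"age": 42, "free_meals": 3}, {"age": 67, "free_meals": 12}]
released = [
    {"age": perturb_age(r["age"]),
     "free_meals": suppress_small_count(r["free_meals"])}
    for r in records
]
```

Note that perturbation keeps the data usable for aggregate analysis (mean ages barely move) while blurring any one individual, whereas suppression removes the disclosive cells entirely - which is why the two are often combined.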