Dear Collective
For me, the problem is the accuracy of the data used to make the recommendations in the report.
The data collection template issued to trusts to assess activity has been open to misinterpretation.
A very simple quality-control check would be to take the tests declared (from the NHSIA documents) and divide by the population served, giving the number of tests done per head of population.
I have done this for our proposed network in the SW.
If you look at the populations covered by the individual trusts in our proposed network, from here
https://www.newdevonccg.nhs.uk/about-us-100131
and also here, you can work out the number of tests per head of population. This varies by 100% across our geographical region.
Trust          | Population | Tests declared | Tests per head
Plymouth (hub) |    370800  |       7449690  |  20.1
Exeter         |    378600  |       6403656  |  16.1
North Devon    |    157700  |       5353434  |  34.0
Torbay         |    278566  |       6403656  |  23.0
Cornwall       |    551000  |  not declared  |
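The check is trivially automated. Here is a minimal Python sketch using the figures from the table above (Cornwall omitted, since its activity was not declared); note that recomputing Exeter from these figures gives roughly 16.9 rather than 16.1, which rather underlines the point about data quality.

```python
# Sanity check on the NHSIA pathology activity returns: divide tests
# declared by the population served to get tests per head.
# Figures are those quoted in the table above.
populations = {
    "Plymouth (hub)": 370_800,
    "Exeter": 378_600,
    "North Devon": 157_700,
    "Torbay": 278_566,
}
tests_declared = {
    "Plymouth (hub)": 7_449_690,
    "Exeter": 6_403_656,  # recomputes to ~16.9 per head, not the 16.1 quoted
    "North Devon": 5_353_434,
    "Torbay": 6_403_656,
}

tests_per_head = {
    trust: tests_declared[trust] / populations[trust] for trust in populations
}
for trust, rate in sorted(tests_per_head.items(), key=lambda kv: kv[1]):
    print(f"{trust:15s} {rate:5.1f} tests per head")

# Ratio of the highest to the lowest rate: anything near 2.0 is the
# "varies by 100%" disparity described above.
spread = max(tests_per_head.values()) / min(tests_per_head.values())
print(f"highest/lowest ratio: {spread:.2f}")
```

Run against the declared figures, the ratio comes out at about 2.0, i.e. North Devon's declared activity per head is roughly double Exeter's.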
This is crucial because hubs were selected (more or less, though size mattered too) on being able to process a test at the lowest unit cost.
The whole point of the recent Carter report is to reduce variation, which I absolutely applaud, and just having hubs and spokes would not tackle this disparity. That, of course, is if the data returns are true and consistent. Alternatively, if the above is telling us that the bean-counting exercise has been subject to misinterpretation, then surely someone at the NHSIA needs to undertake a reality check on the data submitted, given the predicted savings that are going to fall out of the new network topologies.
It would be quite interesting for the same exercise to be applied to all the other 28 proposed networks.
On the other hand, I am one of those nearing the end of my NHS career and, as Jonathan has alluded to previously, plenty of consultancy work will fall out of this initiative, so perhaps I shouldn't rock the boat too much!!
BW John