Wouldn't it be more interesting to present some evidence on issues of
reliability with axial maps instead of debating it in the abstract? There
is obviously *some* error margin and a question about reproducibility when
using axial maps, so how significant are these issues? Sheep asserts that
hand-drawn axial maps are robust. Maybe they are, but without any evidence
it's just guesswork. What is the error margin involved in a sample of
'trained' people drawing the same axial map to any particular
specification? What is the control procedure for differences in drawing
style that makes maps comparable and reproducible? How much error can you
ignore and how much becomes a problem? Until someone publishes this kind of
study, Tom Dine's original question about arbitrariness will remain
unanswered.
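To make the kind of study I am asking for concrete, here is a minimal sketch of how one might quantify agreement between two hand-drawn axial maps. Each map is treated as a set of 2D line segments, and a segment counts as matched if the other map contains a segment whose endpoints lie within a tolerance. The representation, matching rule, and tolerance are all my own illustrative assumptions, not an established protocol:

```python
# Hypothetical sketch: error margin between two hand-drawn axial maps,
# each a list of segments ((x1, y1), (x2, y2)). A segment is "matched"
# if some segment in the other map has both endpoints within `tol`.
# The matching rule and tolerance are illustrative assumptions.
from math import hypot

def endpoint_distance(seg_a, seg_b):
    """Worst endpoint gap under the better of the two endpoint pairings."""
    (a1, a2), (b1, b2) = seg_a, seg_b
    d = lambda p, q: hypot(p[0] - q[0], p[1] - q[1])
    straight = max(d(a1, b1), d(a2, b2))
    flipped = max(d(a1, b2), d(a2, b1))
    return min(straight, flipped)

def agreement(map_a, map_b, tol=1.0):
    """Fraction of segments in map_a matched by some segment in map_b."""
    matched = sum(
        1 for seg in map_a
        if any(endpoint_distance(seg, other) <= tol for other in map_b)
    )
    return matched / len(map_a)

# Two drawings of the same layout, differing by small offsets; the third
# line of drawer_2 diverges badly, so agreement drops below 1.
drawer_1 = [((0, 0), (10, 0)), ((10, 0), (10, 8)), ((0, 0), (0, 8))]
drawer_2 = [((0.2, 0), (10, 0.3)), ((10, 0), (10.4, 8)), ((5, 4), (9, 4))]

print(agreement(drawer_1, drawer_2, tol=1.0))  # prints 0.6666666666666666
```

A real study would need many trained drawers, a shared specification, and a defensible tolerance, but even a toy score like this would move the debate from assertion to measurement.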
One of the reasons we chose to develop Visibility Graph Analysis software
for our work at Intelligent Space is the issue of reliability with
hand-drawn representations of spatial structure such as the axial map. The
automated spatial sampling technique provided by VGA resolves this issue
and makes the whole analysis methodology a lot more open to scrutiny.
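For readers unfamiliar with the idea, the automated sampling step can be sketched in a few lines: lay a regular grid over the open space, then connect every pair of open cells whose connecting line crosses no blocked cell. This is a toy reconstruction of the general VGA technique, not the Intelligent Space software, and the grid encoding and sampling method are my own simplifications:

```python
# Toy visibility graph on an occupancy grid: 0 = open cell, 1 = wall.
# Two open cells are connected if the straight line between their centres
# never passes through a wall cell (checked by dense point sampling).
from itertools import combinations

def visible(grid, a, b, samples=50):
    """True if the line between cell centres a and b stays in open cells."""
    (r0, c0), (r1, c1) = a, b
    for i in range(samples + 1):
        t = i / samples
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        if grid[r][c] == 1:
            return False
    return True

def visibility_graph(grid):
    """Adjacency sets over all open cells, one edge per mutually visible pair."""
    open_cells = [(r, c) for r, row in enumerate(grid)
                  for c, v in enumerate(row) if v == 0]
    edges = {cell: set() for cell in open_cells}
    for a, b in combinations(open_cells, 2):
        if visible(grid, a, b):
            edges[a].add(b)
            edges[b].add(a)
    return edges

# A 3x3 room with a wall cell in the centre: diagonally opposite corners
# cannot see each other, cells along the same open edge can.
grid = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
g = visibility_graph(grid)
```

Because every step is deterministic given the grid, two researchers running it on the same plan get the same graph, which is exactly the reproducibility property hand drawing cannot guarantee.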
Whichever techniques are used, I think the aim for all of us involved in
socio-spatial research should be to develop and use methodologies that are
non-controversial. Don't we want people to be able to see past the
methodology so we can get back to talking about what matters: how space may
or may not influence people's lives? The axial map has been a useful tool
for investigating spatial questions about society. Visibility Graph
Analysis offers a step further in the right direction for the development
of open methodology in this field.
Jake
_____________________________
Dr. Jake Desyllas
Partner
Intelligent Space
68 Great Eastern Street
London EC2A 3JT
t: +44 (0) 20 7739 9729
f: +44 (0) 20 7739 9547
e: [log in to unmask]
w: http://www.intelligentspace.com