Yet another missive from SHEEP sent via me. Sorry about this; he will
get it sorted out soon. Please direct any personal comments to
[log in to unmask].


------------------------------------------------

>  Always a danger of rubbish-in  = rubbish-out,

Not always so: what makes integration such a nice measure is that it
tends to hang in there. The worse the map approximates reality, the
worse the correlation gets; however, an imperfect map does not give a
completely erroneous result. So rubbish-in <> rubbish-out.

This is why integration can cope with the case where visibility is
not accessibility: the errors introduced do not throw the whole thing
off completely.
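
To make "hangs in there" concrete, here is a minimal sketch (Python,
and emphatically not Axman's code) of step-depth integration on a toy
axial map: mean depth by breadth-first search, normalised as relative
asymmetry RA = 2(MD - 1)/(k - 2). The maps and the 'digitising error'
are invented for illustration.

from collections import deque

def mean_depths(graph):
    """Mean shortest-path (step) depth from every line to every other."""
    md = {}
    for start in graph:
        depth = {start: 0}
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in graph[node]:
                if nbr not in depth:
                    depth[nbr] = depth[node] + 1
                    queue.append(nbr)
        md[start] = sum(depth.values()) / (len(graph) - 1)
    return md

def ra(md, k):
    """Relative asymmetry: 2(MD - 1)/(k - 2); lower = more integrated."""
    return 2 * (md - 1) / (k - 2)

# A 'correct' axial map: line A crosses B, C and D; D also crosses E.
good = {'A': {'B', 'C', 'D'}, 'B': {'A'}, 'C': {'A'},
        'D': {'A', 'E'}, 'E': {'D'}}

# The same place with a digitising error: A drawn as two lines A1, A2.
bad = {'A1': {'B', 'C', 'A2'}, 'A2': {'A1', 'D'}, 'B': {'A1'},
       'C': {'A1'}, 'D': {'A2', 'E'}, 'E': {'D'}}

for name, g in (('good', good), ('bad', bad)):
    md = mean_depths(g)
    print(name, {n: round(ra(d, len(g)), 2) for n, d in md.items()})

The RA values shift, but the rank order - the central lines stay most
integrated, E stays most segregated - survives the error, which is the
graceful degradation described above.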

>so I am puzzled that Sheep writes that "fractional analysis will
>make such map based observation questions irrelevant".

Meaning I agree the current 'fix it in the art of digitising lines'
mentality is wrong; we should be improving processing techniques.
However, most of the people doing syntax work can only use the
software they are given. As such, the only way of making an axial map
correlate is to modify the map rather than do fundamental research on
altering the processing mechanism.

I believe axial maps should reflect literal geometry. That said,
axial maps do have a number of advantages at the urban level, and I'm
personally not in favour of dropping the axial representation for the
sake of it. (Axman and Meanda have been able to process places like
Tokyo with over a million axial lines - no one is proposing doing this
with VGA style analysis.)

>Especially
>as he follows this with "One thing we do not understand clearly from
>traditional space syntax is when the visibility matrix (where I can
>see) and the permeability matrix ( where I can go ) differ. For
>example an office with half-height partitions, or an office with
>glass walls."       This is surely a critical point,  in particular
>because on first impressions it would seem that visibility maps
>reflect the experience of strangers, and permeability maps reflect
>the experience of ‘familiars’ (residents?).

Who can say? Without some understanding of either mechanism, the
whole visibility/accessibility question keeps hanging around.
Currently I feel syntax works best where movement and accessibility
are the same, which is most of the city most of the time. There are no
available observation counts for large spaces where the two do not
match up. If someone has some data I would like to see it; I have some
theories which might be able to make some headway in this area.
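
Since the distinction keeps coming up, here is a minimal sketch of
what the two matrices in the quoted passage look like side by side.
The three-room office and the 'half_height' link type are invented for
illustration: a half-height partition lets you see across but not walk
across, a doorway lets you do both.

rooms = ['desk area', 'open plan', 'meeting room']

# (room a, room b, partition type) between adjacent rooms.
links = [(0, 1, 'half_height'), (1, 2, 'doorway')]

def matrix(passes):
    """Symmetric 0/1 adjacency matrix for one relation."""
    m = [[0] * len(rooms) for _ in rooms]
    for a, b, kind in links:
        if passes(kind):
            m[a][b] = m[b][a] = 1
    return m

visibility   = matrix(lambda kind: kind in ('half_height', 'doorway'))
permeability = matrix(lambda kind: kind == 'doorway')

print('visibility:  ', visibility)
print('permeability:', permeability)

Any integration measure run on these two graphs will disagree exactly
where the matrices differ, which is the unresolved case above.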

>The ‘fractional’ analysis of the axial lines that make up a curve is
>most interesting – but why draw short, straight lines around a curve
>in the first place?

Think about it before dismissing it out of hand. If you
knew/remembered differential calculus you would realise that a
mechanism where the lines get shorter but more numerous makes it
possible to apply differential calculus. With fractional integration,
but NOT traditional integration, you can make an infinite number of
lines go to zero length yet still end up with a depth which has some
value; adding more and more shorter lines is a rough approximation to
this. Calculus could represent movement over an arbitrary curve with
some representative number for the depth of the curve. Since I don't
have any observation data for an area including many curved walkways,
I can't yet test this theory. I can at the moment only point out that
fractional analysis is capable of doing this whereas the traditional
syntax included in Axman is not. Without doing the differential
calculus, users of Meanda can experiment by drawing successively
smaller and smaller axial lines filling a curve: the value for depth
over the curve stays roughly constant.
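
Here is a minimal numerical sketch of that experiment, assuming - as
an illustration only, not Meanda's actual weighting - that fractional
analysis charges each change of line in proportion to the angle
turned:

def depths_across_quarter_circle(n):
    """Depth accumulated travelling along n chords of a quarter circle."""
    turn = 90.0 / n                       # degrees turned at each joint
    traditional = n - 1                   # one full step per change of line
    fractional = (n - 1) * (turn / 90.0)  # steps weighted by angle turned
    return traditional, fractional

for n in (2, 4, 16, 64, 256):
    t, f = depths_across_quarter_circle(n)
    print(f'n={n:4d}  traditional depth={t:4d}  fractional depth={f:.3f}')

Traditional depth diverges as the chords multiply; the angle-weighted
depth tends to a constant (the total angle turned), so ever finer
subdivisions approximate a well-defined depth for the smooth curve -
exactly the limit the calculus would give.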

A curved space is also an example of the visibility/accessibility
problem which fractional analysis does not solve - for example,
walking between the library/theatre and the Town Hall in Manchester.
This depends upon how wide the street is, how tall the buildings are
and how curved the space is - perhaps you could be more specific about
what you mean.

I also agree that there should be a more accessible 'how to space
syntax' guide. I keep suggesting a book. In "Axman, The User Guide" I
tried mentioning some of the main pitfalls in digitising, but it would
be useful to have a reference for things like doing observations and,
most importantly, interpreting the results. If well written it could
handle the difficult questions architects and students ask: 'what is
good/what is bad?'.


>See my note above about 'fewest', for those outside the field the problem is
>that jargon is also historical - the rules changed but the names used for
>the maps did not. The original rule used to generate axial maps was something
>like 'draw the fewest and longest lines that cross all convex spaces and
>make all rings of circulation' - the process of automating this rule by
>Stefan Czapski in the mid 80's led us to invent the all line map, the
>overlapping convex space map and to alter the rule used by human researchers
>to 'draw the set of longest lines that cross all convex spaces and make all
>rings of circulation whilst minimising the depth between any pair of
>lines' - Stefan's software automated the production of these maps, and
>although there is no mathematical proof that the software works consistently
>for any 'arbitrary' input map, it was good at producing maps that a trained
>human would agree with.


Well, if anyone wants to play with all line axial map production
there is a program called "Infinity Within" which can take an outline
of a number of buildings and generate the all line axial map. Most
people who try this find, after a lot of processing, that the all line
axial map does not give significantly different results. "Infinity
Within" is available from the Space Syntax Lab, or from me if I can
find the disk I wrote it on.

For all line axial map production you can also use SpaceBox, which
has a nifty automatic convex space production algorithm. This takes a
model of the solid stuff (walls) and can produce all the convex spaces
automatically.
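
For anyone curious what such generation involves, here is a minimal
sketch - emphatically not the Infinity Within or SpaceBox algorithms -
of the first step: keep every line between a pair of building corners
that no wall segment obstructs. The two square buildings are invented
test data; extending the surviving lines to the boundary, and testing
against building interiors, which the real programs would have to do,
is omitted to keep the sketch short.

def ccw(a, b, c):
    """Twice the signed area of triangle a, b, c."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """True if open segments p1-p2 and q1-q2 properly intersect."""
    d1, d2 = ccw(q1, q2, p1), ccw(q1, q2, p2)
    d3, d4 = ccw(p1, p2, q1), ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def all_line_candidates(vertices, walls):
    """Every unobstructed line between a pair of building corners."""
    lines = []
    for i, a in enumerate(vertices):
        for b in vertices[i + 1:]:
            if not any(segments_cross(a, b, w1, w2) for w1, w2 in walls):
                lines.append((a, b))
    return lines

# Two square buildings standing apart; walls given as vertex pairs.
b1 = [(0, 0), (2, 0), (2, 2), (0, 2)]
b2 = [(5, 0), (7, 0), (7, 2), (5, 2)]
walls = [(b1[i], b1[(i + 1) % 4]) for i in range(4)] + \
        [(b2[i], b2[(i + 1) % 4]) for i in range(4)]

print(len(all_line_candidates(b1 + b2, walls)), 'unobstructed lines')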

If I can stand on my soap box for a moment, what I want to understand
is the social process behind the adoption of software. For example, no
one doing Space Syntax has ever bothered to use automatic convex space
analysis - i.e. thoroughly tested it against real world observations.

You might have thought that this whole 'what is an axial line? why do
we use them?' debate could have been neatly answered more than 13
years ago when I wrote the SpaceBox software. As it was, I noticed
that no one was in the least bit interested in using automatic convex
space production, so I removed it from the bits that people did use to
produce Axman. Over the last 10 years no Masters/PhD student has ever
been desperate enough to bother asking 'does automatic generation of
axial lines or convex spaces correlate with real world movement?'. So
that is one answer to your original question.


>1) I know lines are not drawn at will, but do they always represent
>exactly the same thing in the real world? (Is there a rigid protocol
>for extracting information from the environment?)

>The first concern relates to the fact that the impressive computing
>only applies after axial maps have been drawn. Many architects
>assume that the computing is a way of extracting space syntax
>information out of raw environmental data, much the same way that
>daylight programs take data about solid walls and windows and tell
>you about ‘brightness’.

The reality is that after 13 or so years no one can be bothered to do
the hand work - after getting the software to automatically generate
the result - of correlating it with real movement. Most researchers in
the field are satisfied with the general visual similarity between
hand-drawn axial lines, automatically generated convex spaces and
generated all line axial maps. I believe most researchers are happier
dropping into complex arguments about Popper and fallibility than
investigating why certain lines of research are followed and others
ignored. I think Mike Batty too has for some time had problems with
the human intervention in the axial line choice process. Yet has this
been enough to produce a paper finally using SpaceBox to answer the
questions it was originally designed to answer? No. For me this is the
key question: do we actually have any reason for using axial lines
over other forms of representation? Originally I was told that the
reason students were using axial lines and not testing convex spaces
was that axial lines were well tested and the new techniques were not.
I think a social research inertia took hold: we use axial lines
because it's the software we have been taught; we use the software we
have been taught on the projects we do; we teach axial line software
because it's the software we use.

Everything else comes down to

a)  Effort
----------

It's quicker to digitise a few axial lines, do some observations on
the streets and come up with a correlation than it is to accurately
digitise every building ground plan, make sure this matches reality,
process the whole thing, and then go through some complex process of
extracting all the convex space integration data and correlating it
with real world data.

b)  Software Inertia
--------------------

As described above, how many people on a UCL short course or masters
get to hear about:

"James Choice" - Axman documents processed for the Choice measure.

"Infinity Within" - axial all line map production from boundary descriptions.

"Orange Box" - super fast processing of axial maps.

"Hard Wave" - integration processed from street center lines where a
space is defined any street centerline with a name. This works in
conjunction with a GIS system (Arc Info) to process integration
analysis for streets by name.

"New Wave" - integration analysis from a text file of numbers.

"NetBox" - a program which lets you process buildings by making up a
map of nodes and links.

"Pesh" - a simple drawing tool do which does by hand convex space
analysis (used by Bill Hillier in Space is the Machine) a.k.a. as
"Hyper Hyper Pesh and "aaaapesh".  I know this software is widely
used but has anyone published a paper giving a correlation between
convex space integration and observed movement?

"NextPesh" - super advanced version of Pesh which could handle curved
shapes + layers + one way streets and no right turns at junctions.

"The Urban Machine" - a program which could permit the interactive
analysis/characterization of hundreds of different cities
simultaneously.

"Loglady" - Unix based super processor of axial maps.

"FarmerBrown" - Transputer based processing of integration via a
vastly parallel supercomputer.

"Omnivista" - a VGA style analysis with new measures including
"drift" and "restricted field of view" paths.

"Spacebox" - automatic generation of convex spaces and all line axial
maps from a boundary/wall description.
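
As promised under the "Hard Wave" entry, here is a minimal sketch of
the street-name idea as described there (invented data, not Hard
Wave's actual code): collapse center line segments sharing a name into
one node, connect two names wherever their segments meet, and the
resulting graph can be fed to any integration processor, such as the
mean depth sketch earlier in this message.

from collections import defaultdict

# (street name, endpoint, endpoint) - as exported from a GIS layer.
segments = [
    ('High St',   (0, 0), (4, 0)),
    ('High St',   (4, 0), (8, 0)),
    ('Church Rd', (4, 0), (4, 5)),
    ('Mill Lane', (8, 0), (8, 3)),
    ('Mill Lane', (8, 3), (4, 5)),
]

# Which named streets touch each junction point.
at_point = defaultdict(set)
for name, a, b in segments:
    at_point[a].add(name)
    at_point[b].add(name)

# One node per street name; an edge wherever two names share a point.
graph = defaultdict(set)
for names in at_point.values():
    for a in names:
        for b in names:
            if a != b:
                graph[a].add(b)

print(dict(graph))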

Some of the programs mentioned approach your questions and others
(does syntax work for cars with restrictions on turning/one way
streets?). I won't bother to cover the software written by others in
the field (such as "Spatialist" by John Peponis, "Axwoman" by CASA,
software from Cardiff, Ruth's isovist generators written in Pangea in
1996 (IsoCam and AxialCam), A. Turner's VGA stuff, and the VGA
software commissioned by Jake De Syllas - sorry, I don't know what it
is called). Why does one program get chosen and others ignored?

c) Our amazing ability to post-rationalize.
------------------------------------------

All in all, I think your questions actually give rise to a more
detailed question: how does software get adopted in the space syntax
community? What makes students go for one program rather than another?
What gets software tested, and why? Why are there no common data sets
of observations/spatial descriptions with which to test new
software/theories and compare them across programs and against
reality? Why are there no lists of difficult situations? Ultimately
this suggests to me that space syntax is still in its infancy.

hope this helps
Sheep