Thanks, Jasper and Mark, for this really useful exchange. I agree totally: no hacks, no 'clever' solutions. If it's psychologically plausible, we should allow it, but of course the machine has to work. That's why I think this discussion is so important. In working on my new book I've become clearer than before about the difference between psychology, on the one hand, and linguistics or AI on the other. Psychology uses experiments that can prove the existence of a connection (e.g. through priming), but can't tell us anything at all about the nature of that connection; and psychologists never talk about default inheritance. Linguistics/AI builds detailed models which can distinguish different kinds of connections, including those of default inheritance, but doesn't know for certain what connections there are. What we're trying to do is to produce a model which combines the two approaches, with the links that psychology proves and the detail that comes from linguistics/AI.

So that's agreed (I think). Next question: exactly how does the model work? More on this in the other messages.
    Dick

Mark P. Line wrote:
Yeah, I agree completely. There are plenty of hacks out there if all you
want is a black box (google for "statistical parsing"). What I want is a
glass box -- that's the whole point of computer simulation, especially
when the model is supposed to be informed by theory.

We've begun exploring some techniques for transforming an underlying
network data structure into something simpler to visualize in the user
interface of the simulator (e.g. each property arc uses a singleton node
to represent the relation, and these all have to be distinguished
internally with something like a serial number -- but you don't see those
numbers except in debugging mode).

Obviously, there's no limit to the complexity of the transformation
between the actual network being simulated and the view of it presented in
the user interface -- as long as we're always straight on what's really
there underneath. So my vote, as you predicted, would be to make the
linguistic representations as complex as they need to be in order to
capture the phenomena. The more mechanical bits (as in your hack-free
solution, which I like) can be abstracted out when we don't need to see
them.

-- Mark


jasper holmes wrote:
So the question is, if both of these solutions can be made to work,
which should we prefer? The answer to that is, of course, that it
depends on what you want it for. You might take the view (I do) that the
second is the more psychologically real. But then you might feel at
one and the same time that the first would be easier to implement in
an algorithm, especially if you were already using [x before y] and [x
after y] in your computer application to save hassle (and I know it
does save hassle: when you start counting every little arc they soon
add up). You may or may not (I do) want to bear in mind that this is
and remains a hack.

So what do you want? Psychological reality or a working computer
model? Mark, I guess, would argue that if your hacks don't correspond
to psychological reality then sooner or later you are going to come up
against something that you can't model any more without going back and
undoing the hacks. Dick might also say that the whole point of the
computer model is to represent what we really think grammar is like
(to test our theory of grammar as much as anything else). The
principles of WG plus the correct grammar structures should always
give the correct result; and if they don't then you need to revisit
the grammar (or the principles!).

Japs

On 9/4/08, jasper holmes <[log in to unmask]> wrote:
I don't mean by cutting in at the beginning like this to discount the
 very useful discussion that has already taken place. It's just that I
 couldn't find anywhere else to come in sensibly.

 This is a tricky one, isn't it? I remember we struggled with it
 when we were in the same situation before. I think, though, that we
 largely solved it, or caused it to disappear.

 I think we looked at possible ways of putting two relationships into
 conflict without one (transitively) isa-ing the other, rather as Lyne has
 been suggesting, so that 13 would block 14 because of 'after'
 overriding 'before'. We tried to state this in terms of some common
 ancestor, as Dick is doing below. I think my favourite solution along
 these lines was to say that there are some relationships that you can
 only have one of: you can have as many dependents as you like so [X
 extractee Y] doesn't override [X object Y], but you can only have one
 'ordering' (let's call it that for now, until the next paragraph
 anyway), so [X after Y] does override [X before Y].
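A minimal sketch of this "one-only relation" idea (the coding and all names are mine, not from the thread): conflict is declared between two distinct relations exactly when both belong to a group, such as 'ordering', of which a word can have only one.

```python
# Sketch of the "relationships you can only have one of" idea.
# Which relations count as one-only is an assumption for illustration.
UNIQUE_GROUPS = {
    "before": "ordering",
    "after": "ordering",
}

def blocks(op_rel, ip_rel):
    """Does a stored relation op_rel block inheriting ip_rel?

    Two distinct relations conflict only when they share a one-only group.
    """
    g1 = UNIQUE_GROUPS.get(op_rel)
    g2 = UNIQUE_GROUPS.get(ip_rel)
    return g1 is not None and g1 == g2 and op_rel != ip_rel

print(blocks("after", "before"))      # True: ordering is one-only
print(blocks("extractee", "object"))  # False: you can have many dependents
```

On this sketch [X after Y] overrides [X before Y], while [X extractee Y] leaves [X object Y] untouched, as the paragraph above requires.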

 Can't remember exactly why this didn't work out, but I don't think it
 did. Fortunately, however, there is another solution, and this follows
 Lyne's other suggestion: [X after Y] and [X before Y] are not the right
 way to represent the ordering relationships. I'd say something more
 like this:
 [91: word dependent X]
 [92: word time Pw]
 [93: X time PX]
 [94: Pw < PX] ('less than')
 [95: subject isa dependent]
 [96: word subject Y]
 [97: Y time PY]
 [98: Pw > PY] ('greater than')

 Then, for what it's worth, 97 overrides 93, since Y isa X.
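A sketch of how this landmark reformulation behaves (the coding is mine): ordering becomes a constraint on a word's numeric position, so the conflict between [94] and [98] reduces to an ordinary value override between facts stored at 'dependent' and at 'subject', and the most specific fact wins.

```python
# isa links among dependent types (assumed for illustration).
ISA = {"subject": "dependent", "object": "dependent"}

# Default side of the head for each dependent type: a plain dependent
# follows its head (+1, i.e. Pw < PX), but a subject precedes it
# (-1, i.e. Pw > PY).
DEFAULT_SIDE = {"dependent": +1, "subject": -1}

def side(dep_type):
    """Walk up the isa chain; the most specific stored fact wins."""
    t = dep_type
    while t is not None:
        if t in DEFAULT_SIDE:
            return DEFAULT_SIDE[t]
        t = ISA.get(t)
    raise KeyError(dep_type)

print(side("subject"))  # -1: the subject fact overrides, as 97 overrides 93
print(side("object"))   # +1: no object-specific rule, the default applies
```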

 I've been trying to think of other examples, in case they aren't
 amenable to this kind of solution, but I can't. Perhaps there are
 some; it's a bit of a hostage to fortune I guess to rely on none
 turning up.

 I think I've presented two near-solutions. I evaluate them (very
 briefly) in the next message. Then there follows a message with a red
 herring.


 Jasper



 On 8/28/08, Richard Hudson <[log in to unmask]> wrote:
 >
 >  Dear All,
 > After a long silence on this list, here's a question for you all. It's
 > about how to make default inheritance work properly. We (at least, Mark
 > Line and I) have an algorithm which promises to work reasonably smoothly
 > in a computer system that Mark is building (and that should work more
 > generally as well, of course), but only for one of the two kinds of
 > situation that default inheritance has to deal with. My question is
 > whether anyone has any bright ideas for handling the other kind. Here
 > goes with the problem.
 >
 > DI has to take as input a proposition [1: A R V], where A is the
 > argument, R is the relation and V is the value, and apply it to some
 > instance of A, called A'. "Applying it" means deciding whether or not
 > to inherit [2: A' R' V'], a copy of [1], in the light of the store of
 > propositions P already stored for A'; and the crucial question is
 > whether P contains a proposition which overrides [2].
 >
 >  For example, assume this database:
 >  [3: Bird locomotion flying]
 >  [4: Penguin locomotion swimming]
 >  [5: Penguin is-a bird]
 >
 >  Store of propositions about Penguin', some particular penguin:
 >  [6: Penguin' isa Penguin]
 >  [4': Penguin' locomotion' swimming']  {inherited from [4]}, where
 > [locomotion' is-a locomotion]
 >
 >  Question: can Penguin' also inherit [3']?
 >  [3': Penguin' locomotion' flying']
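A naive sketch of the intended answer (my own coding, not the WG algorithm itself): inheritance walks up the isa chain and the nearest stored fact for a relation simply wins, so [4] blocks [3] for any penguin.

```python
# Database from the example: [3], [4], [5]/[6] as isa links.
FACTS = [
    ("Bird", "locomotion", "flying"),       # [3]
    ("Penguin", "locomotion", "swimming"),  # [4]
]
ISA = {"Penguin'": "Penguin", "Penguin": "Bird"}

def ancestors(node):
    """isa ancestors of a node, nearest first."""
    chain = []
    while node in ISA:
        node = ISA[node]
        chain.append(node)
    return chain

def inherit(instance, relation):
    """Return the value from the nearest ancestor carrying the relation."""
    for anc in ancestors(instance):
        for a, r, v in FACTS:
            if a == anc and r == relation:
                return v  # nearer facts override farther ones
    return None

print(inherit("Penguin'", "locomotion"))  # 'swimming': [4'] blocks [3']
```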
 >
 > The question assumes a potentially inheritable stored proposition IP
 > and some potential overriding proposition OP - e.g. in the above,
 > IP = [3] and OP = [4'].
 >
 > Type 1 inheritance:
 > Where IP and OP have the same relation but different values. This is
 > easy, because we can define 'the same relation' as being where:
 >
 > IP = [A1 R1 V1]
 > OP = [A2 R2 V2]
 > and [R2 is-a R1].
 > There's not even any need to check the relation between V1 and V2,
 > because it doesn't matter whether or not they're related; either way,
 > the inheritance system ignores IP.
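The Type 1 test can be sketched directly (my own coding): OP = [A2 R2 V2] overrides IP = [A1 R1 V1] whenever R2 is-a R1, and the values are never compared.

```python
# is-a links among relations, as in the penguin example (assumed).
ISA_REL = {"locomotion'": "locomotion", "subject": "dependent"}

def rel_isa(r2, r1):
    """True if r2 is r1 or a (transitive) is-a descendant of it."""
    while r2 is not None:
        if r2 == r1:
            return True
        r2 = ISA_REL.get(r2)
    return False

def type1_overrides(op, ip):
    """OP = [A2 R2 V2] overrides IP = [A1 R1 V1] iff R2 is-a R1."""
    (_, r2, _), (_, r1, _) = op, ip
    return rel_isa(r2, r1)  # V1 vs V2 is deliberately ignored

ip = ("Bird", "locomotion", "flying")          # [3]
op = ("Penguin'", "locomotion'", "swimming'")  # [4']
print(type1_overrides(op, ip))  # True: [4'] blocks inheritance of [3]
```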
 >
 > Type 2 inheritance:
 > Where IP and OP have the same value but different relations. This is
 > the hard one, and I'm embarrassed to say that although I've been aware
 > of the problem for years, I've also managed to avoid thinking about
 > it. It's painfully easy to illustrate from word order rules:
 >  [7: word dependent X]
 >  [8: word before X]
 >  [9: word subject Y]
 >  [10: subject is-a dependent]
 >  [11: word after Y]
 >
 >  Precisely what is it that prevents some word W from inheriting the
 > following?
 >  [12: W subject Z]
 >  [13: W after Z]
 >  [14: W before Z]
 >  I've had various thoughts, but none that I really like, so I'd be
 > interested to hear other ideas.
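To make the difficulty concrete (my own coding): the same-relation (Type 1) test gives no grounds to block [14], because 'after' is not an is-a descendant of 'before', so some further mechanism is needed.

```python
# is-a links among relations; 'after' and 'before' are unrelated.
ISA_REL = {"subject": "dependent"}

def rel_isa(r2, r1):
    """True if r2 is r1 or a (transitive) is-a descendant of it."""
    while r2 is not None:
        if r2 == r1:
            return True
        r2 = ISA_REL.get(r2)
    return False

op = ("W", "after", "Z")   # [13], already inherited via [11]
ip = ("W", "before", "Z")  # [14], should be blocked but isn't
print(rel_isa(op[1], ip[1]))  # False: Type 1 does not stop [14]
```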
 >
 >  Best wishes, Dick
 >
 >
 >
 > --
 >
 >
 > Richard Hudson, FBA. Emeritus Professor, University College London
 >
 > My web page: www.phon.ucl.ac.uk/home/dick/home.htm
 > Why I support the academic boycott of Israel:
 > www.phon.ucl.ac.uk/home/dick/home.htm#boycott
 > My latest book: Language Networks. The New Word Grammar
 >
 >



-- Mark

Mark P. Line
Bartlesville, OK



--

Richard Hudson, FBA. Emeritus Professor, University College London