> -----Original Message-----
> From: Word Grammar [mailto:[log in to unmask]]On Behalf Of And
> Rosta
> Sent: 09 June 2003 02:09
> To: [log in to unmask]
> Subject: Re: [WG] psychological reality of dependencies
>
>
> Jasp:
> > Dick,
> >
> > Some comments (mostly to And)
> >
> > I don't know if it is just my Word (God, how I hate Gates; I DO NOT WANT
> > an animated paperclip to underline half of what I've written in squiggly
> > lines OR start a paragraph in italics just because I started the previous
> > one that way), but wherever it should be ")." I have ")," (warning: some
> > of them *should* be "),")
>
> I have no idea what you're talking about, but seeing as I have
> just had the last 3 days ruined by malfunctioning MS software, I
> sympathize.
Sorry to hear it. It really is all rubbish.
I am talking about what looks like a typo (or a whole series of typos,
which is why I think it must be the product of some stupid automatic
process).
>
> > > >...
> > > >
> > > >My general response (as usual) is to wonder what properties PSG
> > > >has that make it not just a notational variant of DG. My usual
> > > >answer is certain sorts of nonterminal nodes, most notably
> > > >wholly or partially exocentric ones. Since not all varieties of
> > > >PSG allow exocentricity (and so forth), DG's argument is really
> > > >against exocentricity (etc.), & it is misleading to blunderbuss
> > > >the whole of PSG, when the attack properly applies only to
> > > >varieties of PSG that differ substantively from DG
> >
> > This objection only applies to some of Dick's arguments: the argument
> > from dependency distance is unaffected, the argument from dependency
> > direction probably is affected (not sure), the argument from
> > classification is unaffected, the argument from prototypes is
> > unaffected, the argument from parsing is unaffected (in fact, I think
> > this is a particular problem for purely binary-branching X' structures),
> > the argument from lexicalisation is unaffected and so is the one from
> > learning. (I've just reviewed that list, and I see that only one of the
> > arguments is affected.)
>
> I'm not sure exactly what "unaffected"/"affected" mean here. My contention
> is that most flavours of PSG are notational variants of DG & the
> arguments pro DG could be translated to apply to those flavours of PSG.
>
I mean (un)affected by your objection. What I'm saying is that the majority
of Dick's arguments are not con PSG but pro DG, i.e. they exploit particular
properties of dependency structure.
Incidentally, I don't see how it saves PSG if one version of it is
translationally equivalent to DG: that means either you have to arbitrarily
limit your PSG to a part of the possible range (and then you still only get
something as good as a DG) or you have to include the full range of features
of a PSG (in which case any arguments con PSG still apply).
Anyway, dealing with Dick's arguments individually (and without looking up
what he actually said!):
Dependency distance seems to be a good measure of complexity, and it can
only really be measured in a dependency structure (actually, I think this
may be one of yours: I think dependency distance can be modelled equally
well in a strict binary-branching structure).
Dependency direction is either all in the one direction (as predicted by
considerations of complexity involving e.g. dependency distance) or, if not,
then at least motivated by other considerations involving dependency
distance. This is actually an area where X' syntax looks particularly
embarrassed (to my eyes) - I mean, if you had strict binary branching,
you wouldn't end up with X, X' and X''; you'd just have X and X'. The extra
bar level seems to exist just to capture SVO patterns.
Classification of dependencies follows the same pattern (projects directly
out of) as classification of other (incl semantic) relationships. No one,
whatever flavour of PS they adopt, suggests that kinship relationships
conform to it too, do they??!!
The argument from prototypes would seem to favour D over even the most
ascetic version of PS (see your exchange with Nik).
Parsing is much easier in dependency structure. Even X' structures have to
backtrack every time a new higher category is discovered. In fact this gets
even worse if you rule out unary branching (if you permit unary branching
you can at least have these higher categories sitting about twiddling their
thumbs until needed).
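The no-backtracking point can be shown with a toy left-to-right linker (the
licensing table and everything else here is invented for the example, not a
real parser): each incoming word just adds one arc to the structure already
built, so nothing ever has to be torn down:

```python
# Toy left-to-right dependency linker, invented for illustration.
# LICENSES holds (dependent_tag, head_tag) pairs the toy grammar allows.
LICENSES = {("Det", "N"), ("N", "V")}

def link_incrementally(tagged):
    """Attach each word to the closest earlier word; arcs are (head, dep)."""
    arcs = []
    for i, (_, tag) in enumerate(tagged):
        for j in range(i - 1, -1, -1):        # scan leftward, nearest first
            left = tagged[j][1]
            if (tag, left) in LICENSES:       # earlier word is my head
                arcs.append((j, i))
                break
            if (left, tag) in LICENSES:       # earlier word is my dependent
                arcs.append((i, j))
                break
    return arcs

sent = [("the", "Det"), ("dog", "N"), ("barked", "V")]
print(link_incrementally(sent))  # [(1, 0), (2, 1)]
```

A PS parser at the same point would also have to decide, for each new word,
which fresh nonterminal nodes to posit above the material already parsed,
and revise them when it guesses wrong.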
Lexicalisation often isolates parent/dependent pairs (not PS nodes, of any
flavour).
Dependencies can be learnt by simply attending to adjacent pairs of words.
PSGs (of all flavours, I think) have categories you would have to learn that
extend far beyond adjacent pairs.
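The adjacency point amounts to a very cheap statistic. A sketch of the kind
of thing a learner could gather with no lookahead at all (the corpus and tag
names are made up for the example):

```python
# Hypothetical sketch of learning from adjacent pairs: count which word
# classes occur next to each other. Corpus and tag names are invented.
from collections import Counter

def adjacent_pair_counts(tagged_sentences):
    """Count (left_tag, right_tag) pairs over adjacent words."""
    counts = Counter()
    for sent in tagged_sentences:
        for (_, t1), (_, t2) in zip(sent, sent[1:]):
            counts[(t1, t2)] += 1
    return counts

corpus = [[("the", "Det"), ("dog", "N"), ("barked", "V")],
          [("a", "Det"), ("cat", "N"), ("slept", "V")]]
print(adjacent_pair_counts(corpus))
# Counter({('Det', 'N'): 2, ('N', 'V'): 2})
```

Learning a phrasal category, by contrast, would require tracking material
spanning arbitrarily many words at once.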
> > > >Furthermore,
> > > >even an argument pro DG structures is not in itself an argument
> > > >against exocentricity or whatever, because there might be
> > > >different levels of structure. E.g. the processor might operate
> > > >on a DG structure when parsing, and build other structure
> > > >derivatively. This actually seems very plausible to me -- that
> > > >the parser builds a surface dependency tree out of phonological
> > > >words, and then derives a much more elaborate syntacticosemantic
> > > >structure from it. (One of my syntactic arguments for that view
> > > >is that conjuncts in ordinary coordination don't match surface
> > > >dependency structure, but RNR -- which is barely syntactic --
> > > >does operate on surface dependency structure -- possibly because
> > > >RNR obeys prosodic constraints, and prosody reflects surface
> > > >dependency structure (because it is made out of phonological
> > > >words).)
> >
> > Would that mean generalisations over dependencies would only let you
> > learn facts about the phonological dependency structure?
>
> I wouldn't have thought so, since some generalizations would pertain to
> the semanticosyntactic structure that corresponds to the phonological
> dependency structure.
Yes, but they wouldn't be generalisations in dependency structure (you'd
have to learn phrasal categories).
> > > >Another point, is that I'm far from convinced that the processing
> > > >evidence is 100% good news for WG. Processing works only on
> > > >surface dependencies. WG distinguishes surface dependencies,
> > > >but not as naturally as a transformational model with traces
> > > >does
> >
> > In my experience traces work very badly in machine parsers.
>
> Without knowing anything at all about the subject, my hunch would be that
> this is a fault with the way the parsers work, rather than a problem with
> traces per se. After all, the differences between a WG with traces and a
> WG without are very slight.
>
> > They do use simulated annealing with some success in PSG parsers. This
> > is when the structure is not fixed until all the words have been
> > processed, and it does seem intuitively plausible as a model of how we
> > do it, con what Dick says. (In practice I think it is mostly achieved by
> > running all possible structures at the same time, in parallel!)
>
> But surely it can't be denied that we process left to right with very
> limited lookahead.
Yes. What is being argued is that if you do that then you are severely
handicapped if you have a PSG, as opposed to a DG, because you have to keep
restructuring your parse.
>
> > > >Regarding the subject prototype, you say "No doubt we could
> > > >demonstrate the same kind of patterning for all the other
> > > >dependency categories". I'm not convinced of that. To me the
> > > >evidence for the illusory nature of the subject prototype
> > > >is much stronger than any evidence for prototypicality in the
> > > >other categories
> >
> > Of course subject is 'wider' than say object: it is more general (more
> > kinds of words have one)
>
> But subject is also 'wider' than dependent, though less general.
> It's really
> only with Subject that you get this cluster of discrete and partially
> dissociable properties.
>
I need a bit more convincing of that.
> --And.
>