jasp:
> And:
> >
> > > Incidentally, I don't see how it saves PSG if one version of it is
> > > translationally equivalent to DG:
> >
> > It doesn't save PSG. It means the attack is not on PSG per se but on
> > those aspects of PSG that make it nonequivalent to DG
>
> Well, no. Many of Dick's arguments focus on the benefits of dependencies,
> rather than the drawbacks associated with empty categories etc
>
> If a PSG is DG-equivalent, then how do you choose between it and a
> (PSG-equivalent) DG? When it comes down to it you choose (or you can
> choose) on the basis of the properties that make both non-equivalent
> (both kinds of formalism seem to need some extra machinery)
I essentially agree. For something like HPSG, it would be pretty
straightforward (I don't say "trivial") to turn it into a DG. So it
seems reasonable to claim "HPSG would be better if it took the small
step of becoming a DG". And I've been trying to make the point you make
here: the best way to approach the DG v PSG question is to ask how a
theory should choose between DGhood and PSGhood.
> I've always thought that these are more embarrassing for PSGs (empty
> categories, traces, features, the LAD, ...) than they are for DGs
> (non-surface dependencies, semantic phrasing, err..)
I, on the other hand, think empty categories are more explanatory than
nonsurface dependencies, and constituency is more explanatory than
precedence concord ('No Tangling'). Plus a whole other list of reasons
I've mentioned in the past, though most of them don't apply to actual
PSG models.
> > > Dependency direction is either all in the one direction (as
> > > predicted by considerations of complexity involving e.g. dependency
> > > distance) or, if not, then at least motivated by other
> > > considerations involving dependency distance
> >
> > Surely PSG handles this equally well?
>
> Not sure how they'd do it, to be honest
Notions of head-first/head-last are common in PSG too. It used to be an
early P&P parameter, indeed.
> > > The argument from prototypes would seem to favour D over even the
> > > most ascetic version of PS (see your exchange with Nik)
> >
> > In a pure PSG, the equivalent of dependency types is types of
> > nonterminal node (e.g. XP can be defined as the node that has the
> > specifier, X' as the node that has the complement). And there is room
> > for prototypes if you want them, maybe. For example, anything that
> > can be said about the Dependent prototype can be said about the
> > (Nonhead) Daughter prototype; anything that can be said about the
> > Subject prototype can be said about the mechanism by which the PSG
> > represents subjects
>
> No (twice). The whole point about PS that they keep braying on about
> is that the structural position actually *explains* the properties of
> the argument that occupies it
That is not a point about PS. That is a point about certain high profile
models that use PS. If the only contenders in the competition are a DG
that represents GRs and a PSG that truly doesn't, then the model with the
GRs wins hands down.
However, GB did have GRs, in the form of theta roles. Also, the use of
functional projections is a way of representing GRs by means of turning
them into nodes rather than relations. And IMO there is a lot to be said
for this.
> > [I favour a variety of PSG that replaces DG's "X is object of Y" by
> > an 'object regent' node whose head daughter is Y and whose nonhead
> > daughter is X. So WG's dependency types get turned into types of
> > nonterminal regent node. These nodes can be classified taxonomically
> > just as dependency types can.]
>
> ? Then they aren't really phrase structure configurations??
>
> > > Parsing is much easier in dependency structure. Even X' structures
> > > have to backtrack every time a new higher category is discovered.
> > > In fact this gets even worse if you rule out unary branching (if
> > > you permit unary branching you can at least have these higher
> > > categories sitting about twiddling their thumbs until needed)
> >
> > My main answer to this is that in building semantic structure we *do*
> > backtrack (that is, we do revise structure that has already been built)
>
> Do you have any evidence for this? I'm sure that some backtracking must go
> on, but it has got to be rather costly
I was thinking that the syntactic process of going back and adding an
extra nonterminal node to accommodate an incoming adjunct is paralleled
by what happens in semantics. E.g. in "drive slowly carefully", when you
hit "carefully" you have to redo the sense of "drive".
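For what it's worth, the contrast between the two kinds of revision can
be sketched in a few lines of toy code (my own construal, not any
particular parser; the representations are deliberately simplistic):

```python
# Toy illustration of incremental adjunct attachment.

# Dependency style: "carefully" just gets a new arc to its head;
# nothing already built is revised.
dep_arcs = {("slowly", "drive")}          # state after "drive slowly"
dep_arcs.add(("carefully", "drive"))      # after "... carefully": one new arc

# Phrase-structure style (binary branching, no unary nodes): the
# adjunct forces a new nonterminal *above* the existing VP, so the old
# root has to be re-parented -- the "backtracking" at issue.
ps_tree = ("VP", "drive", "slowly")       # state after "drive slowly"
ps_tree = ("VP", ps_tree, "carefully")    # rebuild: old VP demoted to daughter

print(sorted(dep_arcs))
print(ps_tree)
```

The parallel I have in mind is that the semantic representation behaves
like the second case: the sense of "drive" built for "drive slowly" gets
demoted and rebuilt when "carefully" arrives.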
> > So backtracking is plausible for meaning-building syntax, even if not
> > for the bit of the processor that reads in phonological words & assigns
> > a rudimentary surface tree structure to them
> >
> > > Lexicalisation often isolates parent/dependent pairs (not PS
> > > nodes, of any flavour)
> >
> > The converse of this argument is that the norm is that the parent is
> > impervious to the lexical identity of the dependent, especially when
> > the dependent is from an open word class. So that is actually an
> > argument for saying that the parent can see nothing but (say) a VP
> > node, and cannot see which particular V it contains
>
> It is common, though, to have selection by semantic properties
That does not require the parent to see the particular V, though. I wouldn't
see semantic selection as selection, since conflict between selectionally
imposed semantic properties and lexically encoded semantic properties is
addressed only at the level of pragmatics.
> > The main cases where there is a lexeme-specific association between
> > parent and dependent are either (a) idiomatic
>
> all of language is idiomatic; the question is, how do we learn the
> idiomatic relationships between categories
OK -- if all of language is idiomatic, then we can distinguish between
those idioms that collectively compose the elaborate mechanism that is
the grammar, and those that don't (the latter being what would normally
be called 'idiomatic').
> > > Dependencies can be learnt by simply attending to adjacent pairs
> > > of words. PSGs (of all flavours, I think) have categories you would
> > > have to learn that extend far beyond adjacent pairs.
> >
> > I don't know much about this, but why is it easier to assume that the
> > child learns that the members of the adjacent pair are related by a
> > dependency rather than being grouped together into a whole (phrase)?
>
> It is not easier to assume. It is easier to do
I don't see why.
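To make the "easier to do" claim concrete (this is my reconstruction of
the argument, not an actual acquisition model; the corpus and function
name are invented for illustration): a learner that induces candidate
head-dependent links from adjacent words needs nothing beyond a bigram
window, whereas learning a phrasal category requires evidence about
whole spans.

```python
from collections import Counter

# Toy sketch: count adjacent word pairs as candidate dependencies.
# No structure larger than two words ever has to be held in memory.
def adjacent_pair_counts(sentences):
    counts = Counter()
    for words in sentences:
        for left, right in zip(words, words[1:]):
            counts[(left, right)] += 1   # candidate link, direction undecided
    return counts

corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
pairs = adjacent_pair_counts(corpus)
# A phrasal category like NP, by contrast, would have to be induced
# from evidence about whole spans ("the dog", "the cat", ...).
print(pairs[("the", "dog")])
```

Whether such pair statistics suffice to fix the *direction* of each
dependency is of course a further question.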
> > > > But subject is also 'wider' than dependent, though less general.
> > > > It's really only with Subject that you get this cluster of
> > > > discrete and partially dissociable properties.
> > > >
> > > I need a bit more convincing of that
> >
> > See if you can come up with examples at all similar to Subject: that
> > is, categories that involve a list of discrete default features that
> > need to be overridden for nontypical instances
>
> Resultative adverbials: are predicated of the object (1,3), are selected
> semantically (1,2)
>
> 1. He put foam on my cappuccino
> 2. It went cold
> 3. I sneezed it off
>
> Indirect objects do the same sort of thing, err..
Maybe the task I set you wasn't fair, because their acceptability as
examples will depend so much on one's analysis.
Regarding the resultatives, I don't think that (as was the case with the
WG subject) you need to describe 1-3 in terms of overridable default
properties. PUT always takes a result complement; other verbs do so only
sometimes. According to my analysis of resultatives, 1-2 involve more
specific semantic properties than 3, so there's no overriding.
I accept that prototype effects are partly in the eye of the beholder.
But I do see prototype effects in language; just not in the definition
of dependencies.
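For readers less familiar with the mechanism under discussion, the
"overridable default" idea behind the WG Subject can be sketched as
default inheritance (feature names here are invented placeholders, not
an actual WG feature inventory):

```python
# Toy default-inheritance sketch: a category supplies default
# properties; a nontypical instance overrides some of them, as with
# the Subject prototype discussed above.
SUBJECT_DEFAULTS = {
    "case": "nominative",
    "position": "preverbal",
    "agrees_with_verb": True,
}

def instantiate(defaults, **overrides):
    props = dict(defaults)
    props.update(overrides)   # nontypical instances override defaults
    return props

typical = instantiate(SUBJECT_DEFAULTS)
# e.g. a postverbal subject in an inversion construction keeps the
# other defaults but overrides position:
inverted = instantiate(SUBJECT_DEFAULTS, position="postverbal")
```

My point above is that, on my analysis, the resultatives in 1-3 differ
by added specificity rather than by overriding defaults in this sense.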
--And.