WORDGRAMMAR Archives

WORDGRAMMAR@JISCMAIL.AC.UK



Subject: Re: psychological reality of dependencies
From: And Rosta <[log in to unmask]>
Reply-To: Word Grammar <[log in to unmask]>
Date: Wed, 11 Jun 2003 13:13:02 +0100
Content-Type: text/plain
Parts/Attachments: text/plain (201 lines)

jasp:
> And:
> >
> > > Incidentally, I don't see how it saves PSG if one version of it is
> > > translationally equivalent to DG:
> >
> > It doesn't save PSG. It means the attack is not on PSG per se but on
> > those aspects of PSG that make it nonequivalent to DG
>
> Well, no. Many of Dick's arguments focus on the benefits of dependencies,
> rather than the drawbacks associated with empty categories etc
>
> If a PSG is DG-equivalent, then how do you choose between it and a
> (PSG-equivalent) DG? When it comes down to it you choose (or you can
> choose) on the basis of the properties that make both non-equivalent
> (both kinds of formalism seem to need some extra machinery)

I essentially agree. For something like HPSG, it would be pretty
straightforward (I don't say "trivial") to turn it into a DG. So it seems
reasonable to claim "HPSG would be better if it took the small step of
becoming a DG".

And I've been trying to make the point you make here: the best way to
approach the DG v PSG question is to ask how a theory should choose between
DGhood and PSGhood.
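To make the translational-equivalence point concrete, here is a toy sketch (purely illustrative; the function and its encoding are invented for this example, not HPSG's or WG's actual machinery) of recoding a dependency analysis as a headed phrase-structure bracketing, with one nonterminal node per head-dependent pair:

```python
# Toy illustration (not any published algorithm): a dependency
# analysis recoded as a headed phrase-structure bracketing. Each
# head is wrapped in one extra nonterminal per dependent, so
# dependency types map onto types of nonterminal node.

def dg_to_ps(head, deps):
    """deps maps each word to its list of (relation, dependent) pairs."""
    node = head  # start from the bare head (the X0 level)
    for rel, dep in deps.get(head, []):
        # wrap the current projection together with the dependent's phrase
        node = (rel, node, dg_to_ps(dep, deps))
    return node

# "Cats chase mice": "chase" is the root, with an object and a subject
deps = {"chase": [("object", "mice"), ("subject", "cats")]}
print(dg_to_ps("chase", deps))
# -> ('subject', ('object', 'chase', 'mice'), 'cats')
```

Since each dependency type becomes a type of nonterminal node here, anything stated over dependencies can be restated over nodes, which is why the choice between the formalisms has to be made on other grounds.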

> I've always thought that these are more embarrassing for PSGs (empty
> categories, traces, features, the LAD, ...) than they are for DGs
> (non-surface dependencies, semantic phrasing, err..)

I, on the other hand, think empty categories are more explanatory than
nonsurface dependencies, and constituency is more explanatory than
precedence concord ('No Tangling'). Plus a whole other list of reasons
I've mentioned in the past, though most of them don't apply to actual
PSG models.

> > > Dependency direction is either all in the one direction (as predicted
> > > by considerations of complexity involving eg dependency distance) or,
> > > if not, then at least motivated by other considerations involving
> > > dependency distance
> >
> > Surely PSG handles this equally well?
>
> Not sure how they'd do it, to be honest

Notions of head-first/head-last are common in PSG too. It used to be an
early P&P parameter, indeed.

> > > The argument from prototypes would seem to favour D over even the
> > > most ascetic version of PS (see your exchange with Nik)
> >
> > In a pure PSG, the equivalent of dependency types is types of
> > nonterminal node (e.g. XP can be defined as the node that has the
> > specifier, X' as the node that has the complement). And there is room
> > for prototypes if you want them, maybe. For example, anything that can
> > be said about the Dependent prototype can be said about the (Nonhead)
> > Daughter prototype; anything that can be said about the Subject
> > prototype can be said about the mechanism by which the PSG represents
> > subjects
>
> No (twice). The whole point about PS that they keep braying on about is
> that the structural position actually *explains* the properties of the
> argument that occupies it

That is not a point about PS. That is a point about certain high profile
models that use PS. If the only contenders in the competition are a DG
that represents GRs and a PSG that truly doesn't, then the model with the
GRs wins hands down.

However, GB did have GRs, in the form of theta roles. Also, the use of
functional projections is a way of representing GRs by means of turning
them into nodes rather than relations. And IMO there is a lot to be said
for this.

> > [I favour a variety of PSG that replaces DG's "X is object of Y" by
> > an 'object regent' node whose head daughter is Y and whose nonhead
> > daughter is X. So WG's dependency types get turned into types of
> > nonterminal regent node. These nodes can be classified taxonomically
> > just as dependency types can.]
>
> ? Then they aren't really phrase structure configurations??
>
> > > Parsing is much easier in dependency structure. Even X' structures
> > > have to backtrack every time a new higher category is discovered. In
> > > fact this gets even worse if you rule out unary branching (if you
> > > permit unary branching you can at least have these higher categories
> > > sitting about twiddling their thumbs until needed)
> >
> > My main answer to this is that in building semantic structure we *do*
> > backtrack (that is, we do revise structure that has already been built)
>
> Do you have any evidence for this? I'm sure that some backtracking must
> go on, but it has got to be rather costly

I was thinking that the syntactic process of going back to add an extra
nonterminal node to accommodate an incoming adjunct is paralleled by
what happens in semantics. E.g. in "drive slowly carefully", when you
hit "carefully" you have to redo the sense of "drive".
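The parallel can be put in toy form (a hypothetical illustration invented for this example, not a processing model anyone has proposed): each incoming adjunct forces the analysis built so far to be re-wrapped, i.e. revised:

```python
# Toy sketch of the backtracking point (hypothetical illustration
# only): each incoming adjunct wraps the structure built so far in
# a new node, revising work already done -- the syntactic analogue
# of redoing the sense of "drive" when "carefully" arrives after
# "drive slowly".

def parse_incrementally(words):
    tree = words[0]   # the head, e.g. "drive"
    revisions = 0
    for adjunct in words[1:]:
        tree = ("mod", tree, adjunct)  # go back and rebuild over the old tree
        revisions += 1
    return tree, revisions

tree, revisions = parse_incrementally(["drive", "slowly", "carefully"])
print(tree)       # ('mod', ('mod', 'drive', 'slowly'), 'carefully')
print(revisions)  # 2: each adjunct triggered one revision
```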

> > So backtracking is plausible for meaning-building syntax, even if not
> > for the bit of the processor that reads in phonological words & assigns
> > a rudimentary surface tree structure to them
> >
> > > Lexicalisation often isolates parent/dependent pairs (not PS
> > > nodes, of any flavour)
> >
> > The converse of this argument is that the norm is that the parent is
> > impervious to the lexical identity of the dependent, especially when
> > the dependent is from an open word class. So that is actually an
> > argument for saying that the parent can see nothing but (say) a VP
> > node, and cannot see which particular V it contains
>
> It is common, though, to have selection by semantic properties

That does not require the parent to see the particular V, though. I wouldn't
see semantic selection as selection, since conflict between selectionally
imposed semantic properties and lexically encoded semantic properties is
addressed only at the level of pragmatics.

> > The main cases where there is a lexeme-specific association between
> > parent and dependent are either (a) idiomatic
>
> all of language is idiomatic; the question is, how do we learn the
> idiomatic relationships between categories

OK -- if all of language is idiomatic, then we can distinguish between
those idioms that collectively compose the elaborate mechanism that is
the grammar, and those that don't (the latter being what would normally
be called 'idiomatic').

> > > Dependencies can be learnt by simply attending to adjacent pairs of
> > > words. PSGs (of all flavours, I think) have categories you would
> > > have to learn that extend far beyond adjacent pairs
> >
> > I don't know much about this, but why is it easier to assume that the
> > child learns that the members of the adjacent pair are related by a
> > dependency rather than being grouped together into a whole (phrase)?
>
> It is not easier to assume. It is easier to do

I don't see why.
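For what the adjacent-pairs claim amounts to, here is a toy sketch (hypothetical; `adjacent_pair_counts` is a helper invented for this example, not a published acquisition model): candidate head-dependent links can be read off by simply counting adjacent word pairs, whereas phrasal categories would need evidence spanning more than two words:

```python
# Toy sketch of the adjacent-pairs claim (hypothetical illustration,
# not a published acquisition model): candidate dependencies fall out
# of simple counting over adjacent word pairs, with no structure
# wider than two words ever inspected.
from collections import Counter

def adjacent_pair_counts(sentences):
    counts = Counter()
    for sent in sentences:
        words = sent.split()
        counts.update(zip(words, words[1:]))  # only adjacent pairs
    return counts

corpus = ["the cat sleeps", "the dog sleeps", "the cat eats"]
counts = adjacent_pair_counts(corpus)
print(counts[("the", "cat")])   # 2: a strong candidate dependency
```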

> > > > But subject is also 'wider' than dependent, though less general
> > > > It's really only with Subject that you get this cluster of
> > > > discrete and partially dissociable properties
> > > >
> > > I need a bit more convincing of that
> >
> > See if you can come up with examples at all similar to Subject: that
> > is, categories that involve a list of discrete default features that
> > need to be overridden for nontypical instances
>
> Resultative adverbials: are predicated of the object (1,3), are selected
> semantically (1,2)
>
> 1. He put foam on my cappuccino
> 2. It went cold
> 3. I sneezed it off
>
> Indirect objects do the same sort of thing, err..

Maybe the task I set you wasn't fair, because their acceptability as
examples will depend so much on one's analysis.

Regarding the resultatives, I don't think that (as was the case with
WG subject) you need to describe 1-3 in terms of overridable default
properties. PUT always takes a result complement; other verbs do only
sometimes. According to my analysis of resultatives, 1-2 involve more
specific semantic properties than 3, so there's no overriding.

I accept that prototype effects are partly in the eye of the beholder.
But I do see prototype effects in language; just not in the definition
of dependencies.

--And.
