Nik:
> At 09:56 AM 3/9/00 +0000, you wrote:
> >Nik:
> >>It's not! But I've been looking at syntactic arguments for argument
> >>structure in LFG and have realised (dawn breaks over Marblehead) that the
> >>reason LFG has argument structure is that the c-structure-to-f-structure
> >>mapping relationships they permit don't allow a verb to assign more than
> >>one GR to any given valent. (This applies to valent GRs only, you can be
> >>SUBJ + Focus.) Bcs in LFG you can't have conflations of GRs, you have to
> >>represent GRs on 2 distinct levels: ergo you invent (syntactic) argument
> >>structure.
> >## This is interesting but I still haven't quite got it. Could you give an
> >example?
>
> WG permits the following:
>
> (1)  w1 <---s&o--- w2
>
> LFG doesn't:
> (talking about p1 & p2 bcs LFG has GFs between phrases, or words and
> phrases, but not between words)
>
> (2)  *p1 <---s&o--- p2
>
> The reason (2) is illegal in LFG is to do with c-structure-to-f-structure
> mapping. To quote Alsina (1996: 24) "a given c-structure node cannot map
> onto two different f-structures, because, in order to map onto the same
> c-structure node, the two f-structures would have to have the same index,
> which violates the Uniqueness of F-Structures".
I don't understand the implications of the quote, but I guess I'll have
to read Bresnan's book to properly grok it.
But anyway, although I dimly recall something reminiscent of this from
1990s LFG, 1980s LFG at least allowed (2) de facto, in that a lexical
rule could change OBJ into SUBJ, no? Is that no longer the case?
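To make the restriction concrete, here is a minimal sketch in Python of the effect being described. The GR inventory and the triple notation are invented for illustration; none of this is LFG's actual formalism, just the constraint's observable consequence:

```python
# Illustrative sketch (not LFG's actual formalism): the effect of the
# Uniqueness constraint on valent GRs. A dependency is written as a
# (head, relation, dependent) triple; the GR inventory is assumed.

VALENT_GRS = {"SUBJ", "OBJ", "OBJ2", "OBL"}  # assumed valent relations

def lfg_legal(deps):
    """Reject any word/phrase bearing two valent GRs to the same head,
    mirroring the ban on one c-structure node mapping onto two
    f-structures."""
    seen = {}
    for head, rel, dep in deps:
        if rel in VALENT_GRS:
            seen.setdefault((head, dep), set()).add(rel)
    return all(len(rels) <= 1 for rels in seen.values())

# (1)/(2): one valent bearing both SUBJ and OBJ to the same head
print(lfg_legal([("w2", "SUBJ", "w1"), ("w2", "OBJ", "w1")]))    # False
# SUBJ plus FOCUS is fine, since FOCUS is not a valent GR
print(lfg_legal([("p2", "SUBJ", "p1"), ("p2", "FOCUS", "p1")]))  # True
```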
> This principle only seems to apply to valents. (3) is legal.
>
> (3)  p1 <--s & focus-- p2
... suspiciously. How is this apparent inconsistency rationalized?
> Now the illegality of (2) is a problem in the analysis of certain
> constructions: unaccusatives, raisers, passives, and middles. In the case
> of all of these, a structure like (1) [for WG] or (2) [for LFG] would
> appear to be necessary, bcs there is good evidence that a single word (or
> phrase) is in more than one grammatical relationship with its head. One
> solution is to argue the toss, and to say that it isn't true for raisers
> and unaccusatives, hence the LFG arguments against (1)/(2) as an account of
> unaccusativity.
In the 90s. Simpson's classic resultative paper essentially argued for
(2). As argument structure was around then, I don't see how you can argue
that a-structure is a necessary consequence of the impossibility of (2).
> But this still leaves passives and middles. So here's what
> you do:
>
> a) you say the architecture of the grammar is as in (4). Each "structure"
> corresponds to a plane.
>
> (4) semantic structure
> | | |
> argument structure
> | | |
> functional structure
> | | |
> constituent structure
>
> The vertical lines represent correspondence rules.
>
> b) you allow mismatches between these planes.
>
> c) you say that functional structure and constituent structure represent
> what you find in utterances -- i.e. these are instantiations of the grammar.
>
> d) you say that argument structure is the representation of the verb's
> syntactic valents in the lexicon.
>
> e) bcs you have no derivations, you have a separate representation in the
> lexicon for the passive of a verb where you say something like "_driven_ is
> the passive form of _drive_, and it has the same argument structure as
> _drive_, but its functional structure realisations are different" [I'm a
> bit hazy on the details at this point bcs I want to use default inheritance
> which they don't allow in LFG].
>
> Then, instead of saying that passive verbs have a realisation like (1)/(2),
> you say that they involve a mismatch between argument structure and
> functional structure. In order to sustain these positions you have to argue
> that argument structure is *syntactic* (the thrust of most of Alsina's
> published work).
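The mismatch in (e) can be made concrete with a toy lexicon. The entries, the "arg1/arg2" labels, and the OBL-by realisation are all invented for illustration -- a sketch of the idea that _driven_ shares _drive_'s argument structure while mapping it to f-structure differently, not a rendering of any published LFG analysis:

```python
# Toy sketch of point (e): passive as a mismatch between argument
# structure and functional structure, with no derivation involved.
# All entry names and labels here are invented for illustration.

LEXICON = {
    "drive":  {"a_structure": ("arg1", "arg2"),
               "to_f": {"arg1": "SUBJ", "arg2": "OBJ"}},
    # Same argument structure as "drive"; only the mapping onto
    # f-structure differs (arg1 demoted to an oblique by-phrase).
    "driven": {"a_structure": ("arg1", "arg2"),
               "to_f": {"arg1": "OBL-by", "arg2": "SUBJ"}},
}

def f_realisation(verb):
    """Read off how each a-structure argument is realised at f-structure."""
    entry = LEXICON[verb]
    return {arg: entry["to_f"][arg] for arg in entry["a_structure"]}

print(f_realisation("drive"))   # {'arg1': 'SUBJ', 'arg2': 'OBJ'}
print(f_realisation("driven"))  # {'arg1': 'OBL-by', 'arg2': 'SUBJ'}
```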
And what is your view of all of this?
It all seems entirely correct to me, except for (a) the a priori prohibition
of conflations, and (b) the idea either that f-structure is distinct from
both of the other structures or, at least, that f-structure necessarily
mediates between a- and c-structure.
And I'm not at all sure that WG is different from this. I can't really
tell, because I don't know how WG gets from "dependency structure" to
"semantic structure".
> (I'm sorry if I've misrepresented anything here.)
>
> So it seems to me that the reason LFG has a-structure is that they don't
> permit structures like (1)/(2).
Sorry to keep banging on about this, but from your excellent description
(which accords with my understanding), LFG does allow this:
(5)  p1 <----x&y---- p2
where x & y are relations from a-structure or f-structure. Certain
combos it excludes -- valent combos and (iirc) a-relation combos.
But basically the picture is that LFG allows conflation in principle
but excludes certain sorts. I don't agree with excluding the conflations,
but the important point here is (a) that the essential difference from
WG is the exclusion of certain sorts of conflation, and (b) that one does
not have to see LFG as importing, in the form of a-structure, a totally
unnecessary level of structure.
Going back to my message that started this thread, my question is really
about how sure we are that we need f-structure, for a given language.
> They go on to be uncertain about what they
> think semantic structure looks like -- some of them like Levin and
> Rappaport; some of them like Jackendoff; some of them like Dowty.
Is semantic structure the same thing as conceptual structure? For them,
I mean. The fact that some linguists answer that with a Yes and others
with a No causes no end of perplexity.
Which are the relevant bits of Levin/Rappaport and of Dowty that they
like? And why should they take a position on what they think semantic
structure looks like? Do they think they ought to?
> These
> authors make wildly different predictions about the nature of semantic
> structures, and there is no default set of assumptions in LFG about the
> nature of semantic representations (that I've noticed).
>
> This position on a-structure is most clearly worked out in a conference
> paper by Bresnan dating to 1995.
Which was that?
> Where WG is more successful is in having a much smaller set of ontological
> categories that it has to work with: relationships are basic, as are
> categories.
What are LFG's additional ontological categories?
> You may form a view on whether (1)/(2) is monostratal or not.
> But it probably is, because there is no derivation or movement assumed.
I agree it's monostratal. If I'm correct that LFG could once upon a time,
in some guise or other, render (1)/(2) by means of a lexical rule, then
it would seem to me to be polystratal in a nonstandard but real sense,
because an X relation converted into a Y relation would contrast with
a Y relation converted into an X relation. Thus, X>Y contrasting with X<Y,
as opposed to a monostratal X&Y. This generalizes to lexical rules in
general: there is the possibility of the ordering of application of
rules making a difference. In some cases, as when affixation is involved,
this is clearly correct, but in other cases it's not clearly correct
(e.g. would there be a difference between a middle formed from a progressive
and a progressive formed from a middle, and if there is no observable
difference is the unrealized potential for a difference a defect of the
analysis?).
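The ordering point can be sketched with toy lexical rules treated as functions on a GR assignment (the rules and labels are invented; the point is only that function composition is non-commutative, so X>Y and Y>X are distinct objects in a way a single declarative X&Y is not):

```python
# Toy lexical rules as functions on a list of GR labels. Composing
# them in different orders gives different results, so a rule-based
# account is sensitive to ordering where a declarative X&Y is not.

def obj_to_subj(grs):
    """Rule converting every OBJ into SUBJ (invented for illustration)."""
    return ["SUBJ" if g == "OBJ" else g for g in grs]

def subj_to_obj(grs):
    """Rule converting every SUBJ into OBJ (invented for illustration)."""
    return ["OBJ" if g == "SUBJ" else g for g in grs]

print(obj_to_subj(subj_to_obj(["SUBJ", "OBJ"])))  # ['SUBJ', 'SUBJ']
print(subj_to_obj(obj_to_subj(["SUBJ", "OBJ"])))  # ['OBJ', 'OBJ']
```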
> It
> is simply declaratively stated that relationships x & y both hold between
> words 1 & 2.
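For comparison, the declarative statement just quoted amounts to no more than a set of relation triples: both relations hold at once, with no ordering between them. A sketch with invented notation:

```python
# WG-style declarative statement: relations s and o both hold between
# w2 and w1, represented as an unordered set of triples (so X&Y, with
# no X>Y versus Y>X distinction). Notation invented for illustration.

relations = {("w2", "s", "w1"), ("w2", "o", "w1")}

def relations_between(head, dep):
    """Collect every relation declared between a head and a dependent."""
    return {rel for h, rel, d in relations if h == head and d == dep}

print(sorted(relations_between("w2", "w1")))  # ['o', 's']
```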
This was a hugely educational message! Thanks! I hope you'll find the time
to answer my quezzies.
--And.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%