And Rosta wrote:
> Aye, but isn't Mark claiming that if you extend (1) beyond a certain
> degree, you forget what you said, and the model should capture this?
Yes.
> If not, then I wonder what counts as a long range dependency that the
> model shd exclude.
>
> WG imposes no absolute limits on how far fillers can be from gaps. And it
> imposes no absolute limits on the length of parent--dependent relations.
My take is that the best models will tend to do what humans tend to do.
If the typical human you're trying to model tends to forget the first part
of a dependency before the second part is uttered in a particular case,
then the model should forget it as well. Obviously this opens up an entire
universe of data that would have to be collected (and is collected to some
extent, by psych types) using methods other than traditional linguistic
discovery methods.
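Just to make the "forgetting" point concrete, here's a toy sketch (mine, not
anybody's actual model; the decay rate and threshold are invented numbers):
the filler's activation decays with every intervening word, and the dependency
is lost if activation has dropped below threshold by the time the gap arrives.

# Toy model: a filler's activation decays with each intervening word.
# If it falls below a threshold before the gap is reached, the
# dependency has been "forgotten". All numbers here are arbitrary.
DECAY = 0.8       # multiplicative decay per intervening word (invented)
THRESHOLD = 0.05  # below this, the filler is effectively gone (invented)

def dependency_survives(distance_in_words, decay=DECAY, threshold=THRESHOLD):
    """True if the filler is still active when the gap arrives."""
    activation = 1.0
    for _ in range(distance_in_words):
        activation *= decay
    return activation >= threshold

print(dependency_survives(3))   # True: short dependencies survive
print(dependency_survives(25))  # False: very long ones are forgotten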
WG and Lamb's neurocognitive linguistics are the only approaches I know of
that imply processing regimes realistic enough to warrant this kind of
performance data in the foreseeable future. Croft's model could probably be
transformed into a networky thing easily enough, but somebody would have to
start thinking a bit harder about processing ramifications at that point.
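Dick's "hopping" analysis quoted below can be put the same way. Here's a toy
sketch (again mine, not actual WG machinery; the verb list and window size are
just made up for the example): the extracted word is re-attached to each
successive verb, so every dependency is local, and the parser only ever holds
the filler plus a short window of the most recent words, with no unbounded
stack.

def hop_filler(filler, words, verbs, window_size=4):
    """Chain local filler-verb links while keeping only a small word window."""
    window = []   # the only memory carried along: the last few words
    links = []
    for word in words:
        window = (window + [word])[-window_size:]
        if word in verbs:
            links.append((filler, word))   # one purely local hop per verb
    return links, window

sentence = ("what do you think Mary told me that she'd heard "
            "Bill is planning to do").split()
links, window = hop_filler("what", sentence,
                           verbs={"do", "think", "told", "heard", "planning"})
print(links)   # ('what', 'do'), ('what', 'think'), ('what', 'told'), ...
print(window)  # only the last 4 words remain: ['is', 'planning', 'to', 'do']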
-- Mark
Mark P. Line
Bartlesville, OK
> On 26/06/2008 21:14, Richard Hudson wrote:
>> Dear And and Mark,
>>
>> I wondered whether to comment on this, so here goes. Long-distance
>> dependencies
>> are reduced in WG to a series of local dependencies, so there's not in
>> fact any
>> particular load on memory. For example:
>> (1) What do you think Mary told me that she'd heard Bill is planning to
>> do?
>> 1. /What/ depends on /do/, so it's one of the latter's properties and
>> it's no
>> further than /do/ is from /think/.
>> 2. /what/ depends on /think/, so ... /told/.
>> 3. /what/ depends on /told/, so ... /heard/.
>> and so on. No need for anything like a stack with unlimited capacity:
>> just the
>> ability to remember the last few (3 or 4) words. I suppose the same is
>> true of
>> any 'hopping' analysis of extraction.
>>
>> Best wishes, Dick
>>
>>>> WG, of course, allows very long range dependencies.
>>>>
>>>
>>> Yes, and I think I've complained to Dick about that. :)
>>>
>>> The underlying reason for this constraint is that the short-term
>>> activation of nodes is called short-term for a reason: the activation
>>> decays over time. A very-long-range dependency is of little use if
>>> you've
>>> forgotten the first part by the time you've gotten to the second part.
>>> So
>>> scientific models of syntax should limit the range of dependencies in a
>>> way that is not falsified by empirical data.
>>>
>>>
>>
>> --
>>
>> Richard Hudson, FBA. Emeritus Professor, University College London
>>
>> * My web page: www.phon.ucl.ac.uk/home/dick/home.htm
>> * Why I support the academic boycott of Israel:
>> www.phon.ucl.ac.uk/home/dick/home.htm#boycott
>> * My latest book: /Language Networks. The New Word Grammar/
>>
>>
>>
>
>