Andy said:
> No, dumb-down is (has to be!) that simple/blunt. By
> definition, the systems performing dumb-down don't understand
> the scheme, therefore they simply have to ignore it. If they
> did understand it, they wouldn't be doing dumb-down. ??
I read this last clause as meaning "they wouldn't need to be doing
dumb-down" but maybe the intended meaning is "the process they are doing
wouldn't be categorised as dumb-down"...?
I can imagine that if my system understands qualified DC, it may still
be necessary for that system to transform qualified DC instances into
simple DC instances e.g. for use elsewhere in a system which only
understands simple DC.
So my system _could_ dumb-down "intelligently" (i.e. with understanding
of schemes and proper handling of any lexical-value mapping involved),
rather than, ahem, "dumbly" (discarding scheme-related information).
I think the issue Roland is highlighting is that what I call
"intelligent dumb-down" here (which I think is what Roland calls
"simple" in his follow-up) and "dumb dumb-down" (Roland's "easy") may
produce different outputs.
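To make the distinction concrete, here is a purely illustrative sketch (not any real DC toolkit — the statement model and the scheme table are invented for the example). It assumes a hypothetical encoding scheme whose lexical form differs from its plain-text value, so the two flavours of dumb-down visibly diverge:

```python
# A qualified DC statement modelled, hypothetically, as
# (element, qualifier, scheme, value).

# Hypothetical lexical-value mapping for one encoding scheme:
# the stored lexical form "eng" stands for the value "English".
LEXICAL_TO_VALUE = {
    ("ISO639-2", "eng"): "English",
}

def dumb_dumbdown(element, qualifier, scheme, value):
    """'Dumb' dumb-down: ignore qualifier and scheme, keep the raw
    lexical form unchanged."""
    return (element, value)

def intelligent_dumbdown(element, qualifier, scheme, value):
    """'Intelligent' dumb-down: apply the scheme's lexical-value
    mapping (where one is known) before discarding the qualifier
    and scheme."""
    mapped = LEXICAL_TO_VALUE.get((scheme, value), value)
    return (element, mapped)

stmt = ("language", None, "ISO639-2", "eng")
print(dumb_dumbdown(*stmt))         # ('language', 'eng')
print(intelligent_dumbdown(*stmt))  # ('language', 'English')
```

The two functions return different simple-DC statements from the same qualified input, which is exactly the divergence at issue.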
But maybe "intelligent/simple dumb-down" isn't "dumb-down" as we have
known it.... ;-)
Pete