Tangential to the substance of Stefan's algorithm and demo, I'd like
to raise a point of language:
As in discussions of the "default value" model in the past, we have
slipped into talking about the process of reading that default value
(or DumbDown value) as "dumbing down".
I would like to suggest that we find another term for this. In current
usage, "dumbing down" means "ignoring the qualifiers". The idea -- or
principle -- is that an application will not know all of the qualifiers in
use and so should always be able to just ignore them and still use an
unqualified literal for discovery. The principle can also be used as a
conceptual test of the appropriateness of a qualifier.
Just like any other property value for a Dublin Core statement, the
value of an rdf:value (or rdfs:label) will itself always be subject to
qualification (e.g., "<rdf:value>Europe -- History</rdf:value>" as a
Library of Congress subject heading). And it will therefore also be
subject to "dumbing down" -- e.g., ignoring "LCSH" and processing
"Europe -- History" as a generic value for Subject.
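To make the LCSH example concrete, here is a minimal sketch in Python. The dict structure and key names are purely illustrative -- not any actual DC encoding -- but they show what "dumbing down" does: the qualifier is simply ignored and the bare literal is kept.

```python
# A qualified Subject statement, modeled as a plain dict.
# The structure and key names are illustrative only.
statement = {
    "property": "Subject",
    "value": "Europe -- History",
    "scheme": "LCSH",  # encoding-scheme qualifier
}

def dumb_down(stmt):
    """Ignore all qualifiers and return the bare (property, literal) pair."""
    return stmt["property"], stmt["value"]

print(dumb_down(statement))  # ('Subject', 'Europe -- History')
```

An application that has never heard of "LCSH" can still do discovery on the unqualified literal, which is exactly the principle being described.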
If the process of "reading" or "extracting" [1] the "default value" of
an INTNODE is _also_ called "dumbing down", then it bothers me that we
are not differentiating between two processes which, though not entirely
unrelated, are conceptually quite different -- and that makes them that
much harder to distinguish in our explanations.
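The distinction can be sketched as follows (again a hypothetical model: the INTNODE is represented as a nested dict, and the key names merely echo the discussion, not a proposed encoding). Reading the default value selects the rdf:value from the structured value and loses no qualifier; dumbing down discards the qualifier altogether.

```python
# A structured value (INTNODE) for Subject, modeled as a nested dict.
# Key names follow the discussion; the structure itself is illustrative.
intnode = {
    "rdf:value": "Europe -- History",
    "scheme": "LCSH",
    "rdfs:label": "Europe -- History",
}

def read_default_value(node):
    """Process 1: read the INTNODE's default value. The qualifier
    ('scheme') still applies to the result; nothing is thrown away."""
    return node["rdf:value"]

def ignore_qualifiers(node):
    """Process 2: dumbing down proper -- drop the qualifier and keep
    only a generic value for the property."""
    return {"rdf:value": node["rdf:value"]}

value = read_default_value(intnode)   # still an LCSH heading
generic = ignore_qualifiers(intnode)  # qualifier gone; a generic Subject value
```

The point is that the first operation is just navigation within the data model, while the second deliberately loses information -- which is why conflating the two names seems unhelpful.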
I would prefer we say "reading the default value".
Tom
[1] Stu wrote:
>stu >>> I believe that Tom's grammar paper essentially proposes a data
>model for DC metadata, but does not, nor should it, constrain an encoding of
>that data model. I don't see a conflict here, as long as a sensibly written
>parser extracts a "metadata sentence" that is consistent with the grammar.
_______________________________________________________________________________
Dr. Thomas Baker [log in to unmask]
GMD Library
Schloss Birlinghoven +49-2241-14-2352
53754 Sankt Augustin, Germany fax +49-2241-14-2619