DC-ARCHITECTURE Archives (DC-ARCHITECTURE@JISCMAIL.AC.UK), September 2003

Subject: Re: Dublin Core Abstract Model
From: "Thomas G. Habing" <[log in to unmask]>
Reply-To: DCMI Architecture Group <[log in to unmask]>
Date: Mon, 8 Sep 2003 22:19:15 -0500

Andy Powell wrote:

>On Fri, 29 Aug 2003, Pete Johnston wrote:
>
>
>
>>>In your scenario, the 'value dumb-down' was essentially
>>>throwing away all the 'rich values'. However, it is
>>>conceivable that a dumb-down processor with some knowledge of
>>>different 'rich value' types could successfully convert some
>>>of those 'rich values' into 'string values'.
>>>
>>>
>>I also like your emphasis that what one processor treats as
>>"complex"/rich values, another processor may treat as "simple"/value
>>strings. (I think I read something by Roland making a similar point, but
>>I can't locate it just now.)
>>
>>This suggests that somewhere in Andy's document maybe we need the notion
>>of "a DC metadata processor" or "a DC dumb-down engine", but I haven't
>>thought that through.
>>
>>
>
>In the past I have always tended to think of 'dumb' engines when I thought
>about the concept of dumb-down because that seemed to be the more common
>scenario on the Web.  For example, a general search service gathering
>metadata from lots of relatively unknown sources and having to deal as
>best it can with relatively unknown metadata. The idea of 'intelligent
>dumb-down' implies to me some sort of specific conversion service that
>only works in-front of a known set of sources?  However, I suppose that in
>many cases there will be a mix of known and unknown stuff in the metadata
>that is harvested, so there will need to be a mix of intelligent and dumb
>approaches.
>
This probably reflects all the OAI'ing I've been doing, but much of the
dumb-down (d-d) I deal with is of the more 'intelligent' variety.  I
often start out with rich metadata that I have created myself or that I
understand pretty well, and then d-d to simple DC (oai_dc) for use with
OAI, so I suppose my d-d is just on the dumb side of dumb and dumber.  :-)

It might be useful to take the idea of intelligent and dumb d-d and
apply it to the idea of value d-d and element d-d, as in:

                       The Dumb-down Matrix

                     |  Intelligent  |   Dumb
        -------------+---------------+---------
        Element D-D  |       I       |    II
        -------------+---------------+---------
        Value D-D    |      III      |    IV
        -------------+---------------+---------

Case I -- Intelligent Element Dumb-down (has a ring to it :-).  Every
element (even one from a non-DC namespace) that can be reduced to one of
the more generic, irreducible DC elements is reduced.  This has to be
based on some knowledge of the elements to be reduced, either coded into
the processor or derived from external semantic schemas or other
sources.  The reduction of elements in the DC namespaces must conform to
the rules in the DC semantic schemas (property/sub-property relationships).
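
Something like this Python sketch is roughly what I have in mind for
Case I (the sub-property table and the non-DC element name are made up
purely for illustration, not taken from any real schema):

    # Hypothetical sub-property knowledge: refined or non-DC elements mapped
    # to the irreducible DC element they reduce to.  The non-DC entry
    # ("marc:title") is purely illustrative.
    SUB_PROPERTY_MAP = {
        "dcterms:created":  "dc:date",
        "dcterms:abstract": "dc:description",
        "marc:title":       "dc:title",   # known non-DC element (Case I only)
    }

    IRREDUCIBLE_DC = {"dc:title", "dc:creator", "dc:subject", "dc:description",
                      "dc:date", "dc:type", "dc:identifier"}

    def intelligent_element_dumbdown(record):
        """Case I: reduce every element we have knowledge of, DC or not."""
        out = []
        for element, value in record:
            if element in IRREDUCIBLE_DC:
                out.append((element, value))                    # already simple DC
            elif element in SUB_PROPERTY_MAP:
                out.append((SUB_PROPERTY_MAP[element], value))  # reduce it
            # else: cannot be reduced -- dropped here; see the ISSUE below
        return out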

Case II -- Dumb Element Dumb-down:  Only elements from the DC namespaces
are eligible for reduction.  The reduction must be based on the DC
semantic schema.
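
Case II is the same loop restricted to the DC namespaces; a sketch, in
the same made-up terms as above:

    def dumb_element_dumbdown(record):
        """Case II: only elements from the DC namespaces are eligible."""
        out = []
        for element, value in record:
            if element in IRREDUCIBLE_DC:
                out.append((element, value))
            elif element.startswith("dcterms:") and element in SUB_PROPERTY_MAP:
                out.append((SUB_PROPERTY_MAP[element], value))  # DC refinement
            # non-DC elements fall through (dropped here; see the ISSUE
            # below about preserving them instead)
        return out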

ISSUE:  The question remains with either of the above two cases as to
what should be done with non-DC elements that cannot be reduced.  They
can either be dropped altogether or they can be preserved, possibly with
the assumption that some down-stream processor might be able to reduce
them further or find them useful in some way.  I guess I would be
inclined to drop them from the record, the assumption being that the
only reason for performing dumb-down is that you know simple DC is the
only metadata required for further processing.  If you plan on passing
the metadata record to unknown down-stream processors, you probably
shouldn't be doing dumb-down at all before sending the data unless the
down-stream processor requires it, in which case it is advertising that
it can't handle non-simple DC to begin with.

Case III -- Intelligent Value Dumb-down:  A processor is allowed to
dumb-down rich values based on the processor's explicit (encoding scheme)
or implicit knowledge of the value's type.  The processor may perform
type conversions, parse embedded related metadata to extract appropriate
strings for use as the simple value, perform lookups in thesauri or
ontologies, or fetch linked related metadata to effect the dumb-down.
Values that cannot be dumbed down are dropped.  If this results in an
element with an empty value, that element is also dropped from the record.
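
As a sketch, assuming the record now carries an optional encoding-scheme
qualifier alongside each value (the scheme handling and the thesaurus
lookup are hypothetical, just to show the shape of it):

    from datetime import date

    def lookup_preferred_label(term_id):
        """Hypothetical thesaurus/ontology lookup (a real processor might
        query LCSH or similar); the single entry is purely illustrative."""
        return {"example-term-id": "Metadata"}.get(term_id)

    def intelligent_value_dumbdown(record):
        """Case III: convert rich values to simple strings where the type
        is known; drop values (and then empty elements) otherwise."""
        out = []
        for element, value, scheme in record:
            if isinstance(value, str):
                out.append((element, value))              # already a simple string
            elif scheme == "W3CDTF" and isinstance(value, date):
                out.append((element, value.isoformat()))  # typed date -> string
            elif scheme == "LCSH":
                label = lookup_preferred_label(value)
                if label:
                    out.append((element, label))          # resolved to a string
            # anything else cannot be dumbed down and is dropped
        return out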

Case IV -- Dumb Value Dumb-down:  Any rich values that cannot be
expressed as a string are dropped from the record.  If this results in
an element with an empty value, that element is also dropped from the
record.  Processors should be liberal in their conversion of rich values
to strings.  For example, if the value refers to related metadata via a
URL, the URL itself becomes the dumbed-down value.  If the rich value is
structured text with markup, the dumb processor can treat the whole
thing as a blob of text.
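
In the same assumed record layout, Case IV collapses to something like:

    def dumb_value_dumbdown(record):
        """Case IV: keep whatever is already a string (including URLs that
        point at related metadata, and marked-up text treated as one blob);
        drop every other rich value, and so drop elements left empty."""
        out = []
        for element, value, _scheme in record:
            if isinstance(value, str):
                out.append((element, value))
        return out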

On both the element and value axes, processors could probably run the
entire spectrum between intelligent and dumb.  One processor could be
very intelligent with a particular class of DC records and dumb with
another.

Related to the above:  What is a DC record?  I guess I would argue for
an inclusive definition.  From the perspective of a DC dumb-down
processor, it is any record that contains at least one element that is,
or can be reduced to, an element in the DC namespace.  This would
include records that contain only one DC element out of a hundred.  For
a processor that supports 'Intelligent Element Dumb-down' it might also
include records that contain no elements in the DC namespace at all
until after some intelligent element dumb-down.  With this definition,
if the dumb-down processor were intelligent enough, it would consider a
MARC record a DC record.  Hmmm...  Maybe a bit much?
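
In terms of the made-up tables from the Case I sketch, the test would be
roughly:

    def is_dc_record(record):
        """A record is a DC record if at least one element is, or can be
        reduced to, an element in the DC namespace."""
        return any(element in IRREDUCIBLE_DC or element in SUB_PROPERTY_MAP
                   for element, *_ in record)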

I've wasted another perfectly good evening pondering metadata when I
could have been saving the video game universe from the Borg  :-)  Time
for my evening video game bout.

Regards,
    Tom Habing
