Hi Mikael, all,
I'm not entirely sure email is the best medium for describing this
process, but I will certainly try:
It's the Framework for metadata generation and management that I have a
particular interest in; the Terms and the Application Profile are in
some ways secondary, as they are generated with, and exist outside, the
Framework. It is important to realise that these are transitory,
whereas the Framework is (and should be) generally much more stable.
There is strong adherence to the JISC Information Environment, and this
formed the basis from which these methods of generating, managing,
exporting, importing, etc. metadata originated - i.e. the LOM, OAI-PMH,
Z39.50 and so on - but this is not an exclusive relationship: any
representation, transport or search mechanism could easily be plugged in.
There are two kinds of data. The first is unique instances of data
describing objects - title, URL and description are the three most
common examples, as these also form the basis of all metadata schemas
concerned with describing electronic resources (this isn't meant to be
definitive - maybe there are some out there that don't?) - but this
*could* be reduced to a unique identifier, with appropriate polymorphous
extensions applied, if anyone has a problem with that. The second is
repeating groups of data that can be associated with instances of
unique data: whether an entity is written by a particular author, or is
of a particular type, and so on - there are lots, and the majority of
the LOM is made up of this kind of data. By repeating groups I mean
that many resources can belong to the same group - i.e. many resources
will be of type X.
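To sketch the distinction in code (the class and field names here are
purely illustrative, mine rather than the actual implementation's):

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    """A unique instance of data describing one object."""
    identifier: str         # this alone could suffice, with the rest
    title: str = ""         # applied as polymorphous extensions
    url: str = ""
    description: str = ""

@dataclass
class Group:
    """A repeating group: many resources can belong to the same group."""
    name: str
    members: set = field(default_factory=set)  # resource identifiers

res = Resource("res-1", title="Recursion Tutorial",
               url="http://example.org/recursion")
type_x = Group("type X")
type_x.members.add(res.identifier)   # many resources will be of type X
```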
The Framework allows the creation of both types of data. The unique
entities are stored as 'resources'; the repeating groups are
represented by extensible hierarchies in the form of structures of
folders - very much like the representation of the file system and the
different kinds of file on your OS. These structures can be generated
to reflect any of the models available.
Resources are then copied and pasted around the tree of folders so that
relationships are created between the resource and any of the repeating
groups. (This functionality, the adaptation of the clipboard metaphor,
is my baby and the key to this solution).
This is how a resource can be in two (or more, even contradictory)
places - in several schemas - at the same time, and it is an excellent
way to experiment with aligning different schemas.
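A minimal sketch of the folder and clipboard idea (my own illustration
here, not the demonstrated code): pasting a resource into a folder
records a membership link, so the same resource can sit in several
hierarchies - even contradictory ones - at once.

```python
class Folder:
    """One node in an extensible hierarchy of folders."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.resources = set()   # identifiers of pasted resources
        if parent:
            parent.children.append(self)

    def paste(self, resource_id):
        """The clipboard metaphor: paste a copy of a resource here."""
        self.resources.add(resource_id)

    def path(self):
        """Full path from the hierarchy's root down to this folder."""
        parts, node = [], self
        while node:
            parts.append(node.name)
            node = node.parent
        return "/".join(reversed(parts))

# Two independent hierarchies (two schemas):
lom_type = Folder("exercise", Folder("learningResourceType", Folder("LOM")))
dc_type = Folder("InteractiveResource", Folder("type", Folder("DC")))

# The same resource pasted into both - it is now in two schemas at once:
lom_type.paste("res-1")
dc_type.paste("res-1")
```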
On top of this, relationships can also be defined between entities:
synonyms, antonyms, terms in different languages, etc.
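Such relationships can be sketched as typed links between entities (the
relation names below are just examples of mine):

```python
# Typed links between entities; relation names are illustrative only.
relations = set()
relations.add(("big", "synonym", "large"))
relations.add(("big", "antonym", "small"))
relations.add(("big", "translation-fr", "grand"))  # cross-language link

def related(entity, relation):
    """All entities linked to `entity` by `relation`."""
    return {o for (s, r, o) in relations if s == entity and r == relation}
```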
Erm, that's about it in a nutshell. It really isn't that difficult if
recursion is understood and implemented properly; I despair at how such
a simple concept can be so poorly understood sometimes, but that's
irrelevant...
The demonstration addressed one of the more troublesome aspects of the
LOM - nested elements, in particular the taxonpath element - and
provided a solution to the as yet largely avoided issue of post-harvest
marshalling of resources into the harvester's data store. (Am I wrong -
is work being conducted in this area?)
A working example of harvesting resources and correctly (and
automatically) filing LOM records was shown.
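To give a rough idea of the filing step (a sketch under my own
assumptions, not the demonstrated code): a taxonpath is itself a small
hierarchy, so marshalling a harvested record amounts to walking the
matching chain of folders, creating any that are missing, and filing
the record at the leaf.

```python
def new_folder():
    """A folder: subfolders plus the records filed directly in it."""
    return {"children": {}, "records": set()}

def file_record(root, taxonpath, record_id):
    """Walk (and create, where missing) the folder chain matching the
    taxonpath from a harvested LOM record, then file it at the leaf."""
    node = root
    for taxon in taxonpath:
        node = node["children"].setdefault(taxon, new_folder())
    node["records"].add(record_id)

store = new_folder()
file_record(store, ["Science", "Mathematics", "Recursion"], "lom-42")
```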
There's more, but I've probably gone on enough...
One major stumbling block most people seem to have with regard to this
solution is that they seem to think that the information in the LOM can
only be represented in the LOM format - it's just not true.
I should also thank the various people who have helped clarify this
process - you know who you are - but I would like to mention that Paul
Hollands has been particularly helpful, in that he realised the
connection between the folder metaphor and the RDF triple store
mechanism. I have yet to revisit my code to see how the reality of this
pans out, but my initial reaction is that it will work very well.
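For what it's worth, the way I currently picture that connection (this
mapping is my own sketch, not something worked through in code yet):
every "resource R is filed in folder F" fact is naturally a triple,
with the folder's branch supplying the predicate and its leaf the
object.

```python
def memberships_to_triples(memberships):
    """memberships: iterable of (resource_id, folder_path) pairs, where
    folder_path looks like 'LOM/learningResourceType/exercise'.
    Each membership becomes one (subject, predicate, object) triple."""
    triples = set()
    for resource, path in memberships:
        parts = path.split("/")
        predicate = "/".join(parts[:-1])  # the branch names the property
        obj = parts[-1]                   # the leaf names the value
        triples.add((resource, predicate, obj))
    return triples

triples = memberships_to_triples([
    ("res-1", "LOM/learningResourceType/exercise"),
    ("res-1", "DC/type/InteractiveResource"),
])
```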
Hope this helps
Steve
Mikael Nilsson wrote:
>
>
> Steve Richardson wrote:
>
>> Hi Mikael, all
>>
>> The following I absolutely agree with, and forms a basic principle in
>> the design of the solution I demonstrated in Newcastle last year.
>> Brilliant! You have paraphrased my contention wonderfully. Thank you! My
>> only question is - why is it off topic?
>>
>
> Thanks... it's sort of off-topic to the subject, but not for the list!
> I'd love to hear more about your demonstration. Could you expand a bit?
>
> /Mikael
>
>> Mikael Nilsson wrote:
>>
>>>
>>> Machine-processability is really the key to metadata. It is actually
>>> much less central to the definition of metadata what kinds of
>>> properties we may want to use. Some will be standardized across
>>> domains (legal metadata is maybe one such case), some will be
>>> community-specific, some will be locally defined and some will
>>> necessarily apply to a single kind of resource in a single context
>>> only. And we need for our metadata standards to *support* this
>>> heterogeneity.
>>>
>>> <off-topic>
>>> The way to do that, in my opinion, is to produce a useful abstract
>>> framework for metadata, and then let different communities fill that
>>> framework with their kind of metadata. Consensus building on metadata
>>> terms is a bottom-up process. I think this is where LOM currently
>>> fails: it tries to do a framework, a set of metadata attributes
>>> and a basic application profile in one standard. That is untenable in
>>> the long run. The three (framework, terms, application profile) need
>>> to be separate. Dublin Core gets this part right, incomplete as it may
>>> be.
>>> </off-topic>
>>>
>>> That is my current thinking on the issue... It will be interesting to
>>> see what gaping holes and glaring oversights you will (necessarily)
>>> find :-)
>>>
>>> /Mikael
>>>
>>>
>>>
>>
>