I'd suggest, as a point of interoperability principle, that it is always
better - whenever possible - to stay as close as possible to IEEE LOM, so
long as interoperable LOM instances remain the aim. I believe we should take
care to ensure that flexibility does not become the mutually exclusive enemy
of interoperability.
Using this approach, the order of preference of techniques for including
information would be:
1. use default LOM vocabularies, if appropriate
2. if not, use vocab/langstring pairs with alternative sources, as in the UK
LOM Core (ideally with machine-locatable and parsable sources, in VDEX format)
3. if not appropriate (either because the field does not exist in LOM, or
the field value needs to be hierarchical), use custom classifications, again
with locatable VDEX sources, perhaps with a custom “purpose” value in a
(locatable) vocabulary.
4. only if 1, 2 or 3 do not apply, use valid and properly namespaced
elements at the nearest appropriate LOM element, such as adding ODRL within
the rights section.
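To make technique 3 concrete, here is a minimal sketch (in Python, using the standard library's ElementTree) that builds a LOM classification element with a custom purpose and a taxon, where both sources point at locatable VDEX documents. The element names loosely follow the LOM XML binding, and the URLs and vocabulary values are entirely hypothetical:

```python
# Sketch of technique 3: a custom classification whose purpose vocabulary
# and taxonomy both point at (hypothetical) machine-locatable VDEX files.
import xml.etree.ElementTree as ET

def custom_classification(purpose_source, purpose_value,
                          taxon_source, taxon_id, taxon_entry):
    """Build a LOM <classification> element; the sources are VDEX locations."""
    cls = ET.Element("classification")
    purpose = ET.SubElement(cls, "purpose")
    ET.SubElement(purpose, "source").text = purpose_source
    ET.SubElement(purpose, "value").text = purpose_value
    path = ET.SubElement(cls, "taxonPath")
    src = ET.SubElement(path, "source")
    ET.SubElement(src, "string", language="en").text = taxon_source
    taxon = ET.SubElement(path, "taxon")
    ET.SubElement(taxon, "id").text = taxon_id
    entry = ET.SubElement(taxon, "entry")
    ET.SubElement(entry, "string", language="en").text = taxon_entry
    return cls

cls = custom_classification(
    "http://example.org/vdex/purposes.xml",  # hypothetical VDEX location
    "accessibility",                          # hypothetical custom purpose
    "http://example.org/vdex/sensory.xml",    # hypothetical VDEX location
    "vis-low",
    "Suitable for low-vision learners",
)
print(ET.tostring(cls, encoding="unicode"))
```

The point is that a consuming system which knows nothing about "accessibility" as a purpose can still fetch the declared VDEX documents and interpret the values mechanically.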
Though there is an inherent risk in LOM that the nicely generic
classification field becomes a dumping ground for all information that
cannot fit into other LOM fields, I'd still propose that this is a better
solution than using external fields, purely because it will aid systematic
interoperability. (A different way of looking at the problem would be to
point out that the classification field would be an adequate way of encoding
ALL LOM-type information - or that even simpler field=namespace:value pairs
could be used, but that leads us to RDF…)
A related point is that different people have different interpretations of
the term "application profile". Often one sees - within and without the IEEE
LOM world - the idea that an AP is an aggregation of disparate parts of
different schemas. Yet it is of course possible - and the UK LOM Core is an
example - for an AP to exist entirely within the boundaries of LOM, without
reference to external namespaces/schemas. Moreover, the UK LOM Core's usage
of particular vocabularies with alternative sources shows how much can be
expressed without leaving LOM.
In the end I guess it depends on one’s priority. Using an external schema –
whether slotted into an XML instance of a LOM record or not – may make a
record semantically precise, and clear to a human reader, or a software
system designed specifically to expect such an arrangement. That’s all very
well as far as it goes, and there may well be instances where that is
appropriate (and I would agree that rights information is one of them – the
last thing we need is a new LOM-specific DRM language, and ideally when that
does happen, it should be globalised, generalised, and formally incorporated
as far up the standards body tree as is politically possible).
But apart from those particular cases, to me the ‘externalise’ approach does
little to encourage true interoperability – that is, a transposition of
information between generically disparate systems that is effortless and
transparent to the end user.
For example, we've done much work applying metadata to learning objects,
describing information that can be used for selecting suitable resources for
PMLD learners, and used only valid, non-extended LOM fields for all of this
complex information. In Xtensis, all exported records expose the locations
of all used vocabularies and classifications within the relevant LOM fields,
with the VDEX interpretations of those vocabularies and classifications at
permanent locations. By doing this we haven't ensured complete automatic
semantic interoperability with other systems, but we think we've come far
closer to that aim than if we'd created (or reused, in custom ways) non-LOM
elements to represent the various dimensions of metadata we and our users
need to encode (sensory and cognitive accessibility, educational level,
competencies in many different curricula, language, learning style, etc.).
It's one generic mechanism, which could be adopted by anyone, rather than
inventing a new mechanism for each new bit of information.
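The "machine-locatable and parsable sources" part can be sketched like this: given a value and the VDEX document its source points at, a consumer can check that the value is a declared term. The VDEX snippet below is simplified and hypothetical (a real system would fetch the document from the source URL, and real VDEX files carry captions, descriptions, and possibly term hierarchies), and the namespace URI is my assumption of the IMS VDEX one:

```python
# Sketch: validating a metadata value against its declared VDEX source.
# The VDEX snippet is simplified/hypothetical; a real consumer would fetch
# the document from the vocabulary's source URL.
import xml.etree.ElementTree as ET

VDEX_NS = "{http://www.imsglobal.org/xsd/imsvdex_v1p0}"  # assumed namespace

SAMPLE_VDEX = """<vdex xmlns="http://www.imsglobal.org/xsd/imsvdex_v1p0">
  <term><termIdentifier>vis-low</termIdentifier></term>
  <term><termIdentifier>aud-low</termIdentifier></term>
</vdex>"""

def vdex_terms(vdex_xml):
    """Return the set of term identifiers declared in a VDEX document."""
    root = ET.fromstring(vdex_xml)
    return {t.findtext(f"{VDEX_NS}termIdentifier")
            for t in root.findall(f"{VDEX_NS}term")}

def value_is_valid(value, vdex_xml):
    """True if the value is a declared term in the vocabulary."""
    return value in vdex_terms(vdex_xml)

print(value_is_valid("vis-low", SAMPLE_VDEX))   # True
print(value_is_valid("vis-none", SAMPLE_VDEX))  # False
```

One generic check like this covers every vocabulary, however exotic, so long as the record exposes a resolvable source.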
P.S. In terms of our architecture, Xtensis includes an XML-based language
known as XtFlow which, among its several uses, allows application profile
validation. (We try not to presume that metadata instances are created
purely for compliance with a particular AP, but rather that any metadata
record may or may not comply with any number of APs at any point in its
development. This, we think, helps in creating systems that can both enforce
strict metadata policies within a system and cope intelligently with
incomplete records from disparate sources. That, we hope, is the key to true
transparent, effortless interop.)
XtFlow applies to LOM information in the form of an *abstract* information
model, rather than (as with the generic XML schema or XML document
validation tools available) validating a particular XML instance of a LOM
record in one of LOM's numerous binding versions. Although this precludes
validating schema extensions (they can still be included within Xtensis), it
does allow validation (and conditional batch processing) of LOM metadata in
LOM-semantic-native ways, rather than addressing specifics of the binding.
For example, an XtFlow rule could take a certain action (including failing a
validation report) by checking for the usage of a particular vocabulary in a
particular LOM field, rather than checking that, within an IMS LR-MD 1.3
instance, a vocablangstring element exists at a particular location with a
particular value for a source sub-element, etc.
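This is not XtFlow itself (its syntax isn't shown here), but the idea of a binding-independent rule can be sketched as follows. The record layout and field paths are illustrative assumptions: the record is treated as an abstract model (here, nested dictionaries), and the rule checks which vocabulary a field uses, not where an element sits in any particular XML binding:

```python
# Sketch of a binding-independent validation rule over an abstract LOM
# model (nested dicts). Not XtFlow -- the layout and paths are assumptions.

def check_vocabulary(record, field_path, required_source):
    """Rule: the field at field_path must use required_source as its vocabulary."""
    node = record
    for key in field_path.split("."):
        node = node.get(key, {}) if isinstance(node, dict) else {}
    return isinstance(node, dict) and node.get("source") == required_source

# An abstract (binding-independent) fragment of a LOM record:
record = {
    "educational": {
        "learningResourceType": {
            "source": "http://example.org/vdex/resource-types.xml",  # hypothetical
            "value": "simulation",
        }
    }
}

ok = check_vocabulary(record, "educational.learningResourceType",
                      "http://example.org/vdex/resource-types.xml")
print("valid" if ok else "failed")  # prints "valid"
```

The same rule works unchanged whichever binding the record arrived in, because it addresses the LOM semantics rather than the serialisation.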
Hope some of this helps in some way. Apologies for going on and on - I'm
clearly not writing enough whitepapers... :)
Best,
Robin Skelcey
Technical Director
Xtensis ltd.
[log in to unmask]
0779 3352391