Quoting "Diane I. Hillmann" <[log in to unmask]>:
> Yes, I understand that, but for most people the notion of
> "properties" is not particularly helpful in a training context. In my
> experience, it's important to deal with where folks are, not where
> you wish they were, particularly when attempting to teach them to do
> something new.
>
>> Also the conversation a few months back about dc:coverage and the
>> confusion over whether it merged together "about-ness" and
>> "applicability" (at least on the "spatial" side) highlighted the
>> problems which arise if the broad bucket approach is applied loosely.
>
> But this implies that we are entirely in control about how people
> understand and use metadata definitions, which we certainly aren't.
The issue here isn't - in the first instance - how implementers
(mis)use "our" terms: it's the principles "we" (the UB, WGs, other
designers of DCAPs) ourselves apply to constructing/creating/defining
them. At this point we _do_ have control.
> This is not to say that we shouldn't try to clarify some of the
> ambiguity of the past (and avoid it in future), but we do need to be
> realistic about what to expect from that exercise. Even if we had
> gone the other way on that decision, I don't think it would have
> materially changed what people did with the element.
I can only guess! But if there had been two properties that separated
out notions of "aboutness" and "applicability/validity", supported by
clear documentation and examples, and appropriate machine-processable
descriptions, it seems to me that there is a chance that they would
have been used as intended. And - the bottom line - my consuming
application would have been able to distinguish a course about Bristol
from a course that is applicable/available only to residents of Bristol.
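To make that concrete, here is a minimal sketch of what the distinction buys a consumer. The property names (ex:spatialTopic, ex:spatialApplicability) are invented for illustration only; they are not DCMI terms or actual proposals:

```python
# Hypothetical property names, invented for illustration -- not DCMI terms.
SPATIAL_TOPIC = "ex:spatialTopic"                # "aboutness"
SPATIAL_APPLICABILITY = "ex:spatialApplicability"  # "applicability/validity"

# Statements modelled as simple (subject, property, value) triples.
triples = {
    ("course:a", SPATIAL_TOPIC, "Bristol"),          # a course *about* Bristol
    ("course:b", SPATIAL_APPLICABILITY, "Bristol"),  # a course *for* Bristol residents
}

def subjects(triples, prop, value):
    """Return the resources linked to `value` by `prop`."""
    return {s for (s, p, o) in triples if p == prop and o == value}

# With two distinct properties the consumer can tell the cases apart:
print(subjects(triples, SPATIAL_TOPIC, "Bristol"))          # {'course:a'}
print(subjects(triples, SPATIAL_APPLICABILITY, "Bristol"))  # {'course:b'}
```

With a single dc:coverage property carrying both meanings, the two queries above would collapse into one and the distinction would be unrecoverable.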
>> A statement using the dc:coverage property and having a place as
>> value is more or less useless because I don't know if it means the
>> resource is "about" the place or "applicable to" the place.
>
> It is only useless if you require and expect the metadata you receive
> from others to be entirely predictable and consistent. My experience
> is that it's usually neither, so I've adjusted my expectations
> accordingly. I'd be happy if all the providers I worked with managed
> to put place names in Coverage (bonus if they spell them correctly!)
> and used an identifier that could get me to the resource.
OK, but we have to start from the position of defining properties that
we believe are:
(a) formulated in a way which is consistent with the DCAM;
(b) as fully, clearly and unambiguously defined as possible (both in
terms of human-readable documentation and machine-processable
assertions about relationships with other terms); and
(c) useful for the "functional requirements" they set out to address.
I accept that in practice people will use DCMI terms in ways we hadn't
predicted, and ways that result in nonsensical inferences. But I don't
think we should treat that as a design consideration (except as
something we have to do our best to avoid).
I don't understand an approach that places so much emphasis on not
defining new properties because it's bad for interoperability. If an
application needs to model a relationship type that is not adequately
covered by an existing property, it needs a new property. In terms
of interoperability, that is a _better_ solution than "stretching" an
existing property in ways that were never intended in the name of
"reuse". It seems to me that it's the latter approach which is giving
metadata consumers so many problems and damaging interoperability.
A consuming application encountering statements using an "unknown"
property can then choose either to ignore those statements or to make
use of information about that property to infer other statements,
which may result in "useful" data referencing "known" properties. That
is a better position for the consumer than having to deal with only a
small set of "known" properties while grappling with the fact that
they have been used in unpredictable and inconsistent ways.
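A rough sketch of that inference step, assuming the consumer holds a schema-level declaration that the unknown property is a sub-property of dc:coverage (the unknown property name here is hypothetical):

```python
# Assumption for illustration: the consumer knows, from a published
# schema, that the hypothetical property ex:spatialApplicability is
# declared an rdfs:subPropertyOf of dc:coverage.
SUB_PROPERTY_OF = {
    "ex:spatialApplicability": "dc:coverage",
}

# Incoming data using the "unknown" property.
data = {("course:b", "ex:spatialApplicability", "Bristol")}

def infer(triples, sub_property_of):
    """For each statement whose predicate is declared a sub-property of a
    known property, add a statement using the known super-property."""
    inferred = set(triples)
    for (s, p, o) in triples:
        if p in sub_property_of:
            inferred.add((s, sub_property_of[p], o))
    return inferred

# The consumer now holds a dc:coverage statement it can process:
print(("course:b", "dc:coverage", "Bristol") in infer(data, SUB_PROPERTY_OF))  # True
```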
> Your mileage may vary, of course ... ;-)
Going back to Andy's initial point, I do agree that the design choice
between a single property with multiple vocabulary encoding schemes
and multiple properties isn't clear-cut at all.
_However_ given the vocab encoding schemes currently proposed for the
adaptability property, I can see no useful single property that can be
defined to describe relationships between a resource and instances of
_all_ those classes.
With a different set of vocabulary encoding schemes - derived from a
different approach to modelling the problem space - that situation
might be different. But the choice of which properties/classes are
required should be based on a modelling of the problem space which
meets the functional requirements of the application, not on a
principle of trying to "squeeze everything into" a single property.
Pete
-------
Pete Johnston
Research Officer (Interoperability)
UKOLN, University of Bath, Bath BA2 7AY, UK
tel: +44 (0)1225 383619 fax: +44 (0)1225 386838
mailto:[log in to unmask]
http://www.ukoln.ac.uk/ukoln/staff/p.johnston/