Are we only talking about Linked Data or are we talking about information modeling? DCAP is a documentation model for describing an information ecosystem and DCAM is its formal abstract 'domain' model, or should be. Whether or not that model results in Linked Data is beside the point, isn't it?

Jon,
who just found this and had to paste it here:

Endless invention, endless experiment,
Brings knowledge of motion, but not of stillness;
Knowledge of speech, but not of silence;
Knowledge of words, and ignorance of the Word...

Where is the Life we have lost in living?
Where is the wisdom we have lost in knowledge?
Where is the knowledge we have lost in information?
 -- T. S. Eliot, Choruses from 'The Rock'



On Wed, Feb 15, 2012 at 11:26 AM, Karen Coyle <[log in to unmask]> wrote:
Hmm. In terms of analogies, I would equate DCAP with a data dictionary, not DCAM. To me, a data dictionary lists the actual metadata elements you will use; it is not an abstract definition of the possible structures. DCAM seems to be closer to the idea of "design patterns."

I don't see how something can be linked data if it doesn't have certain characteristics:
- HTTP URIs
- subjects, predicates, and objects (whether or not the serialization spells them out as triples; RDF/XML and Turtle are examples that do not)
- subjects and predicates constrained to be URIs; objects constrained differently (which DCAM would address)

It's possible that the JSON examples in that blog post met these criteria (I didn't perceive URIs for the predicates, but maybe I don't read JSON well).
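For concreteness, here's a rough sketch (loosely JSON-LD-flavored; the subject and object URIs are made up, and the property URIs are the Dublin Core terms) of JSON that would meet those criteria:

  {
    "@id": "http://example.org/book/123",
    "http://purl.org/dc/terms/title": "An Example Title",
    "http://purl.org/dc/terms/creator": { "@id": "http://example.org/person/456" }
  }

The subject and the predicates are HTTP URIs; the object of each statement can be either another URI or a literal, and that is exactly the kind of distinction a DCAM-level constraint could address.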

kc


On 2/15/12 7:58 AM, Jon Phipps wrote:
Re: "Underneath it all you still have to have something that expresses
valid triples, n'est-ce pas?"

Actually, my point here is that there are many data serializations, models,
and use cases for creating, validating, and distributing metadata, and many
of them (NoSQL stores, for example) have no notion of triples, although many
of them do have a notion of domain-specific validity and some form of
distribution. RDF is extremely useful for distributing metadata in an Open
World context, but it is hardly the only data model, and hardly the only
method of distributing useful metadata.

We need to provide, or at least try to provide, a specification that makes
it possible for an organization to describe how it expects the 'things' it
knows about to be described: which properties are valid, what constitutes
valid data, and what each property means. In the old days this model was
called a 'data dictionary', and it's an incredibly useful concept in a
world of distributed heterogeneous data. Providing a way for someone to
create a single 'data dictionary' that can be used (preferably by a
machine) to create validations for domain-specific data, and that can be
used by anyone (again, preferably a machine) in the organization, or in
the wider world, to understand the meaning of that data across
departmental, organizational, or national boundaries would be
incredibly and fundamentally useful.
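As a rough sketch only (the property names and constraints below are invented for illustration, not proposed terms), a single machine-usable 'data dictionary' entry might look something like this in JSON Schema:

  {
    "description": "How this organization expects a 'book' to be described",
    "type": "object",
    "properties": {
      "title": {
        "type": "string",
        "description": "The title of the resource; exactly one is expected"
      },
      "creator": {
        "type": "string",
        "format": "uri",
        "description": "HTTP URI identifying the creator"
      },
      "subject": {
        "type": "array",
        "items": { "type": "string" },
        "description": "Repeatable subject terms"
      }
    },
    "required": ["title"]
  }

The same document can drive validation by a machine and also tell a human (or another machine) what each property is supposed to mean, which is the dual role the old data dictionary played.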

If we say that RDF is the ONLY useful way to do this, then we might as well
go back to "DCAM is just RDF".

Jon

I check email just a couple of times daily; to reach me sooner, click here:
http://awayfind.com/jonphipps


On Wed, Feb 15, 2012 at 9:47 AM, Karen Coyle<[log in to unmask]>  wrote:

What *does* seem to be core in this blog post is the use of http URIs for
values. I'd add to that: properties defined with http URIs, so you know
what you are describing. Although you can serialize all of this in JSON if
you wish, it means that you have started with LD concepts, not the usual
JSON application. Underneath it all you still have to have something that
expresses valid triples, n'est-ce pas?

kc


On 2/15/12 5:49 AM, Jon Phipps wrote:

I've been doing some wandering around in JSON land for the last few days
and, as part of a continuing observation that RDF is an implementation
detail rather than a core requirement, I'd like to point to this post from
James Snell
http://chmod777self.blogspot.com/2012/02/mostly-linked-data.html

And the JSON Schema spec: http://json-schema.org/

Jon,
who may someday get his act together and pay attention to these meetings
more than a couple of hours before the meeting.

On Tuesday, February 14, 2012, Thomas Baker<[log in to unmask]>   wrote:

On Tue, Feb 14, 2012 at 05:25:17PM -0500, Tom Baker wrote:

--  that DCAM should be developed using a test-driven approach, with
    effective examples and test cases that can be expressed in various
    concrete syntaxes.


Jon suggested that we take Gordon's requirements for metadata record
constructs [1] as a starting point.  As I understand them, these are:

--  the ability to encode multicomponent things (which in the cataloging
    world happen to be called "statements", as in "publication statement"
    and "classification statement") in any of the following ways (see the
    sketch after this list):

    -- as unstructured strings, or
    -- as strings structured according to a named Syntax Encoding Scheme, or
    -- as Named Graphs with individual component triples

--  the ability to express the repeatability of components in such
    "statements"


--  the ability to designate properties as "mandatory", or "mandatory if
   applicable", and the like

--  the ability to constrain the cardinality of "subsets of properties"
   within a particular context, such as the FRBR model

--  the ability to express mappings between properties in different
    namespaces.


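To make the first requirement above concrete, here is a rough, purely
illustrative JSON sketch of the three options; the field names, the SES
name, and the graph URI are all invented for the example:

  {
    "publicationStatementAsString": "Anytown : Example Press, 2012",

    "publicationStatementAsSESString": {
      "ses": "ExamplePublicationSES",
      "value": "Anytown : Example Press, 2012"
    },

    "publicationStatementAsComponents": {
      "graph": "http://example.org/graph/pubstmt-1",
      "placeOfPublication": "Anytown",
      "publisherName": "Example Press",
      "dateOfPublication": "2012"
    }
  }

The first option keeps the statement opaque, the second names the Syntax
Encoding Scheme that governs the string, and the third breaks the statement
into components that could be expressed as individual triples in a Named
Graph.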
It has also been suggested that we find examples of real metadata instance
records from different communities and contexts -- e.g., libraries,
government, industry, and biomed -- for both testing and illustrating DCAM
constructs.

Tom

[1]

https://www.jiscmail.ac.uk/cgi-bin/webadmin?A2=ind1202&L=dc-architecture&P=6405


--
Tom Baker<[log in to unmask]>



--
Karen Coyle
[log in to unmask] http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet



--
Karen Coyle
[log in to unmask] http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet