On 1/27/2015 20:39, Bernard Vatant wrote:
> LDOM constructions seem to be attached to instances of rdfs:Class. Does it mean that any implementation of this language would need minimal RDFS inference capacities?

The WG has not yet decided on the role of inferencing, but the current assumption is that the mode of inferencing depends on the SPARQL dataset that the queries are executed over. By default this would mean whatever inferencing plain SPARQL provides, i.e. none. I am personally not fond of relying on inferences for constraint definitions; it would require too many "modes" and make definitions less interchangeable.

> Or, like in SPARQL, could you have this as an option?

Possibly.

> Of course, most RDF data use classes, either RDFS or OWL. In our Mondeca software we have been using for over ten years now what we call "attribute constraints", which really look like ldom:PropertyConstraint, so I'm perfectly fine with this. My point, and I think the point of people here (Karen, correct me if I am wrong), is that you might want to have validation rules not attached to a specific rdfs:Class, or even to a specific value of rdf:type. Hence the example given in my post. Don't interpret it as a proposal to get rid of, or reinvent, the semantics of rdf:type or rdfs:Class.

> Granted, you provide LDOM constructions which are not attached to classes, like global or template constraints. But they are defined using RDFS as a meta-language. http://www.w3.org/ns/ldom/core is currently 404, but I guess the intention is to have it available at some point as a reference vocabulary for LDOM classes and properties.

Yes, a current snapshot of the LDOM system vocabulary is at

    https://w3c.github.io/data-shapes/data-shapes-core/ldom.ldom.ttl

although the names etc. will change and, again, this has absolutely no official status whatsoever, as the WG has so far only held an early straw poll.

> It will be an RDFS vocabulary, I suppose.

No, an LDOM vocabulary. The language is self-contained: every term such as ldom:minCount is backed by a SPARQL query, and even the system properties are defined in LDOM itself. See the file above for details (and open issues etc.).
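To illustrate what "backed by a SPARQL query" could mean in practice, here is a rough sketch of a violation check for ldom:minCount. This is not taken from the draft; it assumes a SPIN-style engine that pre-binds ?this (the focus node), ?predicate and ?minCount before executing the query:

    # Returns true if the focus node ?this has fewer values for ?predicate
    # than ?minCount, i.e. the constraint is violated. The subquery is used
    # so that a node with zero values still yields a count of 0.
    ASK WHERE {
        {
            SELECT (COUNT(?value) AS ?count)
            WHERE { ?this ?predicate ?value }
        }
        FILTER (?count < ?minCount)
    }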

> How will it define for example ldom:property? Certainly by something like

> ldom:property
>     a rdf:Property ;
>     rdfs:domain rdfs:Class ;
>     rdfs:range ldom:PropertyConstraint .

> This linkage means that every existing RDFS or OWL ontology could be repurposed and redefined using LDOM semantics, without breaking existing instance data.

> This is where I am wondering. Do you expect this kind of repurposing to be standard, with any RDFS or OWL ontology having a unique, non-ambiguous LDOM interpretation?

This is answered now: no RDFS is used, in particular no rdfs:domain, no rdfs:range, and no open-world semantics.

>> Coming up with a completely new terminology (e.g. an alternative to rdf:type) would basically create two parallel universes.

> Indeed. I'm not making such a proposal.

>> On your specific paper below, I agree that such patterns are very important. In fact, SPIN can be used to define such patterns (as classes) with ASK queries or templates.

> Point taken. I certainly have to look more closely at SPIN.

>> The goal of the group however is not discovery of the content of SPARQL endpoints, but rather a design that can be used to drive input forms, to validate instance data, and similar use cases.

> Indeed. I'm well aware of this; it is what we all need for our respective applications!

>> A random SPARQL expression does not help much in those contexts, and instead we need a more structured vocabulary. That's why we have Templates and a controlled core set of templates that can be used via ldom:property etc.

> Agreed. This amounts to saying there are two uses of patterns: descriptive (which patterns do you have in the data) and prescriptive (which patterns should you have in the data). LDOM is mostly (only) on the prescriptive side. My point is that you can define patterns in an agnostic way, able to support both uses.

I am not sure what you mean by "descriptive" here. Could you clarify?

>> I do agree with your use case of using such pattern definitions for classification purposes, and to have hierarchies of such patterns. LDOM includes a function (currently called ldom:violatesConstraints(?node, ?class) : xsd:boolean) that takes a node and a class definition, and checks whether the node could potentially be an instance of that class.

> Well, there again I think you are stuck in your class / subsumption paradigm (sorry) and you try to bring the example I gave back into this paradigm, although I tried hard to make clear that patterns can be used outside this paradigm.

No, but everybody seems to misunderstand this, so I need to work on better explanations or names. You can still define any "pattern" (some people prefer to call them "shapes") and then check whether a given node matches it. This is completely independent of the rdf:type triples and can be triggered by whatever means your application or protocol wants.
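For instance, here is a rough sketch of how an application might select all nodes in a dataset that match an ex:Person pattern, regardless of their rdf:type, using the ldom:violatesConstraints function mentioned above (the function name and the ldom: namespace are still provisional, and ex: is just a hypothetical application namespace):

    PREFIX ldom: <http://www.w3.org/ns/ldom/core#>
    PREFIX ex:   <http://example.org/ns#>

    # Classify nodes by pattern matching, not by rdf:type
    SELECT DISTINCT ?node
    WHERE {
        ?node ?p ?o .
        FILTER (!ldom:violatesConstraints(?node, ex:Person))
    }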

>> This is very similar to the classification done in OWL systems using owl:equivalentClass, in that it compares the given node with all restrictions defined by the class. There are definitely interesting future opportunities in that space to "discover" data this way, i.e. it is possible to define a simple classification engine based on the above SPARQL function alone. Note that this classification can be triggered on any node, independent of whether it has an rdf:type triple or not.

> Same remark as above. All that is indeed interesting, but really appealing only to people who, like you and me, have been breathing this stuff for ten years and more. But if you want to be understood by this community of people dealing with records, metadata etc., in short, "librarians", be aware that they have managed records for ages without any notion of class and subsumption. All they have are predicates (aka metadata terms). Type is originally a predicate (term) among others. Look closely at the original definition in DC Elements at http://purl.org/dc/elements/1.1/type. The definition has evolved towards an RDFS definition in DC Terms, with dcterms:type rdfs:range rdfs:Class, but that does not mean people here have completely shifted paradigm to Description Logics etc.

LDOM certainly has nothing to do with Description Logics! Just plain old classes and instances. Most people would understand it; only people with RDFS/OWL baggage complicate things. It's really simple. Having said this, people may want to run LDOM on top of a SPARQL store that has OWL inferencing activated, if that makes sense for their scenario. And OWL and LDOM class definitions can be mixed, i.e. you can have open-world and/or closed-world semantics, cleanly separated via different vocabularies.
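As a rough illustration of what such mixing might look like (the ldom: namespace and the property names other than ldom:property and ldom:minCount are my guesses and may well differ from the draft):

    @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix owl:  <http://www.w3.org/2002/07/owl#> .
    @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
    @prefix ldom: <http://www.w3.org/ns/ldom/core#> .   # namespace assumed
    @prefix ex:   <http://example.org/ns#> .

    ex:Person
        a owl:Class ;
        # open-world OWL axiom, only interpreted by an OWL reasoner
        rdfs:subClassOf [
            a owl:Restriction ;
            owl:onProperty ex:name ;
            owl:minCardinality "1"^^xsd:nonNegativeInteger
        ] ;
        # closed-world LDOM constraint, checked by the LDOM engine
        ldom:property [
            a ldom:PropertyConstraint ;
            ldom:predicate ex:name ;   # property name is a guess
            ldom:minCount 1
        ] .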

> And there is no reason they should. This community has made, and is making, considerable efforts to make the huge library legacy play nicely with the Semantic Web stack. The work done on coordination between the RDF AP and the W3C Shapes efforts is great. But people around here have some very sound principles grounded in centuries of library science. If they frown, there are generally good reasons why :)

To be honest, while I can see that librarians are interested in this technology, there are dozens of other communities that we also need to make happy. It cannot be the perfect language for everyone. But Karen is on the WG and is making sure that the requirements of DC Terms etc. are met as well as possible, and the fact that I am sitting here answering this email may also show you that we are taking all requirements seriously. As I stated elsewhere, one option is to attach global properties to rdfs:Resource, or to have their semantics declared as global constraints that apply to all occurrences of a property as predicate in any triple. This can be achieved via reusable templates too. See the end of the following (ongoing) thread for some ideas, and the sketch below the link for one possible reading:

https://lists.w3.org/Archives/Public/public-data-shapes-wg/2015Jan/0186.html
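To make the rdfs:Resource option concrete, here is one possible reading, again with guessed property names and a DC property chosen purely as an example; such a constraint would apply to every node in the data, whatever its rdf:type:

    @prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
    @prefix dcterms: <http://purl.org/dc/terms/> .
    @prefix ldom:    <http://www.w3.org/ns/ldom/core#> .   # namespace assumed

    rdfs:Resource
        ldom:property [
            a ldom:PropertyConstraint ;
            ldom:predicate dcterms:created ;   # example property
            ldom:maxCount 1                    # at most one creation date per resource
        ] .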

Regards
Holger