LDOM constructions seem to be attached to instances of rdfs:Class. Does that mean that any implementation of this language would need minimal RDFS inference capabilities?
Or, as in SPARQL, could this be an option?
Of course, most RDF data use classes, either RDFS or OWL. In our Mondeca software we have been using for over ten years now what we call "attribute constraints", which look very much like ldom:PropertyConstraint, so I'm perfectly fine with this. My point, and I think the point of people here (Karen, correct me if I am wrong), is that you might want validation rules that are not attached to a specific rdfs:Class, or even to a specific value of rdf:type. Hence the example given in my post. Don't interpret it as a proposal to get rid of, or reinvent, the semantics of rdf:type or rdfs:Class.
Granted, you provide LDOM constructs which are not attached to classes, such as global or template constraints. But they are defined using RDFS as a meta-language. http://www.w3.org/ns/ldom/core currently returns a 404, but I guess the intention is to make it available at some point as a reference vocabulary for LDOM classes and properties.
It will be an RDFS vocabulary, I suppose.
How will it define, for example, ldom:property? Certainly by something like:
ldom:property
    a rdf:Property ;
    rdfs:domain rdfs:Class ;
    rdfs:range ldom:PropertyConstraint .
This linkage means that every existing RDFS or OWL ontology could be repurposed and redefined using LDOM semantics, without breaking existing instance data.
That is where I am wondering. Do you expect this kind of repurposing to be standard, with any RDFS or OWL ontology having a unique, non-ambiguous LDOM interpretation?
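To make the repurposing question concrete, here is a sketch of what such a reading could look like. The ex: names are invented, and ldom:predicate, ldom:minCount and ldom:valueType are only my guesses at the draft's vocabulary:

    # Hypothetical sketch -- ex: names are invented, and the inner
    # ldom: property names are guesses at the draft vocabulary.
    ex:Book a rdfs:Class ;
        ldom:property [
            a ldom:PropertyConstraint ;
            ldom:predicate ex:title ;
            ldom:minCount 1 ;
            ldom:valueType xsd:string
        ] .

An existing class definition would thus keep its RDFS reading for inference, while a validator could read the same triples as constraints on instance data.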
> Coming up with a completely new terminology (e.g. an alternative to rdf:type) would basically create two parallel universes.
Indeed. I'm not making such a proposal.
> On your specific paper below, I agree that such patterns are very important. In fact, SPIN can be used to define such patterns (as classes) with ASK queries or templates.
Point taken. I certainly have to look more closely at SPIN.
> The goal of the group however is not discovery of the content of SPARQL end points, but rather a design that can be used to drive input forms, to validate instance data and similar use cases.
Indeed. I'm well aware of this, which is what we all need for our respective applications!
> A random SPARQL expression does not help much in those contexts, and instead we need a more structured vocabulary. That's why we have Templates and a controlled core set of templates that can be used via ldom:property etc.
Agreed. This amounts to saying there are two uses of patterns: descriptive (which patterns do you have in the data) and prescriptive (which patterns should you have in the data). LDOM is mostly (only) on the prescriptive side. My point is that you can define patterns in an agnostic way, able to support both uses.
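For instance, the same "has a title" pattern can serve both uses in plain SPARQL (a sketch with an invented ex: vocabulary):

    # Descriptive use: which resources exhibit the pattern?
    SELECT ?s WHERE { ?s ex:title ?title }

    # Prescriptive use: does a given node (ex:book1, an invented name)
    # fail the pattern? "true" here would mean a violation.
    ASK { FILTER NOT EXISTS { ex:book1 ex:title ?title } }

The pattern itself is neutral; only the way the query result is consumed makes it descriptive or prescriptive.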
> I do agree with your use case of using such pattern definitions for classification purposes, and to have hierarchies of such patterns. LDOM includes a function (currently called ldom:violatesConstraints(?node, ?class) : xsd:boolean) that takes a node and a class definition, and checks whether the node could potentially be an instance of that class.
Well, there again I think you are stuck in your class/subsumption paradigm (sorry), and you try to bring the example I gave back into this paradigm, although I tried hard to make clear that patterns can be used outside it.
> This is very similar to the classification done in OWL systems using owl:equivalentClass in that it compares the given node with all restrictions defined by the class. There are definitely interesting future opportunities in that space to "discover" data this way, i.e. it is possible to define a simple classification engine based on the above SPARQL function alone. Note that this classification can be triggered on any node, independent of whether it has an rdf:type triple or not.
Same remark as above. All that is interesting indeed, but really appealing only to people who, like you and me, have been breathing this stuff for ten years and more. But if you want to be understood by this community of people dealing with records, metadata, etc., in short "librarians", be aware that they have managed records for ages without any notion of class or subsumption. All they have are predicates (aka metadata terms). Type is originally a predicate (term) among others. Look closely at the original definition in DC Elements at http://purl.org/dc/elements/1.1/type. The definition has evolved towards an RDFS definition in DC Terms, with dcterms:type having rdfs:range rdfs:Class, but that does not mean people here have shifted completely to the Description Logics paradigm.
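The contrast is visible in the vocabularies themselves (a Turtle sketch, slightly simplified from the published definitions):

    # The original DC element: a bare predicate, no range constraint.
    dc:type a rdf:Property ;
        rdfs:label "Type" .

    # The DC Terms refinement formally ranges over rdfs:Class.
    dcterms:type a rdf:Property ;
        rdfs:range rdfs:Class .

The second definition is class-friendly, but nothing forces the people using dc:type to adopt the class paradigm that comes with it.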
And there is no reason they should. This community has made, and is making, considerable efforts to make the huge library legacy play nicely with the Semantic Web stack. The work done on coordination between the RDF AP and the W3C Shapes effort is great. But people around here have some very sound principles grounded in centuries of library science. If they frown, there are generally good reasons why :)