Peter Johnson writes...
>........ The AI
>community made the initial mistake of focussing on representation, to their
>cost, but to their credit they realised that how you represent the knowledge
>(e.g. the knowledge about a patient) is irrelevant, it is working out what
>concepts, distinctions are needed that is important, and the relationships
>between them. They worked this out about 15 years ago, surely we can learn
>from them?
Representation methods *do* make a difference, but mainly affect how
efficiently you can manipulate information.
>Once we have defined (if it is possible) these standard concepts and their
>relationships, then we can represent, store and transmit them in myriad
>ways, and I don't care to argue the merits of different systems - different
>technologies are useful in different scenarios.
Absolutely right! The hard part is finding the most effective
method... but are there universal "standard concepts"? There is a lot in
the literature to suggest that conceptualisation is task- and
purpose-specific.
>So can I make yet another plea that we drop discussion of representation as
>irrelevant, and focus on the concepts and their semantics? This is the
>interesting area.
"Representation" has drifted in meaning, I think. It now commonly includes
the "contents" (semantic models, constraints, ontologies, etc., and
extensional knowledge) rather than just the means. But I think there is an
additional component:-
- what are you going to use the information for, and how do you know what
you need?
Andrzej Glowinski