From Wikipedia, the free encyclopedia

DOGMA, short for Developing Ontology-Grounded Methods and Applications, is the name of a research project in progress at Vrije Universiteit Brussel's STARLab, the Semantics Technology and Applications Research Laboratory. It is an internally funded project concerned with the more general aspects of extracting, storing, representing and browsing information.[1]

Technical introduction

DOGMA[2] is an ontology approach and framework that is not restricted to a particular representation language. Two characteristics distinguish it from traditional ontology approaches: (i) its grounding in the linguistic representation of knowledge,[3] and (ii) the methodological separation of the domain conceptualization from the application conceptualization, called the ontology double articulation principle.[4] The aim of this separation is to enhance the potential for re-use and design scalability. Conceptualisations are materialised in terms of lexons. A lexon is a 5-tuple declaring, in some context G, either:

  1. a taxonomical relationship (genus): e.g., < G, manager, is a, subsumes, person >;
  2. a non-taxonomical relationship (differentia): e.g., < G, manager, directs, directed by, company >.

A lexon can be regarded, approximately, as the combination of an RDF/OWL triple and its inverse, or as a relation in conceptual graph style (Sowa, 1984). The next section elaborates on the notion of context.
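The lexon structure above can be sketched in a few lines of code. This is an illustrative reading only, not STARLab's implementation; the field names and the `as_triples` helper are assumptions made for the example.

```python
from collections import namedtuple

# A lexon: within a context, term1 plays a role toward term2, and term2
# plays the inverse co-role back toward term1 (field names illustrative).
Lexon = namedtuple("Lexon", ["context", "term1", "role", "co_role", "term2"])

# Taxonomical lexon (genus): manager is-a person / person subsumes manager.
genus = Lexon("G", "manager", "is a", "subsumes", "person")

# Non-taxonomical lexon (differentia).
differentia = Lexon("G", "manager", "directs", "directed by", "company")

def as_triples(lexon):
    """Read a lexon as an RDF-style triple plus its inverse triple."""
    return [
        (lexon.term1, lexon.role, lexon.term2),
        (lexon.term2, lexon.co_role, lexon.term1),
    ]

print(as_triples(genus))
# [('manager', 'is a', 'person'), ('person', 'subsumes', 'manager')]
```

Reading a lexon out as a pair of triples, as above, is what makes the comparison with an RDF/OWL triple and its inverse concrete.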

Language versus conceptual level

Another distinguishing characteristic of DOGMA is the explicit duality (orthogonal to double articulation) in interpretation between the language level and the conceptual level. The aim of this separation is primarily to disambiguate the lexical representation of terms in a lexon (on the language level) into concept definitions (on the conceptual level), which are word senses taken from lexical resources such as WordNet.[5] The meaning of the terms in a lexon depends on the context of elicitation.[6]

For example, consider the term “capital”. If this term were elicited from a typewriter manual, it would have a different meaning (read: concept definition) than if it were elicited from a book on marketing. The intuition here is that a context is an abstract identifier that refers to the implicit and tacit assumptions of a domain, and that maps a term to its intended meaning (i.e., concept identifier) within those assumptions.[7]
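The “capital” example can be rendered as a plain lookup table. This is a minimal sketch of the idea, not DOGMA's mechanism: the context names and the WordNet-style sense identifiers below are invented for illustration.

```python
# A context as an abstract identifier: the pair (context, term) maps to
# the intended concept identifier. Sense ids below are illustrative,
# loosely in the style of WordNet sense keys.
CONTEXT_MAP = {
    ("typewriter_manual", "capital"): "capital#uppercase_letter",
    ("marketing_book", "capital"): "capital#financial_assets",
}

def intended_meaning(context, term):
    """Resolve a language-level term to a conceptual-level concept id."""
    try:
        return CONTEXT_MAP[(context, term)]
    except KeyError:
        raise KeyError(f"term {term!r} was not elicited in context {context!r}")

# The same term resolves to different concepts in different contexts.
print(intended_meaning("typewriter_manual", "capital"))
print(intended_meaning("marketing_book", "capital"))
```

The point of the sketch is that disambiguation happens per context, so the same language-level term can be linked to two distinct conceptual-level definitions without conflict.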

Ontology evolution

Ontologies naturally co-evolve with their communities of use. De Leenheer et al. (2007)[8] therefore identified a set of primitive operators for changing ontologies. These change primitives are conditional: whether a primitive is applicable depends on pre- and post-conditions,[9] which guarantees that only valid structures can be built.
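The shape of such a conditional change primitive can be sketched as follows. This is not STARLab's actual operator set; the operator, its conditions, and the set-of-pairs taxonomy encoding are assumptions chosen to illustrate the pre-/post-condition pattern.

```python
# Sketch of a conditional change primitive: the operation is applied only
# when its precondition holds, and the result is checked against a
# postcondition before it replaces the old structure.
def add_subsumption(taxonomy, child, parent):
    """Add a child-is-a-parent edge to a set of (child, parent) pairs."""
    # Precondition: no self-subsumption and no immediate two-node cycle.
    if child == parent or (parent, child) in taxonomy:
        raise ValueError("precondition violated: edge would create a cycle")
    updated = set(taxonomy) | {(child, parent)}
    # Postcondition: the new edge is present, and no pair occurs both ways.
    assert (child, parent) in updated
    assert all((p, c) not in updated for (c, p) in updated)
    return updated

taxonomy = {("manager", "person")}
taxonomy = add_subsumption(taxonomy, "employee", "person")  # valid change
```

Because an invalid application raises before any state is changed, a sequence of such primitives can only ever move the ontology between valid structures.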

Context dependency types

De Leenheer and de Moor (2005) distinguished four key characteristics of context:

  1. a context packages related knowledge: it defines part of the knowledge of a particular domain,
  2. it disambiguates the lexical representation of concepts and relationships by distinguishing between language level and conceptual level,
  3. it defines context dependencies between different ontological contexts and
  4. contexts can be embedded or linked, in the sense that statements about contexts are themselves in context.

Based on this, they identified three different types of context dependencies within one ontology (intra-ontological) and between different ontologies (inter-ontological): articulation, application, and specialisation. One particular example, in the sense of conceptual graph theory,[10] is a specialisation dependency whose dependency constraint is equivalent to the conditions for CG-specialisation.[11]
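A much-simplified version of a specialisation dependency check can be sketched as code. Note the simplification: full CG-specialisation also constrains relation types and graph structure (Sowa, 1984), whereas this sketch only tests term-level subsumption over an invented taxonomy.

```python
# Hypothetical taxonomy as a set of (child, parent) is-a pairs.
TAXONOMY = {("manager", "employee"), ("employee", "person")}

def subsumes(taxonomy, general, specific):
    """True if `general` subsumes `specific` through a chain of is-a edges."""
    if general == specific:
        return True
    parents = [p for (c, p) in taxonomy if c == specific]
    return any(subsumes(taxonomy, general, p) for p in parents)

def specialisation_holds(general_terms, specific_terms):
    """Simplified specialisation dependency: every term in the dependent
    context must specialise some term in the context it depends on."""
    return all(
        any(subsumes(TAXONOMY, g, s) for g in general_terms)
        for s in specific_terms
    )

print(specialisation_holds({"person"}, {"manager", "employee"}))  # True
print(specialisation_holds({"company"}, {"manager"}))             # False
```

Checking a dependency constraint mechanically like this is what makes inter-ontological dependencies tractable in practice: a violation can be detected as soon as either context changes.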

Context dependencies provide a better understanding of where knowledge elements reside and how they depend on one another, and consequently make meaning negotiation and application less vulnerable to ambiguity, hence more practical.

References

  1. "Welcome to VUB STARLab". Retrieved 2008-07-26.
  2. Jarrar, 2005; Jarrar et al., 2007; De Leenheer et al., 2007
  3. Jarrar, 2006
  4. See Jarrar, 2005; Jarrar et al., 2007
  5. Fellbaum, 1998
  6. De Leenheer and de Moor, 2005
  7. Jarrar et al., 2003
  8. De Leenheer et al., 2007
  9. Banerjee et al., 1987
  10. Sowa, 1984
  11. Sowa, 1984, p. 97

Further reading

  • Mustafa Jarrar: "Towards Methodological Principles for Ontology Engineering". PhD Thesis. Vrije Universiteit Brussel. (May 2005)
  • Mustafa Jarrar: "Towards the notion of gloss, and the adoption of linguistic resources in formal ontology engineering". In Proceedings of the 15th International World Wide Web Conference (WWW2006), Edinburgh, Scotland, pp. 497–503. ACM Press. ISBN 1-59593-323-9. May 2006.
  • Mustafa Jarrar and Robert Meersman: "Ontology Engineering -The DOGMA Approach". Book Chapter (Chapter 3). In Advances in Web Semantics I. Volume LNCS 4891, Springer. 2008.
  • Banerjee, J., Kim, W., Kim, H., and Korth, H. (1987). Semantics and implementation of schema evolution in object-oriented databases. Proc. ACM SIGMOD Conf. Management of Data, 16(3), pp. 311–322
  • De Leenheer P, de Moor A (2005). Context-driven disambiguation in ontology elicitation. In P. Shvaiko and J. Euzenat (eds), Context and Ontologies: Theory, Practice, and Applications. Proc. of the 1st Context and Ontologies Workshop, AAAI/IAAI 2005, Pittsburgh, USA, pp 17–24
  • De Leenheer P, de Moor A, Meersman R (2007). Context dependency management in ontology engineering: a formal approach. Journal on Data Semantics VIII, LNCS 4380, Springer, pp 26–56
  • Jarrar, M., Demey, J., Meersman, R. (2003) On reusing conceptual data modeling for ontology engineering. Journal on Data Semantics 1(1):185–207
  • Spyns P, Meersman R, Jarrar M (2002). Data modeling versus ontology engineering. SIGMOD Record, 31(4), pp 12–17