What are the practical uses of ontologies?

I have read many papers and books about ontologies, and I am trying to figure out how they are used in a real project. For example, how can an ontology for a soccer-playing robot be defined and used with a cognitive architecture in order to make the robot intelligent?

Are ontologies relations between terms in that domain of knowledge (for example, the relation between the words ball and foot, the definitions of physical rules, and how those rules relate to the movement of the foot and the ball), or are they relations between tactics, strategies and different combinations of tactics?

Are there any clear examples of ontologies used in real projects, including their use in combination with cognitive architectures like ACT-R to augment the architecture?


Interesting question. "Ontology" is often used in confusing and polyvalent ways, so let's start by quickly clearing up the terminology for those who aren't familiar with its various meanings.

What does "ontology" mean?

Broadly, ontology the field is the philosophical study of being. An ontology is a method for establishing what beings or entities may exist (cf. epistemology, which asks whether we should believe they exist), how they may be grouped, and the relations they may have.

In information and computer science, "ontology" is used in a related but not identical sense, to refer to formally defined sets of types, properties and the relationships between them. To answer your question, then: relations compose part of an ontology, but an ontology is not merely a set of relations and nothing else. From the Wikipedia page:

An ontology is a description (like a formal specification of a program) of the concepts and relationships that can formally exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy. -- Tom Gruber, Toward Principles for the Design of Ontologies Used for Knowledge Sharing.

(Gruber updated the definition in 2007, but the new version is less a revision than an extension.)
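Read this way, an information-science ontology is, at minimum, a set of concepts plus named relations over them. Here is a minimal sketch in Python, using the soccer-robot domain from the question; every concept and relation name below is invented for illustration and not taken from any real ontology:

```python
# Minimal sketch of an information-science ontology: a set of concepts
# and a set of (subject, predicate, object) relation triples.
# All names are illustrative only.
ontology = {
    "concepts": {"Ball", "Foot", "Kick", "Player"},
    "relations": {
        ("Player", "has-part", "Foot"),
        ("Kick", "involves", "Foot"),
        ("Kick", "acts-on", "Ball"),
    },
}

def related(onto, concept):
    """Return every (predicate, object) pair whose subject is `concept`."""
    return {(p, o) for (s, p, o) in onto["relations"] if s == concept}

print(related(ontology, "Kick"))  # the relations grounding the 'Kick' concept
```

Even this toy structure shows the answer to the question above: the relations are part of the ontology, but the typed concept set is equally essential.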

Ontologies and ACT-R

ACT-R defines a basic ontology whose entities are modules, buffers and pattern matchers, with ways to group entities within each type (e.g., perceptual-motor vs. memory modules) and the relations between them.

We use ontologies to abstract away from particular data structures. The ACT-R cognitive architecture both implies and specifies an ontology, and we can use that ontology to extend ACT-R, or to make ACT-R talk to other architectures which may share no data structures with it. Oltramari et al. (2014) provide an apt concrete ("real"?) example: they coupled ACT-R to the SCONE knowledge-base system, using concrete information-science ontologies to extend ACT-R with abstract philosophical ontologies.

In general, ACT-R models only employ as much knowledge as required to perform well-defined cognitive tasks. By and large, they can be seen as “monadic” agents, whose knowledge bases are limited, partially reusable and sporadically portable across experimental conditions. On the contrary, in order to replicate high-level contextual reasoning and pattern recognition in humans, a large amount of common-sense knowledge should be available to ACT-R: to overcome these limitations, we propose to equip ACT-R with a specific module for processing ontologies [philosophical sense], i.e. semantic specifications of a given domain or application (Guarino, 1998) [information science sense], which are generally used in combination with inference engines for deductive reasoning. Since the ACT-R declarative module supports a relatively coarse-grained semantics based on slot-value pairs, and the procedural system is not optimal to effectively manage complex logical constructs, a specific extension is needed to make ACT-R suitable to fulfill knowledge-intensive cognitive tasks like context-driven spatial reasoning. Accordingly, we engineered an extra module as a bridging component between the cognitive architecture and an external knowledge-base system (KBS), SCONE (Fahlman, 2006).
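The contrast the passage draws, between ACT-R's coarse slot-value chunks and ontological knowledge a reasoner can chain over, can be made concrete. The following is a plain-Python sketch: the chunk mimics ACT-R's declarative slot-value format but is not real ACT-R code, and the is-a links are invented examples in the spirit of the spatial-reasoning task:

```python
# An ACT-R declarative chunk is a flat bag of slot-value pairs --
# coarse-grained semantics, no axioms, no inference.
chunk = {"isa": "building", "location": "grid-4-2"}

# An ontology adds typed links an inference engine can chain over.
# This toy is-a map stands in for what a SCONE-style KBS provides.
is_a = {"building": "structure", "structure": "physical-object"}

def ancestors(concept, hierarchy):
    """Walk the is-a chain upward: the deduction a chunk alone lacks."""
    chain = []
    while concept in hierarchy:
        concept = hierarchy[concept]
        chain.append(concept)
    return chain

print(ancestors(chunk["isa"], is_a))  # ['structure', 'physical-object']
```

A bridging module in the spirit of the quote would perform this lookup in the external KBS and hand the result back to ACT-R as ordinary chunks.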

References

  • Oltramari, A., Vinokurov, Y., Lebiere, C., Oh, J., & Stentz, A. (2014, March). Ontology-Based Cognitive System for Contextual Reasoning in Robot Architectures. In 2014 AAAI Spring Symposium Series.

Ontologies relevant to behaviour change interventions: a method for their development

Background: Behaviour and behaviour change are integral to many aspects of wellbeing and sustainability. However, reporting behaviour change interventions accurately and synthesising evidence about effective interventions is hindered by lacking a shared, scientific terminology to describe intervention characteristics. Ontologies are standardised frameworks that provide controlled vocabularies to help unify and connect scientific fields. To date, there is no published guidance on the specific methods required to develop ontologies relevant to behaviour change. We report the creation and refinement of a method for developing ontologies that make up the Behaviour Change Intervention Ontology (BCIO). Aims: (1) to describe the development method of the BCIO and explain its rationale; (2) to provide guidance on implementing the activities within the development method. Method and results: The method for developing ontologies relevant to behaviour change interventions was constructed by considering principles of good practice in ontology development and identifying key activities required to follow those principles. The method's details were refined through application to developing two ontologies. The resulting ontology development method involved: (1) defining the ontology's scope; (2) identifying key entities; (3) refining the ontology through an iterative process of literature annotation, discussion and revision; (4) expert stakeholder review; (5) testing inter-rater reliability; (6) specifying relationships between entities; and (7) disseminating and maintaining the ontology. Guidance is provided for conducting relevant activities for each step. Conclusions: We have developed a detailed method for creating ontologies relevant to behaviour change interventions, together with practical guidance for each step, reflecting principles of good practice in ontology development. The most novel aspects of the method are the use of formal mechanisms for literature annotation and expert stakeholder review to develop and improve the ontology content. We suggest the mnemonic SELAR3, representing the method's first six steps: Scope, Entities, Literature Annotation, Review, Reliability, Relationships.

Keywords: behaviour; behaviour change; evaluation studies; evidence synthesis; interventions; ontologies.

Copyright: © 2020 Wright AJ et al.

Conflict of interest statement

No competing interests were disclosed.

Figures

Figure 1. The Behaviour Change Intervention Ontology v1.4 (Michie et al., 2020).

Figure 2. Extract from the Intervention Setting Ontology (Norris et al., 2020b).


Introduction

Controlled terminologies and ontologies are indispensable for modern biomedicine [1]. Ontology was historically restricted to philosophical inquiry into the nature of existence, but logicians at the turn of the 20th century translated the term into a precise representation of knowledge using statements that highlight essential qualities, parts and relationships [2]. In the early 1970s, explicit approaches to knowledge representation emerged in artificial intelligence [3], and in the 1990s were christened ontologies in computer science [4]. These representations were promoted as stable schemas for data, a kind of object-oriented content, to facilitate data sharing and reuse. Ontologies have since been used intensively for research in biomedicine, astronomy, information science and many other areas. Biomedical scientists use ontologies to encode the results of complex experiments and observations consistently, and analysts use the resulting data to integrate and model system properties. In this way, ontologies facilitate data storage, sharing between scientists and subfields, integrative analysis, and computational reasoning across many more facts than scientists can consider with traditional means.

In addition to their computational utility, key biomedical ontologies serve as lingua francas: they allow numerous researchers to negotiate and agree on central, domain-specific concepts and their hierarchical interrelations. Concepts commonly modeled with ontologies include organismal phenotypes [5]–[7] and gene functions in genetics and genomics [1], [8]; signs, symptoms and disease classifications in medicine [9]; and species, niche names and inter-species relations in ecology and evolution [10]. Building an ontology in any of these areas faces similar challenges: lack of an external standard that defines the most critical concepts and concept linkages for the ontology's proposed function; vast numbers of aliases referring to the same concept; and no yardstick with which to compare competing terminologies. This paper considers scientific ontologies generally and then develops a framework and validates a family of measures that helps to overcome these challenges.

Proper ontologies, group ontologies and free text

The word ontology historically represented the product of one person's philosophical inquiry into the structure of the real world: What entities exist? What are their properties? How are they grouped and hierarchically related?

While this original definition still holds in philosophy, the computational interpretation of an ontology is a data structure typically produced by a community of researchers through a procedure that resembles the work of a standards-setting committee or a business negotiation (L. Hunter, 2010, personal communication). To agree on the meaning of shared symbols, the process involves careful utility-oriented design. The collective ontologies that result are intended to be used as practical tools, such as to support the systematic annotation of biomedical data by a large number of researchers. A standard domain-specific ontology used in the sciences today includes a set of concepts representing external entities; a set of relations, typically defined as the predicates of statements linking two concepts (such as cat is-an animal, cat has-a tail); and a taxonomy or hierarchy defined over the concepts, composed of the union of those relations. An ontology may also explicitly represent a set of properties associated with each concept and rules for these properties to be inherited from parent to child concept. Furthermore, formal ontologies sometimes incorporate explicit axioms or logical constraints that must hold in logical reasoning over ontology objects.
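The components just listed, concepts, relations, a hierarchy, and inherited properties, can be sketched directly. A toy Python version using the cat example above (the property sets are invented for illustration):

```python
# Toy ontology: relation triples, an is-a hierarchy implicit in them,
# and properties inherited from parent to child concept.
relations = {
    ("cat", "is-a", "animal"),
    ("cat", "has-a", "tail"),
    ("animal", "is-a", "organism"),
}
# Properties attached to concepts; names are illustrative only.
properties = {"organism": {"alive"}, "animal": {"mobile"}}

def parents(c):
    """Concepts that `c` is-a, i.e. its direct parents in the taxonomy."""
    return [o for (s, p, o) in relations if s == c and p == "is-a"]

def inherited(c):
    """Union of a concept's own properties and all ancestor properties."""
    props = set(properties.get(c, set()))
    for p in parents(c):
        props |= inherited(p)
    return props

print(inherited("cat"))  # cat inherits 'mobile' and 'alive' from ancestors
```

Note that the taxonomy here is simply the subset of relations whose predicate is is-a, exactly as the text describes.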

In practice, what different research groups mean by the term ontology can range from unstructured terminologies, to sets of concepts and relations without complete connection into a hierarchy, to taxonomies, to consistent, formal ontologies with defined properties and logical constraints.

An ontology developed by a group represents a glimpse into the specific worldviews held within that group and its broader domain. By the same logic, we can consider the union of all published articles produced by a scientific community as a much more complete sample of scientific worldviews. While a research team that writes a joint paper agrees on its topic-specific worldview to some extent, its collective domain ontology is neither explicitly defined, nor free from redundancy and contradiction. Insofar as scientists communicate with each other and respond to prior published research, however, these worldviews spread and achieve substantial continuity and homogeneity [11]. A large collection of scientific documents therefore represents a mixture of partially consistent scientific worldviews. This picture is necessarily complicated by the flexibility and imprecision of natural language. Even when scientists agree on specific concepts and relations, their corresponding expressions often differ, as the same meaning can be expressed in many ways.

Nevertheless, if we accept that the published scientific record constitutes the best available trace of collective scientific worldviews, we arrive at the following conclusion: Insofar as an ontology is intended to represent knowledge within a scientific domain, it should correspond with the scientific record. Moreover, an ontology would practically benefit from evaluation and improvement based on its match with a corpus of scientific prose that represents the distribution of its (potential) users' worldviews.

Previous work on ontology evaluation

Previously proposed metrics for ontology evaluation can be divided into four broad categories: measures of an ontology's (1) internal consistency, (2) usability (or task-based performance), (3) comparison with other ontologies, and (4) match to reality. While this review is necessarily abbreviated, we highlight the most significant approaches to ontology evaluation.

Metrics of an ontology's internal consistency are nicely reviewed by Yu and colleagues [12]. They especially highlight: clarity, coherence, extendibility, minimal ontological commitment, and minimal encoding bias [4]; competency [13]; and consistency, completeness, conciseness, expandability, and sensitiveness [14]. The names of these metrics suggest their purposes. For example, conciseness measures how many unique concepts and relations in an ontology have multiple names. Consistency quantifies the frequency with which an ontology includes concepts that share subconcepts and the number of circularity errors.
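As a hedged illustration, the two examples just given can be sketched in code. The definitions below are simplified guesses at the flavour of these measures, not the published formulas; the sample data is invented, and the hierarchy assumes a single parent per concept:

```python
# Sketch of two internal-consistency style metrics.
# `aliases`: concept id -> set of names; `is_a`: child -> parent.
aliases = {"c1": {"heart attack", "myocardial infarction"}, "c2": {"fever"}}
is_a = {"a": "b", "b": "c", "c": "a"}  # deliberately circular

def conciseness(aliases):
    """Fraction of concepts carrying a single name (1.0 = fully concise)."""
    single = sum(1 for names in aliases.values() if len(names) == 1)
    return single / len(aliases)

def circularity_errors(is_a):
    """Count concepts that can reach themselves by following is-a links."""
    errors = 0
    for start in is_a:
        seen, node = set(), start
        while node in is_a and node not in seen:
            seen.add(node)
            node = is_a[node]
        if node == start and start in seen:
            errors += 1
    return errors

print(conciseness(aliases), circularity_errors(is_a))  # 0.5 3
```

Each of the three concepts in the cycle a -> b -> c -> a is flagged, which matches the intuition that the whole cycle is erroneous.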

Measurements of an ontology's usability [15]–[17] build on empirical tools from cognitive science that assess the ease with which ontologies can be understood and deployed in specific tasks [18]. Results from such studies provide concrete suggestions for improving individual ontologies, but they are also sometimes used to compare competing ontologies. For example, Gangemi and colleagues [19] described a number of usability-profiling measures, such as presence, amount, completeness, and reliability, that assess the degree to which parts of an ontology are updated by ontologists. The authors also discuss an ontology's “cognitive ergonomics”: an ideal ontology should be easily understood, manipulated, and exploited by its intended users.

Approaches to ontology comparison typically involve (1) direct matching of ontology concepts and (2) comparison of the hierarchical arrangement of those concepts, often between an ontology computationally extracted and constructed from text and a reference or “gold standard” ontology built by experts. Concept comparison draws on the information retrieval measures of precision and recall [12], [20], [21], sometimes called term [22] or lexical precision and recall; see the Materials and Methods section below for precise definitions. Matching ontology terms, however, raises challenging questions about the ambiguity of natural language and the imperfect relationship between terms and the concepts that underlie them. Some ignore these challenges by simply assessing precision and recall on the perfect match between terms. Others deploy string similarity techniques like stemming or edit distance to establish a fuzzy match between similar ontology terms [23], [24].
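On exact term match, lexical precision and recall reduce to set overlap. A minimal sketch, with invented term sets:

```python
# Lexical precision/recall on exact term match between a learned
# ontology's term set and a gold-standard term set.
learned = {"cat", "dog", "feline", "tabl"}        # 'tabl' is a noise term
gold = {"cat", "dog", "feline", "animal", "tail"}

def precision(learned, gold):
    """Share of learned terms that also appear in the gold standard."""
    return len(learned & gold) / len(learned)

def recall(learned, gold):
    """Share of gold-standard terms recovered by the learned ontology."""
    return len(learned & gold) / len(gold)

print(precision(learned, gold), recall(learned, gold))  # 0.75 0.6
```

The fuzzy-match variants mentioned above would replace the exact set intersection with a stemmed or edit-distance-based match.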

The second aspect of ontology matching involves a wide variety of structural comparisons. One approach is to measure the Taxonomic Overlap, or intersection between sets of super- and subconcepts associated with a concept shared in both ontologies, then averaged across all concepts to create a global measure [23]–[25]. Another uses these super and subconcept sets to construct asymmetric taxonomic precision and recall measures [26], closely related to hierarchical precision and recall [27], [28]. A similar approach creates an augmented precision and recall based on the shortest path between concepts [29] or other types of paths and a branching factor [30]. An alternate approach is the OntoRand index that uses a clustering logic to compare concept hierarchies containing shared concepts [31]. The relative closeness of concepts is assessed based on common ancestors or path distance, and then hierarchies are partitioned and concept partitions are compared.
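As a hedged sketch of the flavour of these structural measures: the version below compares, for each shared concept, only its direct super- and subconcepts (the cited work uses full transitive sets), averaging a Jaccard-style overlap across shared concepts. All hierarchy data is invented:

```python
# Simplified Taxonomic Overlap: compare a shared concept's direct
# hierarchy neighbourhood in two ontologies, then average.
def neighbourhood(concept, is_a):
    """Direct superconcepts and subconcepts of `concept` in one hierarchy."""
    supers = {o for (s, o) in is_a if s == concept}
    subs = {s for (s, o) in is_a if o == concept}
    return supers | subs

def taxonomic_overlap(shared, is_a_1, is_a_2):
    """Mean Jaccard overlap of neighbourhoods over the shared concepts."""
    scores = []
    for c in shared:
        n1, n2 = neighbourhood(c, is_a_1), neighbourhood(c, is_a_2)
        union = n1 | n2
        scores.append(len(n1 & n2) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

is_a_1 = {("cat", "animal"), ("dog", "animal"), ("animal", "organism")}
is_a_2 = {("cat", "animal"), ("animal", "thing")}
print(taxonomic_overlap({"animal"}, is_a_1, is_a_2))  # 0.25
```

The asymmetric taxonomic precision/recall measures cited above replace the symmetric Jaccard score with directional ratios against one ontology treated as reference.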

Approaches for matching an ontology to reality are more diverse and currently depend heavily on expert participation [12]. For example, Missikoff and colleagues [32] suggested that an ontology's match to reality be evaluated by measuring each ontology concept's “frequency of use” by experts in the community. Missikoff and colleagues' ultimate goal was to converge to a consensus ontology negotiated among virtual users via a web-interface. Smith [33] recommended an approach to ontology evolution which rests on explicitly aligning ontology terms to unique entities in the world studied by scientists. Ontology developers would then be required to employ a process of manual tracking, whereby new discoveries about tracked entities would guide corresponding changes to the ontology. In a related effort, Ceusters and Smith suggested studying the evolution of ontologies over time [34]: they defined an ontology benchmarking calculus that follows temporal changes in the ontology as concepts are added, dropped and re-defined.

A converse approach to matching ontologies with domain knowledge appears in work that attempts to learn ontologies automatically (or with moderate input from experts) from a collection of documents [35]–[38] using machine learning and natural language processing. The best results (F-measure around 0.3) indicate that the problem is extremely difficult. Brewster and colleagues [36], [39] proposed (but did not implement) matching concepts of a deterministic ontology to a corpus by maximizing the posterior probability of the ontology given the corpus. In this framework, alternative ontologies can be compared in terms of the posterior probability conditioned on the same corpus. Their central idea, which shares our purpose but diverges in detail, is that “the ontology can be penalized for terms present in the corpus and absent in ontology, and for terms present in the ontology but absent in the corpus” (see also [19]). Each of these approaches to mapping ontologies to text faces formidable challenges associated with the ambiguity of natural language. These include synonymy (multiple phrases with the same meaning); polysemy (identical expressions with different meanings); and other disjunctions between the structure of linguistic symbols and their conceptual referents.
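The quoted penalty idea can be illustrated with a deliberately simplified scoring function. This is a sketch of the flavour of the proposal, not Brewster and colleagues' probabilistic formulation; the weights and term sets are invented:

```python
# Score an ontology against a corpus vocabulary: reward shared terms,
# penalize coverage gaps and bloat (terms unused in the corpus).
def fit_score(ontology_terms, corpus_terms, alpha=1.0, beta=1.0):
    """Higher is better; alpha and beta weight the two mismatch kinds."""
    missing_from_ontology = corpus_terms - ontology_terms  # coverage gap
    unused_in_corpus = ontology_terms - corpus_terms       # bloat
    return (len(ontology_terms & corpus_terms)
            - alpha * len(missing_from_ontology)
            - beta * len(unused_in_corpus))

corpus = {"tumour", "lesion", "biopsy", "margin"}
small = {"tumour", "lesion"}
bloated = small | {"phlogiston", "aether", "humour"}
print(fit_score(small, corpus), fit_score(bloated, corpus))  # 0.0 -3.0
```

Note how the bloated ontology scores worse despite covering the corpus equally well, capturing the parsimony concern raised later in this section.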

In summary, among the several approaches developed to evaluate an ontology's consistency, usability, comparison and match to reality, metrics that evaluate consistency are the most mature of the four and have inspired a number of practical applications [40]–[42]. The approach that we propose and implement here belongs to the less developed areas of matching ontologies to each other and to discourse in the world. Among approaches that compare ontologies to each other, existing metrics jump from the comparison of individual concepts to the comparison of entire concept hierarchies without considering intermediate concept-to-concept relationships. This is notable because discourse typically only expresses concepts and concept relationships, and so the measures we develop will focus on these two levels in mapping ontologies to text.

Our purpose here is to formally define measures of an ontology's fit with respect to published knowledge. By doing this we attempt to move beyond the tradition of comparing ontologies by size and relying on expert intuitions. Our goal is to make the evaluation of an ontology computable and to capture both the breadth and depth of its domain representation—its conceptual coverage and the parsimony or efficiency of that coverage. This will allow us to compare and improve ontologies as knowledge representations. To test our approach, we initially analyzed four of the most commonly used medical ontologies against a large corpus of medical abstracts. To facilitate testing multiple ontologies in reference to multiple domains we also analyzed seven synonym dictionaries or thesauri (legitimate if unusual ontologies [43]) and compared their fit to three distinctive corpora: medical abstracts, news articles, and 19th-century novels in English.

Medical ontologies

Medical ontologies have become prominent in recent years, not only for medical researchers but also for physicians, hospitals and insurance companies. Medical ontologies link disease concepts and properties together in a coherent system and are used to index the biomedical literature, classify patient disease, and facilitate the standardization of hospital records and the analysis of health risks and benefits. Terminologies and taxonomies characterized by hierarchical inclusion of one or a few relationship types (e.g., disease_conceptx is-a disease_concepty) are often considered lightweight ontologies and are the most commonly used in medicine [44], [45]. Heavyweight ontologies capture a broader range of biomedical connections and contain formal axioms and constraints to characterize entities and relationships distinctive to the domain. These are becoming more popular in biomedical research, including the Foundational Model of Anatomy [46] with its diverse physical relations between anatomical components.

The first widely used medical ontology was Jacques Bertillon's taxonomic Classification of Causes of Death, adopted in 1893 by the International Statistical Institute to track disease for public health purposes [47]. Five years later, at a meeting of the American Public Health Association in Ottawa, the Bertillon Classification was recommended for use by registrars throughout North America. It was simultaneously adopted by several Western European and South American countries and updated every ten years. In the wake of Bertillon's death in 1922, the Statistics Institute and the health section of the League of Nations drafted proposals for new versions and the ontology was renamed the International List of Causes of Death (ICD). In 1938 the ICD widened from mortality to morbidity [48] and was eventually taken up by hospitals and insurance companies for billing purposes. At roughly the same time, other ontologies emerged, including the Quarterly Cumulative Index Medicus Subject Headings, which eventually gave rise to the Medical Subject Headings (MeSH) that the NIH's National Library of Medicine uses to annotate biomedical research literature [49], [50]. By 1986 several medical ontologies were in wide use and the National Library of Medicine began the Unified Medical Language System (UMLS) project in order to link many of them to facilitate information retrieval and integrative analysis [51]. By far the most frequently cited ontology today in biomedicine is the Gene Ontology (GO), a structurally lightweight taxonomy begun in 1998 that now comprises over 22,000 entities biologists use to characterize gene products [52].

Thesaurus as ontology

We propose to further test and evaluate our ontology metrics using the fit between a synonym dictionary or thesaurus and a corpus. A thesaurus is a set of words (concepts) connected by synonymy and occasionally antonymy. Because synonymy constitutes an is-equivalent-to relationship (i.e., word x is-equivalent-to word y), thesauri can be viewed as ontologies, albeit rudimentary ones. Moreover, because a given thesaurus is intended to describe the substitution of words in a domain of language, the relationship between a thesaurus and a corpus provides a powerful model for developing and testing general measures of the fit between ontology and knowledge domain. Most useful for our purposes, the balance between theoretical coverage and parsimony is captured with the thesaurus model: a bloated 100,000-word thesaurus is clearly not superior to one with 20,000 entries efficiently tuned to its domain. A writer using the larger thesaurus would not only be inconvenienced by needing to leaf through more irrelevant headwords (the word headings followed by lists of synonyms), but be challenged by needing to avoid inappropriate synonyms.

Synonymy is transitive but not necessarily symmetric – the headword is sometimes more general than its substitute. Occasionally thesauri also include antonyms, i.e., is-the-opposite-of, but fewer words have antonyms and for those that do, antonyms listed are far fewer than synonyms.

A typical thesaurus differs from a typical scientific ontology. While ontologies often include many types of relations, thesauri contain only one or two. Thesauri capture the natural diversity of concepts but are not optimized for non-redundancy and frequently contain cycles. Any two exchangeable words, each the other's synonym, constitute a cycle. As such, thesauri are not consistent, rational structures across which strict, logical inference is possible. They instead represent a wide sample of conflicting linguistic choices that represent a combination of historical association and neural predisposition. Despite these differences, we believe thesauri are insightful models of modern, domain-specific ontologies. Working with thesauri also contributes practically to evaluating the match between ontologies and discourse. Because all of our measures depend on mapping concepts from ontology to text, assessment of the match between thesaurus and text can directly improve our identification of ontology concepts via synonymy.
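The cycle structure the paragraph describes is easy to make concrete. A sketch with invented thesaurus entries, finding the mutual-synonym pairs that each form a two-concept cycle:

```python
# A thesaurus as headword -> list of synonyms. Two words that list each
# other form a cycle, the structure that defeats strict logical inference.
thesaurus = {
    "big": ["large", "great"],
    "large": ["big", "spacious"],
    "great": ["eminent"],
}

def mutual_pairs(thesaurus):
    """Pairs (x, y) where each lists the other as a synonym: a 2-cycle."""
    pairs = set()
    for head, syns in thesaurus.items():
        for s in syns:
            if head in thesaurus.get(s, []):
                pairs.add(tuple(sorted((head, s))))
    return pairs

print(mutual_pairs(thesaurus))  # {('big', 'large')}
```

Note that "great" lists "eminent" but not vice versa, illustrating the asymmetry of headword-to-substitute relations described above.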


What is the difference between cyber security ontologies and scenario ontologies in this system?

I have been reading the paper Towards a Cognitive System for Decision Support in Cyber Operations. And I have been trying to understand the role of two ontologies proposed here, cyber security ontologies and scenario ontologies.

I have asked a question on CGS SE, What are the practical uses of ontologies?, and the role of the ACT-R architecture in this system has become clearer to some extent, but my remaining question about the ontologies is:

Do the cyber security ontologies make the ideas in TENA's Repo understandable for ACT-R, or do the ideas in the repo have to be represented by an ontology called the cyber security ontologies in order to be understandable to the ACT-R architecture?

The Cognitive System realized in the TENA framework:

Questions arising from this figure include:

  1. What is the difference between the cyber security ontologies and the scenario ontologies?
  2. What kind of knowledge do the scenario ontologies represent?
  3. And of course, why are the scenario ontologies connected to the Event data management module instead of to TENA's Repo?

ACT-R Architecture:

TENA Architecture:


ORIGINAL RESEARCH article

Matthew John Yee-King * , Thomas Wilmering, Maria Teresa Llano Rodriguez, Maria Krivenski and Mark d'Inverno

In this paper, we present an analysis of feedback as it occurs in classroom-based and technology supported music instrument learning. Feedback is key to learning in music education and we have developed technology based on ideas from social media and audio annotation which aims to make feedback more effective. The analysis here aims to enhance our understanding of technology-mediated feedback. The result of this analysis is three ontologies describing feedback and feedback systems. First, we developed the teacher's ontology using a qualitative, observational approach to describe the types of feedback that music instrument tutors give to their students. We used this ontology to inform the design of an online music annotation platform for music students. Second, we developed the grounded ontology using a grounded theory approach, based on 2,000 annotations made by students and tutors using the annotation platform. We compare the grounded and teacher's ontologies by examining structural, semantic and expressive features. Through this comparison, we find that the grounded ontology includes elements of the teacher's ontology as well as elements relating to practical and social aspects of the annotation platform, while the teacher's ontology contains more domain knowledge. Third, we formalize the transactional capabilities of the platform into the third ontology, the platform ontology, which we have written in the OWL language, and show how this allows us to develop several practical use cases, including the use of semantic web capabilities in music education contexts.


NIFSTD Design Principles

As originally proposed in Bug et al. (2008), NIFSTD was envisioned as an extensive set of ontologies specific to the domain of neuroscience. NIFSTD started its journey with a carefully designed set of principles which enabled its ontologies to be maximally reusable, extendable, and practically applicable within information systems. Over the course of its evolution, NIFSTD augmented its principles to conform to current trends and practices recommended by the semantic web communities as well as by the community of standard biomedical ontologies. NIFSTD closely follows the OBO Foundry (Smith et al., 2007) best practices; however, the constraints of the NIF project required that we take a practical approach, designed to easily extend the NIFSTD ontologies while at the same time mitigating against any disruptions to the production NIF system. Our approach is outlined following the discussion of the NeuroLex Semantic Wiki framework in Section “The NeuroLex Semantic Wiki Framework.”

NIFSTD modular structure

The NIFSTD ontologies are built in a modular fashion, where each module covers a distinct, orthogonal domain of neuroscience (Bug et al., 2008). Modules covered in NIFSTD include anatomy, cell types, experimental techniques, nervous system function, small molecules, and so forth. The upper-level classes in NIFSTD modules are carefully normalized under the classes of Basic Formal Ontology (BFO). These normalizations closely follow the guidelines specified in the BFO manual. Based on the principles described in Rector (2003), NIFSTD utilizes a powerful ontology modularization technique that allows its ontologies to be reusable and easily extendable. Each domain specified in Table 1 has its corresponding module in NIFSTD, and an individual module in turn may cover multiple sub-domains. The ingestion strategy for each source in Table 1 is shown in the “Import/Adapt” column, where “import” refers to the BFO-compliant sources which were already represented in OWL, and “adapt” refers to the sources that required refactoring of the source vocabularies into OWL and/or normalization under BFO entities.

Table 1

The NIFSTD OWL modules and corresponding community sources from which they were built.

NIFSTD module | External source | Import/adapt
Organismal taxonomy | NCBI Taxonomy, GBIF, ITIS, IMSR, Jackson Labs mouse catalog; the model organisms in common use by neuroscientists are extracted from NCBI Taxonomy and kept in a separate module with mappings | Adapt
Molecules, chemicals | IUPHAR ion channels and receptors, sequence ontology (SO), NIDA drug lists from ChEBI, and imported protein ontology (PRO) | Adapt/import
Sub-cellular anatomy | Sub-cellular anatomy ontology (SAO); extracted cell parts and sub-cellular structures from SAO-CORE; imported GO cellular component with mapping | Adapt/import
Cell | CCDB, NeuronDB, NeuroMorpho.org terminologies; OBO cell ontology was not considered as it did not contain region-specific cell types | Adapt
Gross anatomy | NeuroNames extended by including terms from BIRNLex, SumsDB, BrainMap.org, etc.; multi-scale representation of the nervous system, macroscopic anatomy | Adapt
Nervous system function | Sensory, behavior, cognition terms from NIF, BIRN, BrainMap.org, MeSH, and UMLS | Adapt
Nervous system dysfunction | Nervous system disease from MeSH, NINDS terminology; imported Disease Ontology (DO) with mapping | Adapt/import
Phenotypic qualities | Phenotypic quality ontology (PATO) imported as part of the OBO Foundry core | Import
Investigation: reagents | Overlaps with molecules above; from ChEBI, SO, and PRO | Adapt/import
Investigation: instruments, protocols, plans | Based on the ontology for biomedical investigation (OBI) to include entities for biomaterial transformations, assays, data collection, data transformations; OBI-Proxi class still remains (see discussion below) | Adapt
Investigation: resource type | NIF, OBI, NITRC, biomedical resource ontology (BRO) | Adapt
Investigation: cognitive paradigm | Cognitive paradigm ontology (CogPO) was extended from the NIF-investigation module | Import
Biological process | Gene ontology (GO) biological process | Import

This table reports updates to the external sources previously used in Bug et al. (2008).

NIFSTD representation formalism

NIFSTD modules are expressed in the W3C standard Web Ontology Language (OWL) using its Description Logic profile (OWL-DL). OWL-DL provides a balance between expressivity and computational decidability, and allows the NIFSTD ontologies to be supported by a range of open-source, DIG-compliant reasoners such as Pellet and FaCT++. NIFSTD utilizes these reasoners to maintain its inferred classification hierarchies and to keep its ontologies in a logically consistent state.

NIFSTD currently supports OWL 2, the latest ontology language advocated by the W3C consortium. OWL 2 provides improved ontological features such as property chain rules to enable transitivity across object properties; reflexivity, asymmetry, and disjointness between object properties; richer datatypes; qualified cardinality restrictions; and enhanced annotation capabilities.
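The entailments produced by an OWL 2 property chain such as part_of ∘ part_of → part_of (i.e., transitivity of part_of) can be sketched with a few lines of forward chaining; the anatomy fragment below is hypothetical and for illustration only.

```python
# Minimal sketch of the entailments of the OWL 2 property chain
# part_of o part_of -> part_of. The fragment below is made up.

def chain_closure(triples):
    """Apply part_of(x,y) & part_of(y,z) => part_of(x,z) until fixpoint."""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (x, y) in list(inferred):
            for (y2, z) in list(inferred):
                if y == y2 and (x, z) not in inferred:
                    inferred.add((x, z))
                    changed = True
    return inferred

# Hypothetical fragment: dendrite part_of neuron, neuron part_of cortex.
asserted = {("dendrite", "neuron"), ("neuron", "cortex")}
entailed = chain_closure(asserted)
assert ("dendrite", "cortex") in entailed  # inferred by the chain
```

A DL reasoner performs this inference (among many others) directly from the `owl:propertyChainAxiom` declaration; the loop above is only the intuition.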

Accessing NIFSTD ontologies

NIFSTD is available in OWL format for loading into Protégé or other ontology-editing tools that use the OWL API. Protégé has been the main editing tool for building the NIFSTD modules; currently, NIFSTD supports Protégé 4.X versions with OWL 2. On the web, NIFSTD is available through the NCBO BioPortal, which also provides annotation and various mapping services. NIFSTD is also available in RDF and has its own SPARQL endpoint.

Within NIF, NIFSTD is served through an ontology management system called OntoQuest (Gupta et al., 2008, 2010). Originally reported in Chen et al. (2006), OntoQuest generates an OWL-compliant relational schema for the NIFSTD ontologies and implements various graph search algorithms for navigation, path finding, hierarchy exploration, and term searching in ontological graphs. OntoQuest provides a collection of web services to extract specific ontological content. OntoQuest also provides the NIF search portal with automated query expansion (Gupta et al., 2010) for matching NIFSTD terms, including those defined through logical restrictions.
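At their core, hierarchy exploration and query expansion are walks over the class graph. A minimal stdlib sketch, using a hypothetical is-a fragment (not OntoQuest's actual implementation):

```python
# Toy is-a hierarchy, child -> list of parents. Names are illustrative.
from collections import deque

IS_A = {
    "Purkinje cell": ["GABAergic neuron"],
    "GABAergic neuron": ["Neuron"],
    "Pyramidal cell": ["Neuron"],
    "Neuron": ["Cell"],
}

def ancestors(term):
    """Hierarchy exploration: walk is-a edges up to the root (BFS)."""
    seen, queue = [], deque(IS_A.get(term, []))
    while queue:
        t = queue.popleft()
        if t not in seen:
            seen.append(t)
            queue.extend(IS_A.get(t, []))
    return seen

def expand_query(term):
    """Query expansion: a search for a class also matches its subclasses."""
    return [term] + [c for c in IS_A if term in ancestors(c)]

assert ancestors("Purkinje cell") == ["GABAergic neuron", "Neuron", "Cell"]
assert "Purkinje cell" in expand_query("Neuron")
```

A search for "Neuron" thus also retrieves data annotated with "Purkinje cell", which is the behavior the NIF search portal exposes.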

Reuse of external sources

One of the founding principles of NIFSTD is to avoid duplication of effort by conforming to existing standard biomedical ontologies and vocabulary sources. It should also be noted that NIF is not charged with developing new ontological modules, but relies on community sources for new content. Whenever possible, NIFSTD reuses these existing sources as the initial building blocks for its core modules. These external sources were selected based on their relevance to neuroscience knowledge models. Table 1 lists the modules in NIFSTD that are adapted, imported, or extracted from external community sources. NIFSTD reuses a diverse collection of sources for its ontologies, ranging from fully structured ontologies to loosely structured controlled vocabularies, lexicons, or nomenclatures that exist within the biomedical community. Each module in NIFSTD (Table 1) integrates the relevant terms or concepts from those external sources into a single, internally consistent ontology with a matching standard nomenclature. The process of reusing an external source varied depending on its state. The following rules summarize the basic reuse principles:

If the source is already represented in OWL, normalized under BFO, and is orthogonal to existing NIFSTD modules, the source is simply imported as a new module.

If the source is represented in OWL and orthogonal to NIFSTD modules, but is not normalized under BFO, then an ontology-bridging module (explained later) is constructed before importing the new source. These kinds of bridging modules declare the necessary relational properties to normalize the target ontology source under BFO.

If the source is orthogonal to NIFSTD modules, but is not represented in OWL or does not use BFO as its foundational layer, then the source is converted into OWL and normalized under BFO following the second rule above.

If the source satisfies the above three principles but is too large for NIF’s scope, then a relevant subset is extracted as suggested by NIF domain experts.

For sources of type 4 above, NIFSTD currently follows the MIREOT principles (Courtot et al., 2009), which allow extracting a required subset of classes from a large ontology, e.g., ChEBI or the NCBI organismal taxonomy.
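The essence of a MIREOT-style extraction is to keep each requested class plus its chain of superclasses, retaining only the minimal information (URI, label, direct parent) rather than importing the whole source. A stdlib sketch over a made-up "ChEBI-like" source:

```python
# Sketch of a MIREOT-style subset extraction. The miniature source below
# is invented for illustration; it is not real ChEBI content.

SOURCE = {
    # uri: (label, parent_uri)
    "ex:dopamine":      ("dopamine", "ex:catecholamine"),
    "ex:catecholamine": ("catecholamine", "ex:molecule"),
    "ex:serotonin":     ("serotonin", "ex:molecule"),
    "ex:molecule":      ("molecule", None),
    "ex:unrelated":     ("unrelated thing", "ex:molecule"),
}

def mireot_extract(wanted):
    """Keep each wanted class and its superclass chain, nothing else."""
    subset = {}
    for uri in wanted:
        while uri is not None and uri not in subset:
            label, parent = SOURCE[uri]
            subset[uri] = (label, parent)  # URI + label + parent only
            uri = parent
    return subset

subset = mireot_extract(["ex:dopamine"])
assert set(subset) == {"ex:dopamine", "ex:catecholamine", "ex:molecule"}
```

Note that the source URIs survive unchanged, which is what lets NIF avoid the mapping burden discussed below.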

The Neuroscience Information Framework project readily accepts contributions from groups working on ontologies in the neuroscience domain. For example, the Cognitive Paradigm Ontology (CogPO; Turner and Laird, 2012) has been imported under the NIF-Investigation module. As we worked through the process of adopting CogPO, we needed to make sure that the upper-level classes in CogPO were BFO compliant and derivable under the same foundational layers as NIFSTD, and that its properties were extended from OBO-RO. As part of NIFSTD, CogPO can be used to annotate datasets for specific querying and comparisons, and its contents are exposed via NeuroLex for community involvement (see The NeuroLex Semantic Wiki Framework).

At the beginning of the NIF project, the size, format, or immaturity of some community ontologies necessitated that NIF add significant custom content in order to provide coverage in certain modules. Over the last couple of years, the tools for extracting relevant portions of ontologies and for converting ontologies from OBO to OWL format have improved, so since the last publication (Bug et al., 2008) several of these custom ontologies have been swapped for community ontologies. However, the NIF-Investigation module still contains “OBI-proxy” classes that were originally meant to be replaced by the matured version of OBI under BFO 1.0. In that matured version, many of the original OBI-proxy classes were retired or given new identifiers, and some were not replaced by any new class at all. Because NIF-Investigation had continued to add many new concepts under the original OBI-proxy classes, directly importing the current OBI to replace the proxy classes was not a reasonable solution. Instead, we have proposed that the NIF-Investigation terms be added, aligned, and maintained within OBI, and we plan to incorporate extracted portions of OBI under NIF-Investigation in a future release of NIFSTD.

Single inheritance for named classes

An asserted named class in NIFSTD can have only one named class as its parent, although the same named class can be asserted under multiple anonymous classes. This principle keeps named classes univocal and avoids ambiguity. In NIFSTD, classes with multiple parents are derivable via automated classification on defined classes. This approach saves a great deal of manual labor, minimizes the human error inherent in maintaining multiple hierarchies, and provides logical, intuitive reasons as to how a class may exist under multiple different hierarchies. A useful example can be seen in the neuronal type classification in Section “Example Knowledge Model: NIFSTD Neuronal Cell Types,” where a particular neuron type can be a subclass of multiple different “anonymous” classes, e.g., Neuron X is a Neuron that has GABA as a neurotransmitter. The details of the motivation behind this approach can be found in Alan Rector’s discussion of the normalization pattern (Ontology Design Pattern: Normalization).
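The normalization pattern can be shown in miniature: every named class keeps one asserted parent, and additional parents are inferred from defined classes whose membership criteria are logical restrictions. All class and property names below are illustrative, and the "reasoner" is a trivial stand-in for a DL reasoner.

```python
# Miniature normalization pattern: single asserted parent per named
# class; extra parents inferred from defined ("anonymous") criteria.

asserted_parent = {"Neuron X": "Neuron"}
restrictions = {"Neuron X": {"has_neurotransmitter": "GABA"}}

# Defined class: GABAergic neuron == Neuron and
# (has_neurotransmitter some GABA). A real reasoner evaluates this
# from OWL axioms; here it is a plain predicate.
defined = {
    "GABAergic neuron": lambda c: asserted_parent.get(c) == "Neuron"
    and restrictions.get(c, {}).get("has_neurotransmitter") == "GABA",
}

def inferred_parents(c):
    """One asserted parent plus any defined classes the class satisfies."""
    return [asserted_parent[c]] + [d for d, test in defined.items() if test(c)]

assert inferred_parents("Neuron X") == ["Neuron", "GABAergic neuron"]
```

Curators maintain only the single-parent tree and the restrictions; the multi-parent view is recomputed, which is exactly the labor saving described above.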

Unique identifiers and annotation properties

NIFSTD entities are named by unique identifiers and are accompanied by a variety of annotation properties. These annotation properties are mostly derived from the Dublin Core Metadata (DC) and Simple Knowledge Organization System (SKOS) models. While several annotation properties still remain from the legacy modules of BIRNLex, from which NIFSTD was built (Bug et al., 2008), NIFSTD currently requires only the following set of annotation properties for a given new class.

rdfs:label – A human-readable name for a class or property. If a class can be named in multiple ways, the label is chosen based on the name most commonly used in the literature, as selected by NIF domain experts. Other names for the class are kept as synonyms.

nifstd:createdDate – The date when the class or property was created. This property serves as a way to track versioning.

dc:contributor – The name of the curator who contributed to the definition of the class.

core:definition – A natural-language definition of the class. Ideally, this definition should be written in standard Aristotelian form.

nifstd:definitionSource – A traceable source for the current definition, in free-text form. A source could be a URI, an informal publication reference, a PubMed ID, etc.

owl:versionInfo – A version number associated with the NeuroLex category.

The following set of properties is used when necessary:

nifstd:modifiedDate – The date when the class was last updated.

nifstd:synonym – A lexical variant of the class name.

nifstd:abbreviation – A short name serving as a synonym, consisting of a sequence of letters typically taken from the beginnings of the words of which the preferred label or another synonym is composed. Note that this should only be used for standard abbreviations (i.e., those commonly used in the literature, e.g., in a PubMed-indexed article). Many of the abbreviations supplied are actually acronyms, but we no longer distinguish between the two.

rdfs:comment – Anything related to the class or property that should be noted.

For current versions of Protégé, the above properties can be set as the default set of properties for NIFSTD. NIFSTD has other annotation properties associated with version control, which are described in Section “Versioning Policy.” When extracting external sources using the MIREOT principles, NIFSTD keeps the original source URIs and identifier fragments unaltered. This approach allows NIF to avoid extra mapping efforts with the community sources. Prior to the MIREOT approach, the practice was simply to assign a new class ID to any externally sourced class, which led to maintenance difficulties due to too many mapping annotations. We still have some mappings from the BIRNLex vocabularies, as we did not have the MIREOT tool when we started.
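A curation-time check of the required annotation set above is straightforward to sketch; the class record below is hypothetical, while the property names come from the lists in this section.

```python
# Check that a new NIFSTD-style class carries the required annotation
# properties. The example class record is made up for illustration.

REQUIRED = {"rdfs:label", "nifstd:createdDate", "dc:contributor",
            "core:definition", "nifstd:definitionSource", "owl:versionInfo"}

def missing_annotations(cls):
    """Return the required annotation properties the record lacks."""
    return sorted(REQUIRED - set(cls))

new_class = {
    "rdfs:label": "Example neuron class",
    "nifstd:createdDate": "2012-01-15",
    "dc:contributor": "A. Curator",
    "core:definition": "A neuron that ...",
}
print(missing_annotations(new_class))
# -> ['nifstd:definitionSource', 'owl:versionInfo']
```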

NIFSTD object properties

NIFSTD imports the OBO Relations Ontology (OBO-RO) for the standard set of properties defined by the OBO biomedical community; other object properties in NIFSTD are mostly derived from OBO-RO. Based on where the relations are asserted, two kinds of relations exist in NIFSTD: intra-modular relations, which hold within a single module, and inter-modular, cross-domain relations, which exist in separate, isolated modules between two independent modules.

The intra-modular relations are those that hold universally within the classes of a specific module; these relations are kept integrated within the module itself. Relations between entities that could vary based on a specific application, or that require domain-dependent viewpoints, are kept in a separate bridging module – a module that contains only logical restrictions and definitions on a required set of classes spanning multiple modules (see Figure 1).

Figure 1. Two example bridging OWL modules in NIFSTD (rectangular boxes) that contain class-property associations between multiple core modules.

The bridging modules allow the core domain modules – e.g., anatomy, cell type, etc. – to remain independent of one another. This approach keeps the modularity principles intact and makes it easier for the broader community to utilize and extend NIFSTD. Some of the bridge modules in NIFSTD are constructed simply to record semantic equivalencies between ontologies.

New bridging modules can be developed should a user desire a customized ontology for their own application domain based on one or more NIFSTD core modules. For example, the Neurodegenerative Disease Phenotype Ontology (NDPO; Maynard et al., submitted) is essentially a bridge module that asserts a number of entity-quality relations (on classes in relevant NIFSTD modules) to specify and define a list of named phenotypes.
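The separation can be sketched concretely: the core modules contain no cross-references, and the bridge holds nothing but cross-domain assertions. Module contents and the relation name below are illustrative, not actual NIFSTD axioms.

```python
# Core modules stay self-contained; a bridge module holds only the
# cross-domain assertions linking them. All names are illustrative.

anatomy_module = {"Cerebellar cortex", "Hippocampus"}
cell_module = {"Purkinje cell", "Pyramidal cell"}

# Bridge module: restrictions relating classes across the two modules.
cell_anatomy_bridge = [
    ("Purkinje cell", "soma_located_in", "Cerebellar cortex"),
    ("Pyramidal cell", "soma_located_in", "Hippocampus"),
]

# The bridge defines no classes of its own; it only references classes
# defined in the core modules.
for subj, _, obj in cell_anatomy_bridge:
    assert subj in cell_module and obj in anatomy_module
```

Dropping the bridge leaves both core modules intact, which is why users can swap in their own application-specific bridges.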

Because existing reasoners fail to scale to large ontologies like NIFSTD, modularity plays an important practical role. From an ontology development perspective, it is crucial to check consistency frequently after asserting any new classifications along with their axioms. Since NIFSTD is divided into smaller independent modules, automated classification and consistency checking remain manageable while working on a specific module of interest.

Versioning policy

NIFSTD provides various levels of versioning for its content, allowing humans and machines to choose the level of version information required for tracking changes. Various annotation properties are associated with versioning at different levels of content, including creation and modification dates for each class and file, file-level versioning for each module, and annotations for retiring antiquated concept definitions, tracking a concept’s former position in the ontology graph, and recording replacement concepts.

– nifstd:hasFormerParentClass – the full logical URI of the former parent class of a deprecated class, or of any other class whose superclass has been changed. This property is typically used for a deprecated/retired class.

– nifstd:isReplacedByClass – the full logical URI of the new class that replaces the current retired class. This property should only be used if a new replacement class exists.
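A consumer of these annotations typically chases replacement pointers from a retired class to the current one (in case a replacement was itself later retired). A stdlib sketch with hypothetical identifiers:

```python
# Resolve a retired class to its current replacement by following
# isReplacedByClass pointers. Identifiers below are hypothetical.

is_replaced_by = {
    "nlx:birnlex_123": "nlx:birnlex_456",  # retired, then retired again
    "nlx:birnlex_456": "nlx:nifext_789",
}

def resolve(class_id):
    """Follow replacement pointers to the current class (cycle-safe)."""
    seen = set()
    while class_id in is_replaced_by and class_id not in seen:
        seen.add(class_id)
        class_id = is_replaced_by[class_id]
    return class_id

assert resolve("nlx:birnlex_123") == "nlx:nifext_789"
assert resolve("nlx:nifext_789") == "nlx:nifext_789"  # already current
```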

The umbrella file nif.owl at http://purl.org/nif/ontology/nif.owl always imports the current versions of the NIFSTD modules. All other versions after the 1.0 release can be accessed from the NIF ontology archive at http://ontology.neuinfo.org/NIF/Archive/.


Psychology in medicine

So, why is psychology important for medicine? This question hasn't been asked often enough, and part of the reason may be the difference between psychologists and psychiatrists.

The suffix “-iatry” refers to medical treatment, and as such, psychiatrists are medical doctors who have been through medical school and often practice in a hospital setting. Psychologists, on the other hand, don’t have this same type of training and often practice out of clinics or small practices. Psychologists are unable to prescribe drugs (in most jurisdictions), and thus may be seen as “less serious” than doctors and psychiatrists.

In the past, the negative perceptions of psychology within the medical establishment have hindered the spread of vital psychological training. But increasingly, medical schools are recognizing the importance of training future medical practitioners on psychological issues.

Using psychology in a normal medical practice

In a primary care setting, these skills are critical. Smoking, drug and alcohol abuse, eating disorders and obesity, depression, schizophrenia, mental disabilities, and the interactions of all these psychological issues are profoundly important for understanding and treating health issues in the general population.

Even specialists would do well to have a basic understanding of psychology and common psychiatric disorders. Due to the low importance put on these issues in the past, it’s likely that many patients who end up at specialists for one reason or another may have underlying, undiagnosed psychological conditions that are contributing to their health issues.

Preventative care

Perhaps one of the reasons that psychology has received short shrift in the past is the preventative nature of much psychiatric care. Medical doctors, more often than not, may only see psychiatric patients whose issues have progressed to causing harm to the body or serious disruption to normal functioning.

But preventative care—including psychological evaluation and treatment—is a priority for medical doctors as well. Patients provided with preventative care have better health outcomes with lower costs and less risk of complications and ongoing issues. Just because the issue is psychological does not mean preventative care is any less possible.

Fundamentals of psychology

For medical practitioners of all sorts, one critical issue is doctor-patient communication: how the transmission of potentially life-changing medical knowledge can best be handled. Even in low-stakes situations, understanding the psychology of a patient can provide key insights into the best motivations and methods for promoting their positive health. This is part of the bedside manner, and it’s generally an under-appreciated part of the field. Good communication skills are essential for medicine, and understanding psychology can truly help develop this talent.


Behavior change interventions: the potential of ontologies for advancing science and practice

A central goal of behavioral medicine is the creation of evidence-based interventions for promoting behavior change. Scientific knowledge about behavior change could be more effectively accumulated using “ontologies.” In information science, an ontology is a systematic method for articulating a “controlled vocabulary” of agreed-upon terms and their inter-relationships. It involves three core elements: (1) a controlled vocabulary specifying and defining existing classes; (2) specification of the inter-relationships between classes; and (3) codification in a computer-readable format to enable knowledge generation, organization, reuse, integration, and analysis. This paper introduces ontologies, reviews current efforts to create ontologies related to behavior change interventions, and suggests future work. This paper was written by behavioral medicine and information science experts and was developed in partnership between the Society of Behavioral Medicine’s Technology Special Interest Group (SIG) and the Theories and Techniques of Behavior Change Interventions SIG. In recent years, significant progress has been made in the foundational work needed to develop ontologies of behavior change. Ontologies of behavior change could facilitate a transformation of behavioral science from a field in which data from different experiments are siloed into one in which data across experiments can be compared and integrated. This could enable new approaches to hypothesis generation and knowledge discovery in behavioral science.



Health

Psychology can also be a useful tool for improving your overall health. From ways to encourage exercise and better nutrition to new treatments for depression, the field of health psychology offers a wealth of beneficial strategies that can help you to be healthier and happier.


Mean Salaries of Different Types of Psychologists

-All data from the Bureau of Labor Statistics, May 2014

Type of Psychologist | Mean Salary

Psychology professors at universities or four-year colleges

Clinical psychologists at general medical and surgical hospitals

School psychologists at elementary and secondary schools

Management, scientific, and technical consultants

Overview of the Field

An overview of psychology as a field of study.