Ontology-based customizable three-dimensional modeling for simulation

  • Authors:
  • Minho Park, Paul A. Fishwick

  • Affiliations:
  • University of Florida, University of Florida

  • Year:
  • 2005


Abstract

Modeling techniques tend to be found in isolated communities: geometry models in computer-aided design (CAD) and computer graphics, dynamic models in computer simulation, and information models in information technology. When these models are included within the same digital environment, the ways of connecting them together seamlessly and visually are not well understood, even though elements of each model have much in common. We attempt to address this deficiency by studying specific ways in which models can be interconnected within the same 3D space.

For example, consider a scenario involving a region with several key military vehicles and targets: planes (both fighters and command-and-control aircraft), surface-to-air missile (SAM) sites, and drones. A variety of models define the geometry, information, and dynamics of these objects. Ideally, we can explore and execute these models within the 3D scene by formalizing domain knowledge and providing a well-defined methodology.

We present a modeling and simulation methodology called integrative multimodeling. Its purpose is to provide a human-computer interaction environment in which components of different model types can be linked to one another, most notably the dynamic models used in simulation to the geometry models of the phenomena being modeled. In the context of integrative multimodeling, three general issues naturally arise: (1) How can we connect different model components? (2) How can we visualize different model types in 3D space? (3) How can we simulate a dynamic model within the integrative multimodeling environment?

For the first issue, we have defined a formalized scene domain to bridge semantic gaps between the different models and to facilitate mapping between the components of different models, by conceptualizing all objects existing in the scene domain using semantic languages and tools.
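The idea of a formalized scene domain linking geometry, information, and dynamic model components can be sketched as a small set of semantic triples. This is an illustrative sketch only: the entity, predicate, and file names below (SAM_SITE_1, hasGeometry, and so on) are hypothetical and are not taken from the paper's actual ontology.

```python
# Hypothetical scene ontology: each scene object is linked to the
# components of its different model types (geometry, dynamics,
# information) via subject-predicate-object triples.
scene_ontology = [
    ("SAM_SITE_1", "isA",         "SurfaceToAirMissileSite"),
    ("SAM_SITE_1", "hasGeometry", "sam_site.obj"),
    ("SAM_SITE_1", "hasDynamics", "sam_fsm_model"),
    ("FIGHTER_2",  "isA",         "FighterPlane"),
    ("FIGHTER_2",  "hasGeometry", "fighter.obj"),
    ("FIGHTER_2",  "hasDynamics", "flight_dynamics_model"),
]

def components_of(entity, triples):
    """Return every model component linked to one scene entity."""
    return {pred: obj for subj, pred, obj in triples if subj == entity}

# Resolving a scene object to all of its model components bridges the
# semantic gap between the otherwise isolated modeling communities.
print(components_of("SAM_SITE_1", scene_ontology))
```

In a full implementation these triples would live in a semantic-web language such as RDF or OWL rather than in Python lists, but the mapping operation is the same: given a scene entity, look up the components of each model type attached to it.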
For the second issue, we have developed a Python-based interface that provides the visualization environment. Using this interface, users can visualize and create their own model types as well as construct a model-component database. For the third issue, we have employed the RUBE framework, developed by the Graphics, Modeling and Arts (GMA) Laboratory at the University of Florida. RUBE is an Extensible Markup Language (XML)-based modeling and simulation framework and application that permits users to specify and simulate a dynamic model, with the ability to customize the model's presentation using 2D or 3D visualizations. (Abstract shortened by UMI.)
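The RUBE approach of specifying a dynamic model in XML and then executing it can be illustrated with a minimal sketch. The element names below are invented for illustration; the abstract does not show RUBE's actual schema. Here a finite state machine, one common dynamic model type, is declared in XML and stepped through a sequence of events.

```python
# Minimal sketch, assuming a hypothetical XML schema for a finite
# state machine (the real RUBE schema is not given in the abstract).
import xml.etree.ElementTree as ET

FSM_XML = """
<model name="sam_fsm_model" type="FSM">
  <state name="idle"     on="detect" next="tracking"/>
  <state name="tracking" on="lock"   next="firing"/>
  <state name="firing"   on="done"   next="idle"/>
  <start state="idle"/>
</model>
"""

def load_fsm(xml_text):
    """Parse the XML model into a start state and a transition table."""
    root = ET.fromstring(xml_text)
    table = {(s.get("name"), s.get("on")): s.get("next")
             for s in root.findall("state")}
    return root.find("start").get("state"), table

def simulate(start, table, events):
    """Run the FSM over a list of events and return the final state."""
    state = start
    for event in events:
        state = table.get((state, event), state)  # ignore unknown events
    return state

start, table = load_fsm(FSM_XML)
print(simulate(start, table, ["detect", "lock"]))  # firing
```

Separating the XML model specification from the simulation engine is what lets a framework like RUBE attach different presentations, 2D diagrams or 3D scene objects, to the same underlying dynamic model.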