Bioscientific data processing and modeling

  • Authors:
  • Joost Kok
  • Anna-Lena Lamprecht
  • Fons J. Verbeek
  • Mark D. Wilkinson

  • Affiliations:
  • Leiden Institute of Advanced Computer Science, Leiden University, Leiden, The Netherlands
  • Chair for Service and Software Engineering, University of Potsdam, Potsdam, Germany
  • Leiden Institute of Advanced Computer Science, Leiden University, Leiden, The Netherlands
  • Centro de Biotecnología y Genómica de Plantas, Parque Científico y Tecnológico de la U.P.M., Pozuelo de Alarcón (Madrid), Spain

  • Venue:
  • ISoLA'12: Proceedings of the 5th International Conference on Leveraging Applications of Formal Methods, Verification and Validation: Applications and Case Studies - Volume Part II
  • Year:
  • 2012

Abstract

With more than 200 different types of "-omic" data [1] spanning from the submolecular, through the molecular, cell, cell-system, tissue, organ, phenotype, and gene-environment levels, up to ecology and organism communities, the scale and complexity of bioscientific data processing have never been greater. Data are often generated in high-throughput studies with the aim of obtaining sufficient volume to find patterns and detect rare events. For these high-throughput approaches, new methods have to be developed to ensure the integrity of the volumes of data produced. At the same time, efforts to integrate these widely varying data types are underway in research fields such as systems biology. Systems-level research requires yet further methodologies to pipeline, process, query, and interpret data, and such pipelines are themselves objects of scientific value if they can be re-used or re-purposed by other researchers.
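The abstract's closing point, that processing pipelines are themselves re-usable and re-purposable scientific objects, can be illustrated with a minimal sketch. The Python below is not from the paper; the `Pipeline` class, the step functions, and the toy gene-expression records are all hypothetical, chosen only to show a pipeline that performs an integrity check, is run end to end, and is then re-purposed by another "researcher" who appends a rare-event detection step.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

# A processing step maps a list of records to a new list of records.
Step = Callable[[List[dict]], List[dict]]


@dataclass
class Pipeline:
    """A named, re-usable sequence of processing steps."""
    name: str
    steps: List[Step]

    def run(self, records: List[dict]) -> List[dict]:
        for step in self.steps:
            records = step(records)
        return records

    def repurpose(self, name: str, extra_steps: Iterable[Step]) -> "Pipeline":
        """Derive a new pipeline by appending further steps to this one."""
        return Pipeline(name, [*self.steps, *extra_steps])


# --- example steps (hypothetical) ---

def drop_incomplete(records: List[dict]) -> List[dict]:
    """Integrity check: keep only records that carry a measurement value."""
    return [r for r in records if r.get("value") is not None]


def normalise(records: List[dict]) -> List[dict]:
    """Rescale measurement values to the [0, 1] range."""
    values = [r["value"] for r in records]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [{**r, "value": (r["value"] - lo) / span} for r in records]


def flag_outliers(records: List[dict]) -> List[dict]:
    """Mark candidate rare events: values > 1.5 standard deviations from the mean."""
    values = [r["value"] for r in records]
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5 or 1.0
    return [{**r, "outlier": abs(r["value"] - mean) > 1.5 * sd} for r in records]


if __name__ == "__main__":
    raw = [
        {"gene": "A", "value": 1.2},
        {"gene": "B", "value": None},   # incomplete record, dropped by the QC step
        {"gene": "C", "value": 1.5},
        {"gene": "D", "value": 1.1},
        {"gene": "E", "value": 2.9},
        {"gene": "F", "value": 40.0},   # candidate rare event
    ]
    qc = Pipeline("qc", [drop_incomplete, normalise])
    analysis = qc.repurpose("qc+outliers", [flag_outliers])
    print(analysis.run(raw))
```

The design choice mirrors the abstract's argument: because the pipeline is a first-class value rather than an ad hoc script, a second group can extend it (`repurpose`) without rewriting or re-validating the original quality-control steps.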