Integration of emerging computer technologies for an efficient image sequences analysis

  • Authors:
  • Luisa D'Amore;Daniela Casaburi;Ardelio Galletti;Livia Marcellino;Almerico Murli

  • Affiliations:
University of Naples Federico II and SPACI, Complesso Universitario M.S. Angelo, Via Cintia, Naples, Italy; SPACI, c/o Complesso Universitario M.S. Angelo, Via Cintia, Naples, Italy; University of Naples Parthenope, Centro Direzionale, Is. C4, Naples, Italy; University of Naples Parthenope, Centro Direzionale, Is. C4, Naples, Italy; SPACI and University of Naples Federico II, Complesso Universitario M.S. Angelo, Via Cintia, Naples, Italy

  • Venue:
  • Integrated Computer-Aided Engineering
  • Year:
  • 2011

Abstract

Real-time analysis of image sequences is a challenge. Using high-performance computing technologies, we propose a parallel algorithm for image sequence analysis, which we call the pipelined algorithm (PA). The idea underlying the design of PA comes from the Pipes and Filters approach: partition the sequence into ordered subsets and overlap task execution via pipelining. Moreover, to improve the performance gain of PA, task execution is distributed among multicore processors. The approach chosen for introducing concurrency takes into account the hierarchical parallelism of multicore multiprocessor architectures. More precisely, three parallelization strategies for PA are considered: the first distributes the execution of each task among the same number of cores, employing fine-grained task parallelism (we call it inter-task data parallelism); the second assigns the execution of each task to one core, introducing concurrency at a coarser level (we call it intra-task functional parallelism); and the third combines the previous two approaches, mapping each task to a group of cores (intra-task functional parallelism) and distributing the task's execution within each group (inter-task data parallelism). We prove, both theoretically and experimentally, that the third strategy is more effective than the others in terms of speed-up improvement as the data length increases. As a testbed, the segmentation of ultrasound image sequences is considered. Experiments on real data are carried out on a multicore-based parallel computer system, relying on PETSc (Portable, Extensible Toolkit for Scientific Computation), a high-level scientific computing environment.
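
The combined strategy (a group of cores per pipeline task, with the task's work shared among the cores of its group) can be illustrated with plain MPI communicator splitting, the mechanism PETSc itself builds on. The sketch below is an illustrative assumption, not the authors' implementation: the number of pipeline stages, the block partition of the sequence, and the process_block routine are hypothetical placeholders.

```c
/*
 * Minimal sketch (not the paper's code) of the combined strategy:
 * each pipeline task (filter) is mapped to a group of cores, and the
 * cores inside a group share that task's work on each data block.
 * NUM_TASKS, num_blocks and process_block() are hypothetical.
 */
#include <mpi.h>
#include <stdio.h>

#define NUM_TASKS 3   /* pipeline stages (filters), e.g. denoise / segment / post-process */

/* Hypothetical per-core share of one pipeline stage on one data block. */
static void process_block(int task_id, int block_id, int local_rank, int local_size)
{
    printf("task %d, block %d: core %d of %d working on its slice\n",
           task_id, block_id, local_rank, local_size);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank, world_size;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);
    if (world_rank == 0)
        printf("running with %d cores and %d pipeline stages\n", world_size, NUM_TASKS);

    /* Map each core to one task group: group g runs pipeline stage g. */
    int task_id = world_rank % NUM_TASKS;

    /* One sub-communicator per task group; inside a group the stage's
       work on a block is distributed data-parallel among the cores. */
    MPI_Comm group_comm;
    MPI_Comm_split(MPI_COMM_WORLD, task_id, world_rank, &group_comm);

    int local_rank, local_size;
    MPI_Comm_rank(group_comm, &local_rank);
    MPI_Comm_size(group_comm, &local_size);

    /* Pipelined sweep over the ordered blocks of the sequence:
       at step s, stage g works on block s - g (if it exists), so the
       stages overlap on different blocks. */
    const int num_blocks = 8;   /* hypothetical partition of the sequence */
    for (int step = 0; step < num_blocks + NUM_TASKS - 1; ++step) {
        int block_id = step - task_id;
        if (block_id >= 0 && block_id < num_blocks)
            process_block(task_id, block_id, local_rank, local_size);
        /* A real pipeline would pass block data between adjacent stage
           groups here (e.g. MPI_Send/MPI_Recv); omitted for brevity. */
        MPI_Barrier(MPI_COMM_WORLD);   /* crude step synchronization */
    }

    MPI_Comm_free(&group_comm);
    MPI_Finalize();
    return 0;
}
```

In this reading, the color argument of MPI_Comm_split creates one group per pipeline stage (the coarse, functional level of concurrency), while the ranks inside each group split the per-block work among themselves (the fine, data-parallel level), mirroring the two levels of the hierarchical strategy described in the abstract.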