An approach to assessing virtual environments for synchronous and remote collaborative design

  • Authors:
Michele Germani; Maura Mengoni; Margherita Peruzzini

  • Affiliations:
Department of Industrial Engineering and Mathematical Sciences, Faculty of Engineering, Università Politecnica delle Marche, Via Brecce Bianche, 60131 Ancona, Italy

  • Venue:
  • Advanced Engineering Informatics
  • Year:
  • 2012

Abstract

This paper considers the application of novel Virtual Environments (VEs) to collaborative product design, focusing on design review activities. Companies are usually anchored to commercial ICT tools, which are mature and reliable. However, two main problems emerge: the difficulty of selecting the most suitable tools for specific purposes and the complexity of evaluating the impact that technology use has on design collaboration. The present work addresses both aspects by proposing a structured benchmarking method based on expert judgements and by defining a set of benchmarking weights based on experimental tests. The method considers both human-human interaction and teamwork-related aspects. A subsequent evaluation protocol, considering both process efficiency and human-human interaction, enables a closed-loop verification process: pilot projects evaluate the different technologies, and the benchmarking weights are verified and adjusted to yield a more reliable system assessment. This paper focuses on synchronous and remote design review activities: three different tools are compared according to expert judgements, and the two best-performing tools are implemented as pilot projects within real industrial chains. Design collaboration is assessed by considering both process performance and human-human interaction quality, and the benchmarking results are validated by indicating some corrective actions. The resulting benchmarking weights can then be adopted for agile system benchmarking in synchronous and remote design. The main findings are twofold: an innovative process to verify the reliability of the expert benchmark, and a trustworthy benchmarking method for evaluating synchronous and remote design tools without experimental testing. Furthermore, the proposed method has general validity and can be properly configured for different collaborative dimensions.