The ModelCraft framework: Capturing freehand annotations and edits to facilitate the 3D model design process using a digital pen

  • Authors:
  • Hyunyoung Song, University of Maryland, MD; François Guimbretière, University of Maryland, MD; Hod Lipson, Cornell University, Ithaca, NY

  • Venue:
  • ACM Transactions on Computer-Human Interaction (TOCHI)
  • Year:
  • 2009


Abstract

Recent advancements in rapid prototyping techniques such as 3D printing and laser cutting are changing the perception of physical 3D models in architecture and industrial design. Physical models are frequently created not only to finalize a project but also to demonstrate an idea in early design stages. For such tasks, models can easily be annotated to capture comments, edits, and other forms of feedback. Unfortunately, these annotations remain in the physical world and cannot easily be transferred back to the digital world. Our system, ModelCraft, addresses this problem by augmenting the surface of a model with a traceable pattern. Any sketch drawn on the surface of the model using a digital pen is recovered as part of the model's digital representation. Sketches can also be interpreted as edit marks that trigger the corresponding operations on the CAD model. ModelCraft supports a wide range of operations on complex models, from editing a single model to assembling multiple models, and offers physical tools to capture free-space input. Several interviews and a formal study with potential users demonstrated the usefulness of the ModelCraft system. Our system is inexpensive, requires no tracking infrastructure or per-object calibration, and we show how it could be extended seamlessly to use current 3D printing technology.
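
To give a sense of the core mapping the abstract describes (pen strokes on a patterned surface recovered onto the digital model), here is a minimal illustrative sketch. It assumes, as the abstract suggests, that each printed face carries a distinct region of the traceable pattern and that each face's pattern region is related to its 3D plane by a known frame; the class and function names are hypothetical and not from the paper.

```python
import numpy as np

class Face:
    """One printed face of the model, carrying its own pattern region."""
    def __init__(self, origin, u_axis, v_axis):
        # origin: 3D position corresponding to the face's pattern-space origin
        # u_axis, v_axis: 3D directions of one pattern unit along x and y
        self.origin = np.asarray(origin, dtype=float)
        self.u_axis = np.asarray(u_axis, dtype=float)
        self.v_axis = np.asarray(v_axis, dtype=float)

def pattern_to_model(face, x, y):
    """Map a 2D pen sample (x, y) on a printed face to a 3D model point."""
    return face.origin + x * face.u_axis + y * face.v_axis

# Example: a unit square face lying in the z = 0 plane of the model.
face = Face(origin=[0, 0, 0], u_axis=[1, 0, 0], v_axis=[0, 1, 0])

# Pen samples in the face's pattern space, lifted back onto the 3D surface.
stroke_2d = [(0.1, 0.1), (0.5, 0.2), (0.9, 0.1)]
stroke_3d = [pattern_to_model(face, x, y) for x, y in stroke_2d]
print(stroke_3d)
```

In this simplified view, identifying which pattern region a sample falls in selects the face, and the per-face frame does the rest; the actual system additionally interprets certain strokes as edit commands on the CAD model.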