BGU Team Develops New 3D Scanning Method

An international collaboration of researchers has developed a method for reconstructing intricate objects in 3D more accurately than is presently possible. The innovative technique combines robotics and water.

Prof. Andrei Scharf (Credit: American Associates, Ben-Gurion University of the Negev (AABGU))

Using a robotic arm to immerse an object along an axis at various angles, and measuring the volume of fluid displaced by each dip, we combine the sequences to create a volumetric shape representation of the object.

Prof. Andrei Scharf, Department of Computer Science, Ben-Gurion University of the Negev

“The key feature of our method is that it employs fluid displacements as the shape sensor,” Prof. Scharf explains. “Unlike optical sensors, the liquid has no line-of-sight requirements. It penetrates cavities and hidden parts of the object, as well as transparent and glossy materials, thus bypassing all visibility and optical limitations of conventional scanning devices.”

The team applied Archimedes’ principle of fluid displacement — the volume of fluid displaced equals the volume of the submerged part of the object — to recast surface reconstruction as a volume measurement problem. This insight is the foundation of the team’s solution to long-standing challenges in 3D shape reconstruction.
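As a toy illustration of this volume-measurement view — not the authors’ implementation — the following sketch assumes a voxelized object and simulates the displaced-volume curve for a single dip orientation. Differencing successive readings recovers the solid volume of each slice; repeating this over many orientations constrains the full shape.

```python
import numpy as np

def dip_curve(voxels, voxel_volume=1.0):
    """Simulate one dip: cumulative displaced fluid volume as the
    object is lowered slice by slice along the z-axis.

    voxels: 3D boolean array (True = solid material), z is axis 0.
    Returns one cumulative displaced-volume reading per depth step.
    """
    # Solid volume contained in each horizontal slice of the object
    slice_volumes = voxels.sum(axis=(1, 2)) * voxel_volume
    # By Archimedes' principle, total displaced fluid after each
    # depth increment is the cumulative submerged solid volume
    return np.cumsum(slice_volumes)

# A 4x4x4 solid cube: each new slice displaces 16 voxel volumes
cube = np.ones((4, 4, 4), dtype=bool)
print(dip_curve(cube))  # [16. 32. 48. 64.]
```

The final reading of any dip equals the object’s total volume, while the shape of the curve encodes how that volume is distributed along the dipping axis — which is why combining dips from many angles can pin down the geometry.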

The team demonstrated the new method on 3D shapes of varying complexity, including an elephant sculpture, a DNA double helix, and a mother and child hugging. The results show that the dip reconstructions closely match the original 3D models.

The new method is related to computed tomography — an imaging technique that reconstructs an object’s interior from many projections. However, tomography-based devices are bulky and costly and can only be operated in a safe, purpose-built environment.

Our approach is both safe and inexpensive, and a much more appealing alternative for generating a complete shape at low computational cost, using an innovative data collection method.

Prof. Andrei Scharf, Department of Computer Science, Ben-Gurion University of the Negev

The researchers will present their paper, titled “Dip Transform for 3D Shape Reconstruction,” at SIGGRAPH 2017 in Los Angeles, held from July 30th to August 3rd. It is also published in the July issue of ACM Transactions on Graphics. SIGGRAPH spotlights the most ground-breaking computer graphics research and interactive techniques from around the world.

Besides Prof. Scharf, who is also affiliated with the Advanced Innovation Center for Future Visual Entertainment (AICFVE) in Beijing, China, the other researchers involved include Kfir Aberman, Oren Katzir and Daniel Cohen-Or of Tel Aviv University and AICFVE; Baoquan Chen, Qiang Zhou and Zegang Luo of Shandong University; and Chen Greif of The University of British Columbia.

The research project was supported in part by the Joint NSFC-ISF Research Program (No. 61561146397), jointly funded by the National Natural Science Foundation of China and the Israel Science Foundation, the National Basic Research Program (973) grant (No. 2015CB352501), and NSERC of Canada grant 261539.
