Editorial Feature

Using Machine Vision for Robot Guidance


Since the beginning of the Industrial Revolution, factories have undergone massive changes in the production process, from the harnessing of electrical energy and the adoption of electronics and information technologies to the current moment, in which cyber-physical production systems are embracing automation and advanced manufacturing technologies.

Robot-based industrialization has passed through several stages to reach the current state of the art. These stages include safe automation, mobile manipulation and, the latest milestone in the industry, intelligent and perceptive robot systems. Robotics contributes to a more efficient modern industry by increasing productivity, reducing energy and material costs, and providing safer working conditions.

Vision systems are used in industry for inspection, quality control, improvements to the safety of the working environment and the guidance of robots. In robots, vision systems are crucial for human-robot collaboration and for allowing robots to move around and manipulate different objects. A range of machine vision techniques for robot guidance has been adopted in recent years, spurring the industries in which they are applied.

Machine Vision

Vision systems can be scene-related or object-related. The first type uses a camera mounted on the robot for mapping and object localization. In object-related vision systems, the camera is attached to the end-effector of the robot to simulate an eye-in-hand configuration and to acquire different viewpoints of objects.

Optical calibration methods such as laser tracker systems and photogrammetry are used to improve vision accuracy in robots by detecting the spatial position of objects and correcting robot motion if required. This additional measurement system overcomes accuracy deficiencies while exploiting the robot's precise movement.

3D Reconstruction

3D perception is crucial for robots; it plays a key role in accomplishing navigation and the autonomous manipulation of objects in any environment. Visualization of the surroundings in a human-readable way is a prerequisite for an intuitive user interface.

Consequently, vision systems for robots require 3D information, and this is obtained through the process of camera calibration. Calibration determines a mathematical model, a set of parameters that relates the 3D position of an object in space to its 2D image coordinates by mapping the 3D location onto the 2D image.
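As a sketch of this 3D-to-2D mapping, the snippet below applies the pinhole camera model, the standard idealization behind camera calibration. The focal lengths and principal point are illustrative values, not parameters of any particular camera.

```python
# Hypothetical intrinsic parameters: focal lengths (fx, fy) and
# principal point (cx, cy), all in pixels; illustrative values only.
FX, FY = 800.0, 800.0
CX, CY = 320.0, 240.0

def project(point_3d):
    """Map a 3D point (X, Y, Z) in camera coordinates to 2D pixel
    coordinates using the pinhole camera model: u = fx*X/Z + cx."""
    x, y, z = point_3d
    u = FX * x / z + CX
    v = FY * y / z + CY
    return u, v

# A point 2 m in front of the camera, 10 cm to the right and 5 cm
# above the optical axis, lands at pixel (360.0, 220.0).
u, v = project((0.1, -0.05, 2.0))
```

Real calibration estimates these parameters (plus lens distortion terms) from images of a known target; the projection step itself stays this simple.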

Stereo Vision & Photogrammetry

Photogrammetry is the process of taking measurements of an object from photographs; it is used in architecture, geology, engineering and topography. As a 3D reconstruction technique, it requires the same point to be found in a second image so that its 3D position can be obtained by triangulation. Stickers and laser points serve as markers that create high contrast, which ensures reliable detection.
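In the stereo case, once the same point has been matched in both images of a rectified pair, its depth follows directly from the disparity between the two pixel positions. The focal length, baseline and pixel coordinates below are illustrative values:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a matched point in a rectified stereo pair:
    Z = f * B / d, where d is the disparity in pixels."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity

# Illustrative setup: 700 px focal length, 12 cm baseline, a marker
# matched at x = 420 px (left image) and x = 385 px (right image).
# Disparity = 35 px, so depth = 700 * 0.12 / 35 = 2.4 m.
depth = stereo_depth(420, 385, focal_px=700, baseline_m=0.12)
```

Note how depth resolution degrades with distance: a one-pixel matching error matters far more at small disparities, which is one reason high-contrast markers help.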

Time of Flight

A time-of-flight camera obtains 3D information by emitting visible or infrared light pulses toward the scene. The camera captures the light reflected off the objects and, using the delay of the incoming light, estimates the distance to the object from which the light was reflected.
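The underlying calculation is simple: the pulse travels to the object and back, so the measured delay corresponds to twice the distance. A minimal sketch:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance(delay_s):
    """Distance from the round-trip delay of a light pulse.
    The pulse covers the camera-object distance twice, hence /2."""
    return C * delay_s / 2.0

# A reflected pulse arriving 20 ns after emission corresponds to an
# object roughly 3 m away.
d = tof_distance(20e-9)
```

The nanosecond scale of these delays is why practical time-of-flight sensors measure phase shifts of modulated light rather than timing individual pulses directly.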

Structured Light

Structured light equipment is a system with a light source and one or two receptors (cameras). There are two groups of structured light techniques. The first group comprises time-multiplexing techniques, which project a sequence of binary patterns of light and achieve high resolution. However, the object and the camera need to remain static for this to work.
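The idea behind time-multiplexed patterns can be sketched as follows: each projected pattern contributes one bit per pixel (lit or dark), and the accumulated bit sequence identifies the projector stripe that illuminated that pixel. A plain binary code is shown here for clarity; real systems often use Gray codes to reduce decoding errors at stripe boundaries.

```python
def decode_binary_patterns(bits):
    """Recover the projector stripe index for one camera pixel from
    the sequence of binary observations (1 = lit, 0 = dark), one
    observation per projected pattern, most significant bit first."""
    index = 0
    for b in bits:
        index = (index << 1) | b
    return index

# A pixel seen as lit, dark, lit, lit under four successive patterns
# lies in projector stripe 0b1011 = 11.
stripe = decode_binary_patterns([1, 0, 1, 1])
```

With n patterns, 2^n stripes can be distinguished, which is why a short projection sequence yields high spatial resolution as long as nothing moves between frames.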

The second group comprises one-shot techniques, in which only one, unique pattern of light is projected. In these cases, a moving object or camera is not an obstacle. The pattern is unique because each point is identified by its surrounding points. The light is deformed by the scene and recorded by the camera as a strip of bands of different widths, resembling a zebra pattern. Depth is obtained by intersecting the known light planes with the camera's lines of sight.

Light Coding

In these techniques, a laser source continuously projects a semi-random pattern of light, and the reflected pattern is detected by an infrared camera for analysis. The distance to each dot of this semi-random pattern is calculated, and the shape and size of the object reflecting the pattern can be estimated.

Laser Triangulation

The camera, the light-emitting laser and the object form a triangle in which the distance between the camera and the laser (the baseline) and the angle at the laser corner are known. The angle at the camera corner can be determined from where the laser spot appears in the image, and together this information fixes the shape and size of the triangle, thus providing the distance to the object.
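Since two angles and the included side are known, the triangle can be solved with the law of sines. The baseline and angles below are illustrative values, not a real sensor configuration:

```python
import math

def laser_triangulation_distance(baseline_m, laser_angle_rad, camera_angle_rad):
    """Distance from the camera to the laser spot on the object.
    The baseline (camera-laser side) is opposite the angle at the
    object; the camera-object side is opposite the angle at the laser,
    so the law of sines gives the distance directly."""
    object_angle = math.pi - laser_angle_rad - camera_angle_rad
    return baseline_m * math.sin(laser_angle_rad) / math.sin(object_angle)

# Illustrative numbers: 20 cm baseline, laser fixed at 70 degrees,
# spot observed by the camera at 80 degrees.
d = laser_triangulation_distance(0.2, math.radians(70), math.radians(80))
```

Sweeping the laser (or moving the object) and repeating this calculation for each image column is how laser-line profilers build up a full surface scan.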

Conclusion

Using vision techniques to guide a robot can be challenging, as it requires the robot to navigate a complex environment and manipulate various objects within it. However, these developments open up new possibilities for modern technologies. Surface texture, lighting conditions and object occlusion are factors that still pose a challenge in the development of vision systems for robots.

The spatial coordinates of a huge number of points can be obtained by calculating the distance that light travels from its source to the point of reflection on an object. Various techniques exploit this principle, creating exciting ground for exploration to determine which technique is most appropriate for the task a robot needs to accomplish.

3D machine vision is the future of modern industry and robotics. Though challenging, utilizing vision systems in robots is an integral part of production and a huge step toward creating a more efficient production environment.


Written by

Mihaela Dimitrova

Mihaela's curiosity has pushed her to explore the human mind and the intricate inner workings of the brain. She has a B.Sc. in Psychology from the University of Birmingham and an M.Sc. in Human-Computer Interaction from University College London.

Citations

Please use one of the following formats to cite this article in your essay, paper or report:

  • APA

    Dimitrova, Mihaela. (2019, October 16). Using Machine Vision for Robot Guidance. AZoM. Retrieved on April 24, 2024 from https://www.azom.com/article.aspx?ArticleID=18560.


