How AI Can Automate Pick & Place Jobs

It is hard to deny that AI-driven automation already outperforms humans in numerous ways: it is strong, fast, almost error-free, and requires no rest breaks. This superiority matters most when operations must run continuously and reliably at a consistent quality and a high level of performance.

One reason to use AI in the machine vision environment is to improve process efficiency and cost-effectiveness. The “Vision Guided Robot” use case highlights how a robot and an integrated AI vision camera can intelligently automate common pick-and-place tasks, eliminating the need for a PC altogether.

For “smart gripping,” various disciplines must work together efficiently. For example, if robots are to sort products according to their material, size, or quality, the products must first be identified, evaluated, and localized before they can be grasped.

With rule-based image processing systems, this is not only extremely time-consuming, especially for small batch sizes, but also rarely economically viable.

With AI-based inference, robots can be equipped with the product knowledge and expertise of an accomplished worker.

For these subtasks, significant leaps in technology development are no longer required: it is enough to have the right products working together effectively, in an interdisciplinary way, as a “smart robot vision system.”

EyeBot Use Case

In a production line, objects are often randomly distributed on a conveyor belt. They must first be recognized and picked, then, for example, wrapped in packaging or passed on to an appropriate station for further processing or analysis.

The software company urobots GmbH devised a PC-based approach for detecting objects and directing robots. Its extensively trained AI model identifies the position and orientation of objects in camera images and, from this, determines the best grip for a robot.

This inspired the next goal: to adapt this solution to IDS Imaging Development Systems GmbH's AI-based embedded vision system. According to urobots, two of the most important factors to consider when creating this solution were:

  1. The user should be able to adapt the system effortlessly to multiple use cases without any specific AI expertise. The system must keep functioning even if production-related factors change, such as object appearance, lighting, or the addition of new object types.
  2. The entire system has to work without a PC, with the device components connected directly, so that it is lightweight, space-saving, and cost-effective.

IDS already provides both prerequisites with the IDS NXT inference camera system.

All image processing runs on the camera, which communicates directly with the robot via Ethernet. This is made possible by a vision app developed with the IDS NXT Vision App Creator, which uses the IDS NXT AI core. The Vision App enables the camera to locate and identify pre-trained (2D) objects in the image information. For example, tools that lie on a plane can be gripped in the correct position and placed in a designated place. The PC-less system saves costs, space and energy, allowing for easy and cost-effective picking solutions.

Alexey Pavlov, Managing Director, urobots GmbH

Position Detection and Direct Machine Communication

A trained neural network can recognize all of the items in an image, along with their position and orientation. It can do this even for objects with a lot of natural variation, such as food, plants, or other flexible items, as well as for rigid objects that always look largely the same.

This results in very stable position and orientation recognition of the objects. The network was trained for the client by urobots GmbH using its own software and then uploaded to the IDS NXT camera.
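Conceptually, the output of such a network can be thought of as a list of detections, each carrying a class label, an image position, and a rotation angle, from which a grip can be chosen. The following Python sketch is purely illustrative; the data structure and function names are assumptions and do not reflect the actual urobots or IDS NXT interfaces.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Detection:
    """One detected object: class label, image position (pixels), and orientation (degrees)."""
    label: str
    x: float
    y: float
    angle_deg: float
    confidence: float

def pick_grip_target(detections: List[Detection], min_confidence: float = 0.8) -> Optional[Detection]:
    """Pick the most confident detection above a threshold as the next grip target."""
    candidates = [d for d in detections if d.confidence >= min_confidence]
    return max(candidates, key=lambda d: d.confidence) if candidates else None

# Example results, as a hypothetical vision app might report them
results = [
    Detection("wrench", x=412.0, y=233.5, angle_deg=37.2, confidence=0.94),
    Detection("screwdriver", x=120.4, y=310.8, angle_deg=-12.5, confidence=0.81),
]
target = pick_grip_target(results)
if target:
    print(f"Grip {target.label} at ({target.x}, {target.y}), gripper rotated {target.angle_deg} deg")
```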

For this step, the network had to be converted into a special, optimized format resembling a kind of “linked list.”

The IDS NXT ferry tool made it very simple to port the trained neural network to the inference camera. During conversion, every layer of the CNN becomes a node descriptor that precisely describes that layer. The end result is a complete concatenated list of the CNN in binary representation.
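As a rough illustration of the general idea (the actual IDS NXT ferry format is not public, so every detail below is an assumption), a layered network can be serialized layer by layer into fixed-size binary node descriptors that are simply concatenated:

```python
import struct

# Illustrative layer-type codes; the real node descriptor layout is an assumption.
LAYER_TYPES = {"conv": 1, "pool": 2, "fc": 3}

def encode_layer(layer_type: str, out_channels: int, kernel_size: int) -> bytes:
    """Pack one CNN layer into a fixed-size binary node descriptor (type, channels, kernel)."""
    return struct.pack("<III", LAYER_TYPES[layer_type], out_channels, kernel_size)

def encode_network(layers) -> bytes:
    """Concatenate all node descriptors into one binary blob describing the whole CNN."""
    return b"".join(encode_layer(*layer) for layer in layers)

# Example: a tiny CNN described as (layer type, output channels, kernel size)
cnn = [("conv", 16, 3), ("pool", 16, 2), ("conv", 32, 3), ("fc", 10, 1)]
blob = encode_network(cnn)
print(f"{len(cnn)} layers encoded into {len(blob)} bytes")
```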

The FPGA-based CNN accelerator IDS NXT deep ocean core, built specifically for the camera, could then execute this universal CNN optimally.

The vision app built by urobots was then used to calculate optimal grip positions for a robot based on the detection data. However, this alone did not solve the challenge. In addition to determining what, where, and how to grip, direct communication between the IDS NXT camera and the robot was essential.

This task should not be underestimated: it is frequently the deciding factor in how much money, time, and labor must be put into a solution. To transmit concrete task instructions directly to the robot, urobots implemented an XML-RPC-based network protocol in the camera's vision app using the IDS NXT Vision App Creator.
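The specific protocol and method names defined by urobots are not published, but the general pattern can be sketched with Python's built-in XML-RPC modules: the camera side exposes the latest grip instruction, and the robot side fetches it over Ethernet. All method names, ports, addresses, and payload fields below are assumptions for illustration only.

```python
# Camera side (sketch): an XML-RPC server offering the latest grip instruction.
from xmlrpc.server import SimpleXMLRPCServer

def get_grip_instruction():
    """Return the most recent grip target as a dictionary (illustrative payload)."""
    return {"label": "wrench", "x": 412.0, "y": 233.5, "angle_deg": 37.2}

server = SimpleXMLRPCServer(("0.0.0.0", 8080), allow_none=True)
server.register_function(get_grip_instruction)
server.serve_forever()
```

```python
# Robot side (sketch): poll the camera for the next grip instruction over Ethernet.
import xmlrpc.client

camera = xmlrpc.client.ServerProxy("http://192.168.0.10:8080")  # example camera address
grip = camera.get_grip_instruction()
print(f"Move to ({grip['x']}, {grip['y']}) and rotate the gripper by {grip['angle_deg']} deg")
```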

The AI vision app achieved a positional accuracy of ±2° and detected objects in around 200 milliseconds.

Figure 1. The neural network in the IDS NXT camera localizes and detects the exact position of the objects. Based on this image information, the robot can independently grasp and deposit them. Image Credit: IDS Imaging Development Systems GmbH

PC-Less: More Than Merely Artificially Intelligent

It is not just the artificial intelligence that makes this use case so intelligent. Two further aspects allow this solution to function without an additional PC. The first is that, because the camera does not merely transmit images but also delivers image processing results, the PC hardware and its accompanying infrastructure can be omitted.

This, of course, reduces the system’s purchase and maintenance costs. It is also often important that process decisions are made directly at the production site. Downstream processes can then be completed faster and without delay, which in some situations allows the cycle rate to be increased.

The other aspect concerns development costs. AI vision works through network training rather than the rule-based methods of classical image processing, which changes how image processing tasks are approached and handled.

The quality of the results is no longer dictated by program code written manually by image processing specialists and application developers. In other words, if an application can be solved with AI, IDS NXT can save the user time and money.

This is due to the user-friendly and robust software environment, which allows each user to train a neural network, build the corresponding vision app and execute it on the camera.

Summary

This EyeBot use case illustrates the future of computer vision: PC-less, integrated AI vision applications.

The compact embedded system offers further benefits, such as expandability through the vision app-based concept, application development for diverse target groups, and end-to-end manufacturer support.

In an EyeBot application, the competencies are effectively distributed: the user can stay focused on the product in question, while IDS and urobots concentrate on training and running the AI for image processing and robot control.

Thanks to Ethernet-based communication and the open IDS NXT platform, the vision app can also be readily adapted to other objects, different robot models, and thus many other related applications.

This information has been sourced, reviewed and adapted from materials provided by IDS Imaging Development Systems GmbH.

For more information on this source, please visit IDS Imaging Development Systems GmbH.
