Sharp robot vision reduces picking error rate

IoT Use Case: Fizyr & IDS
3 minutes reading time

In logistics companies' warehouses, about 70% of the workforce is engaged in picking and packing. The global pandemic has led both to a decrease in the available labor force in plants and to an increased need for physical distancing. In response, many companies want to automate their logistics processes step by step.

The challenge: No clear vision in poor light

Poor lighting conditions often prevail in the cargo area of a truck. In addition, the surface condition of the goods can vary greatly: overlapping packages, reflective glossy packaging, transparent bags, or tight stacking. The human eye can find its way around this heterogeneous environment; a camera-based robot, on the other hand, has a much harder time recognizing a large number of unknown objects in the harsh logistics environment. These are not the best conditions for robust automation, especially since the robot has to perform a wide range of tasks: picking items, handling a wide variety of packages, palletizing, depalletizing, and loading and unloading cargo areas. It can only manage this by evaluating dozens of grasp poses per second, which in turn requires many degrees of freedom. That places high demands on the software controlling it.

The solution: Precise image capture as the basis for robot control

The software house Fizyr has developed a solution for this based on image capture. The aim was to let integrators choose their hardware freely and assemble the right solution for their individual picking cell. For its modular plug-and-play software, Fizyr uses cameras from IDS Imaging Development Systems GmbH.

In detail, the software works like this: A uEye CP camera first captures the products to be picked or classified. The resulting 2D images are then handed to the algorithm, which classifies the objects: box, bag, envelope, tube, cylinder, etc. The objects can be completely unknown to the algorithm and can differ greatly in shape, size, color, material, and position. In the next step, an Ensenso camera creates a 3D point cloud containing all spatial coordinates. The software combines this with the information gained from the 2D images, analyzes the surfaces of the load for suitable poses, and finally suggests the best possible grasp poses to the robot. This enables it to handle objects at over 100 grasp poses per second, made possible by six degrees of freedom (DoF).
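
To make this two-stage flow concrete, the sketch below shows what such a pipeline could look like in Python. It is a minimal illustration under stated assumptions, not Fizyr's actual implementation or API: all function and class names (classify_objects, fuse_2d_3d, propose_grasp_poses, GraspPose) are hypothetical, and each stage is reduced to a stand-in. It only mirrors the steps described above: 2D classification, fusion with the 3D point cloud, and ranking of six-degree-of-freedom grasp poses.

```python
# Hypothetical sketch of a 2D + 3D grasp-pose pipeline.
# Not Fizyr's or IDS's actual API; all names are illustrative.
from dataclasses import dataclass
import numpy as np

@dataclass
class GraspPose:
    # Six degrees of freedom: 3 for position, 3 for orientation.
    position: np.ndarray      # (x, y, z) in metres
    orientation: np.ndarray   # (roll, pitch, yaw) in radians
    score: float              # suitability estimate for this pose

def classify_objects(image_2d: np.ndarray) -> list[str]:
    """Stand-in for the 2D classifier: assigns a class label
    (box, bag, envelope, tube, cylinder, ...) per detected object."""
    # A real system would run a trained detection/classification model here.
    return ["box", "bag"]

def fuse_2d_3d(labels: list[str], point_cloud: np.ndarray) -> list[np.ndarray]:
    """Stand-in for combining 2D detections with the 3D point cloud:
    returns one point subset (surface patch) per detected object."""
    # Naive even split, for illustration only.
    return np.array_split(point_cloud, max(len(labels), 1))

def propose_grasp_poses(surface: np.ndarray) -> GraspPose:
    """Derive a candidate pose from a surface patch: grasp at the
    centroid, approach along the estimated surface normal."""
    centroid = surface.mean(axis=0)
    # Estimate the dominant surface normal via SVD (smallest singular vector).
    _, _, vt = np.linalg.svd(surface - centroid, full_matrices=False)
    normal = vt[-1]
    # Simplified conversion of the approach direction to roll/pitch (yaw fixed at 0).
    pitch = np.arcsin(-normal[0])
    roll = np.arctan2(normal[1], normal[2])
    return GraspPose(position=centroid,
                     orientation=np.array([roll, pitch, 0.0]),
                     score=float(len(surface)))  # larger patch -> higher score

if __name__ == "__main__":
    image = np.zeros((1080, 1920, 3), dtype=np.uint8)  # placeholder 2D frame
    cloud = np.random.rand(2000, 3)                    # placeholder point cloud
    labels = classify_objects(image)
    patches = fuse_2d_3d(labels, cloud)
    poses = sorted((propose_grasp_poses(p) for p in patches),
                   key=lambda g: g.score, reverse=True)
    print("Best grasp pose:", poses[0].position, poses[0].orientation)
```

Representing a grasp as position plus orientation is exactly what the six degrees of freedom refer to: the robot can place and tilt its gripper freely in space rather than being limited to top-down picks.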

Furthermore, the software performs quality checks and detects defects, which prevents damaged packages from ending up on the sorter. The basis for the robot’s short reaction times is precise image processing, a large camera field of view and high camera speed.

The result: Occupational safety increases, costs decrease

The robot’s “sharp eye” facilitates the picking and packing of goods. Occupational safety increases and logistics costs decrease thanks to the low error rate. The degree of automation is up to the users: they can hand over individual tasks to the robot or automate complete processes, such as the production of a product or the loading and unloading of truck beds.

