How advances in machine vision are driving the factory revolution


Article by: Andrea Pufflerova

Factories are being rebuilt from the ground up as the fourth industrial revolution gains momentum.

The fourth industrial revolution is advancing. But what exactly is it? Which technological advances are enabling this transition to more advanced means of production and processes? And what role does image processing play in this vast gear train? Let's take a quick tour of the history of manufacturing to put the changes now taking place in factories in context, a history marked by milestones that each introduced new ways of mechanizing production and took it to the next level.

The first breakthrough (the First Industrial Revolution) came in the 18th century with the harnessing of steam power.

At the end of the 19th century, electrical energy began to displace steam power. This second industrial revolution made mass production possible through the introduction of the assembly line.

In the second half of the 20th century, computers, electronics, and digital technology emerged, sparking the spread of automation. This third industrial revolution, or "digital revolution," made it possible to automate entire production processes through the use of computers, machines, and robots. Then the capabilities of man and machine began to merge. These "cyber-physical systems" marked the fourth industrial revolution, or Industry 4.0, and transformed traditional production facilities into intelligent factories in which everything is connected by a communication network for data exchange – between machines, people, and systems.

Machine vision plays a central role in this interplay of technologies. Let’s take a look at its impact on future factory automation and how it is driving factory transformation.

Why a smart factory cannot be smart without machine vision

A smart factory is a highly digitized, fully automated, networked and flexible manufacturing environment that is dependent on data and communication. It uses the most advanced technologies that enable the collection, communication and analysis of data, including machine vision, artificial intelligence and the industrial Internet of Things.

Machine vision plays a central role in data generation and collection: it captures the physical world and converts it into digital data in the form of point clouds, so that the data can be further evaluated and translated into valuable information by AI algorithms.
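To make this concrete, a point cloud is simply a set of (X, Y, Z) samples of object surfaces. The sketch below uses synthetic NumPy data – the array shapes, units, and variable names are illustrative assumptions, not the output of any particular 3D camera SDK – to show the kind of digital measurements an AI pipeline might derive from a scan:

```python
import numpy as np

# A point cloud from a 3D camera is typically an N x 3 array of
# (X, Y, Z) coordinates, here in metres (synthetic data for illustration).
rng = np.random.default_rng(0)
cloud = rng.uniform(low=[0.0, 0.0, 0.5], high=[0.4, 0.3, 0.6], size=(1000, 3))

# Basic digital measurements derived from the raw geometry:
centroid = cloud.mean(axis=0)   # geometric centre of the scanned object
bbox_min = cloud.min(axis=0)    # axis-aligned bounding box, e.g. for
bbox_max = cloud.max(axis=0)    # size and placement checks

print("centroid:", centroid)
print("bounding box size:", bbox_max - bbox_min)
```

Downstream algorithms – segmentation, matching, defect detection – all start from exactly this kind of raw coordinate data.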

It also expands robotic skills to an unprecedented level. Robots equipped with 3D image processing and intelligence can perform the most complex and demanding tasks within a factory. 3D vision helps robots navigate spaces and perform operations that require dexterity. It is critical for tasks like real-time process control, product inspection and quality control, object handling and sorting, robot guidance, and predictive maintenance of machines. 3D data helps identify problems such as defective machines and enables rapid intervention.
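As a minimal sketch of one of these tasks – dimensional quality control – the function below flags a part whose measured height falls outside tolerance. The function name, tolerance scheme, and synthetic scan are my own illustrative assumptions, not a vendor API:

```python
import numpy as np

def height_ok(cloud: np.ndarray, nominal: float, tol: float) -> bool:
    """Check a part's measured height against a nominal value.

    `cloud` is an N x 3 array of (X, Y, Z) points for one scanned part,
    in metres. Real inspection pipelines compare full surface geometry
    against a CAD model; a single height check keeps the idea visible.
    """
    height = cloud[:, 2].max() - cloud[:, 2].min()
    return abs(height - nominal) <= tol

# A synthetic scan of a 20 mm part (500 sampled surface points):
rng = np.random.default_rng(1)
part = np.column_stack([
    rng.uniform(0.0, 0.05, 500),    # X positions on the part
    rng.uniform(0.0, 0.05, 500),    # Y positions on the part
    rng.uniform(0.0, 0.020, 500),   # Z spans the part's height
])

print("pass" if height_ok(part, nominal=0.020, tol=0.001) else "reject")
```

The same pass/reject signal, fed back over the factory network, is what enables the rapid intervention the article describes.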

To make these robotic tasks possible, machine vision must provide large amounts of high-quality, real 3D data. AI algorithms can then work with this data, learn from it, and turn it into useful information that can be passed on to other technologies inside and outside the factory so that appropriate decisions can be made.

Facilities that use machine vision to optimize manufacturing processes can see substantial gains in productivity and efficiency. This leads to lower costs, better product quality, less waste, and greater resilience to a shrinking workforce.

The market offers a large number of different machine vision technologies. So what criteria should you use to select the right image processing for a smart factory?

Machine Vision Challenges and Advances

The development of image processing technologies is not over yet. Developers of 3D vision systems are constantly improving their solutions in order to take vision-guided robotics one step further. But one challenge could not be solved with standard technologies, which severely restricted the possible uses within a factory.

That challenge was capturing moving scenes, with its seemingly "inherent" trade-off between quality and speed.

Imagine products or product components being placed on a moving conveyor belt. When they reach a robot equipped with a 3D vision system, the vision system scans the parts one by one. The output of each scan is a 3D point cloud with exact X, Y, and Z coordinates. This 3D data is used to guide the robot so it can approach each part, pick it up, and move it to another location, or perform some other action on it. Alternatively, the 3D data can be used for inspection and quality control. These robotic tasks may seem fairly simple, but they are not. In fact, they represent some of the most demanding machine vision applications.

Here is why: traditional 3D sensing technologies could not deliver a high-quality point cloud of objects moving at high speed. Time-of-flight systems, for example, offer high scanning speeds and near-real-time processing, but their moderate noise levels limit the detail they can capture. The result is low-resolution output.

Structured light systems, on the other hand, offer sub-millimeter resolution and high accuracy, but at the expense of speed. In other words, structured light systems can deliver high-quality 3D data only when the scanned object and the camera are not moving.

The trade-off between quality and speed limits vision-guided robotics and image processing applications to tasks involving static scenes and stationary vision systems. However, parallel structured light, which enables 3D area scanning in motion while offering high resolution and accuracy, can overcome this limitation. The technology was developed by Photoneo and enables the capture of moving scenes without motion artifacts.

The ability to scan dynamic scenes opens up countless applications that previously could not be automated.

This includes tasks that require hand-eye coordination – i.e., mounting a 3D vision system directly on the robot arm. Traditionally, the robot had to stop moving in order to make a high-quality 3D scan. This is no longer necessary, which significantly shortens cycle times and increases productivity and efficiency.

Resisting the effects of movement or vibration is a new vision capability that ushers in a new era in factory automation. Along with other advances in this area, it is helping transform traditional manufacturing facilities into the smart factory of tomorrow.

This article was originally published on EE Times Europe.

Andrea Pufflerova is a PR specialist at Photoneo and the author of technological articles on intelligent automation solutions based on robot vision and intelligence.


