Robotics and Vision: Liberated and demystified
06 October 2020
Robotics and Vision – as the song goes, you can’t have one without the other. Advances in both technologies have made many new processes ripe for automation. Yet even experienced engineers can still be nervous at the prospect of integrating a robot vision system.
In the past, the skills required to set up an application, and to keep it out of the white elephant graveyard of failed projects, were feared as something of a ‘black art’. But now a revolution is underway, because both technologies have become more accessible and much quicker and easier to set up.
In this exclusive article for DPA, we invited Neil Sandhu, SICK UK Product Manager and Vice-Chairman of the UK Industrial Vision Association, to answer some frequently asked questions about robotics and vision for machine designers and users. He provides some practical advice to show how easy it can be to set up common applications.
There’s a sense of liberation in the air. Production environments of every size and kind are beginning to anticipate new freedoms that will make stationary and mobile robots more accessible and affordable than they have ever been before.
Robots need ‘eyes’ in the form of machine vision and the two technologies have developed symbiotically. Together, they are shrugging off the shackles of their exclusive, and often expensive, past and are delivering new automation opportunities into the hands of the many.
Industrial robots will never replace people; they will work for them, minimising people’s intervention in repetitive, dangerous or heavy tasks, and leaving the humans free to add more value to processes. Now, we are seeing exciting developments in lightweight, adaptable and lower-cost articulated arms, that collaborate and cooperate safely with human colleagues.
What advances have been made in vision technology?
Vision has been fundamental to these advances. Hardware developments such as CMOS, Smart Cameras, time-of-flight and stereovision have improved sensor performance hugely. Many programmable vision sensors can now process applications onboard the device, while high-powered streaming cameras can be configured via Sensor Integration Machines (SIMs) to deliver localised, edge integrations.
These robot eyes can’t just be ‘bolted on’; you need to add intelligence and communication to make them guides. It is robot guidance software and systems that have opened up countless new applications to many more users – taking over common repetitive and heavy tasks such as picking from a bin or feeder, or machine tending.
Most important of all, for vision to truly bring the power of robotics to the people, systems need to be simple to install and use. Until recently, most vision systems required a heavy investment in expert support to design and install, as well as a great deal of programming knowledge and external computing power to set up. Now vision systems are becoming ‘plug and play’, and really easy to install and commission.
How is the technology developing?
There are plenty of cameras to choose from – plenty of ‘eyes’ to look at the task in hand. They are all very good at what they do. At SICK we like to think about vision systems in a different way: What if we thought about a camera as being like a smart phone? We all have our favourite smart phones and they are all, essentially, quite similar. We treat them as a ‘blank canvas’ onto which we download the ‘App’ we need.
So, SICK developed its AppSpace development environment. The solution lies in the software, not the hardware, and it’s deployed onto the camera, just like a smart phone. Developers can choose to adapt existing Apps or create their own custom applications.
The beauty of this approach is that a small range of cameras can be put to work on a myriad of different robot guidance tasks, and even be adapted to switch seamlessly between more than one application.
End users don’t need to worry about programming or how to get the application up and running. They can purchase a device with the app already onboard. It comes out-of-the box, and the set-up is intuitive – just like installing an App on your phone. Or you can choose a software-only package and adapt an existing device to a new application. This way, you can easily redeploy your existing camera for a new application.
In a typical cobot application developed in AppSpace, a single camera running an app talks to the robot. The vision system can be ‘trained’ to find the shape of a part or product. It sends the data to the robot controller so that it knows the co-ordinates of the product to pick, as well as its orientation. Armed with this information, the robot can grip the part precisely and accurately every time without damage. There’s no need for mechanical aids such as fixtures or guides, and measurement and quality inspections can be carried out simultaneously.
Critically, the camera can talk directly to the robot. Thus, we can easily support belt picking, picking from feeders, packaging, robot machine tending or picking up kits of parts. We can train all these things into the system and it’s very easy for the user to configure, because it is just like using an app.
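As an illustration of the data flow described above – the camera finding a part and telling the robot where it is and how it is oriented – the sketch below shows the kind of pick message a vision app might pass to a robot controller. The field names and message format are assumptions for illustration only, not SICK’s or any robot vendor’s actual protocol.

```python
from dataclasses import dataclass

# Hypothetical sketch: the pick data a vision app might send to a
# robot controller - a planar position plus an orientation so the
# gripper can align with the part.
@dataclass
class PickTarget:
    x_mm: float       # position in the robot's coordinate frame
    y_mm: float
    angle_deg: float  # part orientation for gripper alignment

def to_robot_message(target: PickTarget) -> str:
    # Many controllers accept a simple delimited string over TCP/IP;
    # the exact format here is invented for illustration.
    return f"PICK,{target.x_mm:.1f},{target.y_mm:.1f},{target.angle_deg:.1f}"

print(to_robot_message(PickTarget(152.4, 88.0, 37.5)))
# prints: PICK,152.4,88.0,37.5
```

In practice the robot vendor’s interface (such as a URCap plug-in, discussed below) hides this exchange from the user entirely.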
How has SICK collaborated with cobot manufacturers?
SICK has collaborated with many robot manufacturers, including Universal Robots (UR), to develop an interface that enables vision-guided robot pick and place, quality inspection and measurement to be set up with minimum time and effort. The SICK Inspector PIM60 URCap software is a simple yet powerful UR+ plug-in that has been developed to ensure easy integration between a range of UR robots and the SICK Inspector PIM60 2D vision sensor. The Inspector PIM60 is an embedded vision sensor, which means that all calculations are done on the sensor itself, and there is no need for an external computer.
After a quick calibration routine, the URCap software can output pick-points in the robot’s coordinate system and the Inspector PIM60 also enables inspection and measurement tasks for pass/fail-criteria or trending.
The UR control unit provides the user with access to live images from the sensor, so calibrating and aligning the sensor is easy. This also allows the gripper position and the switching between reference objects to be adjusted directly. As a result, an image-based robot guidance system can be set up in just a few minutes. Thanks to the availability of further toolboxes, the range of possible applications is virtually unlimited.
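The calibration step mentioned above essentially establishes a transform from image pixels to the robot’s coordinate system. As a rough sketch of the underlying idea (this is a generic rigid transform, not the PIM60’s actual calibration routine, and all parameter names are mine), a calibrated sensor can convert a pixel location into robot coordinates like this:

```python
import math

# Illustrative sketch: after calibration, a pixel position can be
# mapped into the robot's coordinate frame with a rigid transform
# (rotation by theta, scaling to millimetres, then translation).
def pixel_to_robot(px, py, scale_mm_per_px, theta_deg, tx_mm, ty_mm):
    theta = math.radians(theta_deg)
    x = scale_mm_per_px * (px * math.cos(theta) - py * math.sin(theta)) + tx_mm
    y = scale_mm_per_px * (px * math.sin(theta) + py * math.cos(theta)) + ty_mm
    return x, y

# With no rotation, 0.2 mm/pixel and a (100, 50) mm offset:
x, y = pixel_to_robot(640, 480, 0.2, 0.0, 100.0, 50.0)
print(round(x, 1), round(y, 1))  # prints: 228.0 146.0
```

The URCap handles this conversion internally, which is why the pick-points it outputs are already in the robot’s coordinate system.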
The SICK Inspector PIM60 URCap is quick and easy to program and configure without the need for a separate PC or specialist software expertise. Standard configurations such as changing jobs and pick-points, calibration and alignment are done directly from the robot control pendant, making the everyday operations fast and straightforward.
More advanced operations such as inspection and dimension measurement of objects prior to picking, can be done through SOPAS, SICK’s standard device configuration tool. The SICK Inspector URCap is also ready to expand through extra data fields that can accommodate results from both detailed object inspections and measurements.
What common robot tasks can be solved with simple App-based vision systems?
Robot guidance has really become child’s play for many common applications such as picking from a belt, a parts bin or a feeder. It’s also becoming much easier to program and adapt mobile vehicles such as service robots, automated guided vehicles and carts.
Now SICK has developed both 2D and 3D vision-guided part localisation systems using the AppSpace software development platform. This opens up new applications such as picking specific small parts like bolts from a deep bin of mixed parts and placing them on a conveyor, or selecting part-completed items and placing them on a press or machining centre.
SICK’s Trispector P1000 programmable 3D vision camera was among the first to enable reliable, continuous in-line product detection to be customised for robotic belt picking. The SICK Belt Pick Toolkit App comes ready-made as a 3D vision-guided belt-picking solution for both industrial and collaborative robots. With the addition of z-axis control through 3D vision, even products with complex profiles can be picked from variable heights without risk of damage.
When the Trispector P1000 belt-picking application proved hugely successful, SICK moved on to release vision solutions for two other common robot tasks: pick and place and bin picking.
Pick and place
The SICK PLOC2D is a 2D vision system for robot localisation based on the SICK Inspector P vision sensor. Powered by a SICK-developed algorithm, it can be used with static workstations, moving belts, or bowl feeders to handle much smaller parts than has previously been possible.
The SICK PLOC2D provides a powerful 2D imaging system which connects directly to the robot controller or PLC. An EasyTeach function matches objects against taught-in shapes down to a resolution of 0.5 pixels, with rotational measurement to 0.1°. The approved shape and its location in the field of view are output to the robot controller to guide picking.
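The matching step described above can be thought of as scoring each candidate location against the taught-in shape and only reporting confident matches to the robot. The toy sketch below illustrates that filtering idea; the score threshold, data layout and function name are assumptions for illustration, not the PLOC2D’s actual algorithm:

```python
# Toy sketch of shape-match filtering: each candidate carries a match
# score against the taught-in shape, and only locations above a
# confidence threshold are passed on to the robot controller.
def filter_matches(candidates, min_score=0.8):
    # candidates: list of (score, x_px, y_px, angle_deg)
    return [(x, y, a) for score, x, y, a in candidates if score >= min_score]

found = filter_matches([
    (0.95, 312.5, 201.0, 12.3),   # good match, reported
    (0.42, 88.0, 77.0, 180.0),    # clutter, rejected
])
print(found)  # prints: [(312.5, 201.0, 12.3)]
```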
Can robot vision Apps handle more challenging robot tasks?
The need to pick up components that have been delivered to a factory in a container, bin or stillage and transfer them onto a conveyor belt for onward processing is a common robotic task. To fully appreciate the complexity for the vision system, cast your mind back to a game you may have played in your childhood called ‘pick up sticks’.
Even quite a small child has the keen vision and dexterity to pick up the uppermost piece from a pile of randomly arranged objects without disturbing the others. But for robot guidance, it is the ultimate task to master. Until recently it would have taken a great deal of money, programming complexity and sophisticated, heavyweight robot hardware to replicate a task that is, literally, child’s play. For this reason, most 3D part localisation systems, such as SICK’s PLB system, have been used for larger-scale, heavyweight industrial robot applications, mainly in the automotive industry.
The SICK PLB520 uses a stereoscopic vision camera to enable 3D vision-guided bin picking of much smaller objects than has previously been possible. The SICK PLB520 robot guidance system recognises the correct part profile, calculates which part is uppermost and most accessible, and then finds the optimum gripping point to pick it and place it exactly where required without collisions. It then chooses the next part at another angle and repeats the task at high speed.
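The core selection idea – pick the part sitting highest in the bin, since it is the most accessible without collisions – can be sketched in a few lines. The data model below is hypothetical and this is only the ranking step, not the full collision-aware path planning a real bin-picking system performs:

```python
# Minimal sketch of bin-pick candidate selection: from the recognised
# parts' 3D positions, choose the one highest in the bin (largest z),
# i.e. the uppermost and most accessible piece.
def choose_next_pick(parts):
    # parts: list of dicts with positions in millimetres
    return max(parts, key=lambda p: p["z_mm"])

parts = [
    {"id": 1, "x_mm": 10.0, "y_mm": 5.0,  "z_mm": 42.0},
    {"id": 2, "x_mm": 30.0, "y_mm": 12.0, "z_mm": 71.5},  # uppermost
    {"id": 3, "x_mm": 55.0, "y_mm": 40.0, "z_mm": 18.2},
]
print(choose_next_pick(parts)["id"])  # prints: 2
```

After each pick, the scene is re-imaged and the ranking repeats, which is how the system works its way down through a jumbled bin.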
Picking from racks
The SICK PLR is a self-contained robot guidance solution that combines 2D and 3D vision to support automated assembly processes where large parts, for example car body panels, are stored in carriers or racks. The parts may hang in slightly different positions and orientations, especially if the racks are bent or the parts are not precisely fixtured.
The system works by first taking a picture of the part and looking for contrasting features, such as drill holes. It then projects a laser cross onto a flat area of the part and takes a second image. The resulting data enables the system to calculate the correct distance and any pitch, roll or yaw of the part, with the information communicated so that the robot can safely grip it.
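The pitch-and-roll calculation mentioned above boils down to recovering the orientation of the part’s surface. As a generic illustration (not SICK’s actual PLR algorithm – the point data and angle conventions here are assumptions), three 3D points measured on a flat area of the part are enough to recover the surface normal, from which pitch and roll follow:

```python
import math

# Illustrative sketch: recover a flat surface's normal from three 3D
# points on it (via the cross product of two in-plane vectors), then
# derive pitch and roll of that surface relative to the camera axis.
def plane_normal(p1, p2, p3):
    ux, uy, uz = (p2[i] - p1[i] for i in range(3))
    vx, vy, vz = (p3[i] - p1[i] for i in range(3))
    n = (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

def pitch_roll_deg(normal):
    nx, ny, nz = normal
    pitch = math.degrees(math.atan2(nx, nz))  # tilt about the y-axis
    roll = math.degrees(math.atan2(ny, nz))   # tilt about the x-axis
    return pitch, roll

# A part hanging perfectly flat and square to the camera:
n = plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(pitch_roll_deg(n))  # prints: (0.0, 0.0)
```

A tilted rack or loosely fixtured panel simply shifts those three points, and the same arithmetic reports the non-zero pitch and roll the robot must compensate for.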
Are there any Apps for mobile vehicles and service robots?
Automated, or semi-automated vehicles are essentially mobile robots and the same integration principles apply. Complete plug and play App solutions can be used to adapt existing vehicles, or to make the design and configuration of new machines much quicker, simpler and more cost-effective.
For example, the recently released SICK Pallet Pocket and SICK Dolly Positioning SensorApps run on SICK’s Visionary T-AP 3D time-of-flight snapshot camera. They promise to cut out delays associated with lining up both automated and manual forklifts to load pallets in high-bay warehouses, as well as positioning AGVs to collect dollies.
The new Apps, developed in SICK’s AppSpace software environment, work by positioning the camera in front of the pocket or dolly chassis within a working range of 1.5m to 3m. Using a single shot of light, the SICK Visionary T-AP 3D camera captures a 3D image, then pre-processes and evaluates the co-ordinates of the pallet pocket or space under the dolly, before outputting to the vehicle controller. The information can also be sent to a driver display to aid manual forklift operation, particularly useful in high-bay warehouses.
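The output step described above – a positioning fix handed to the vehicle controller, valid only within the camera’s stated 1.5 m to 3 m working range – can be sketched as follows. The function, field names and return convention are all hypothetical, purely to illustrate the guidance handshake:

```python
# Hypothetical sketch of the positioning output: report the pocket's
# lateral and vertical offsets to the vehicle controller only when the
# camera sits within its 1.5 m to 3.0 m working range.
def pocket_guidance(distance_m, offset_x_mm, offset_y_mm):
    if not 1.5 <= distance_m <= 3.0:
        return None  # out of working range: no valid fix
    return {"x_mm": offset_x_mm, "y_mm": offset_y_mm}

print(pocket_guidance(2.1, -14.0, 6.5))  # prints: {'x_mm': -14.0, 'y_mm': 6.5}
print(pocket_guidance(0.8, -14.0, 6.5))  # prints: None
```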
Developed in SICK AppSpace, the SICK Pallet Pocket and SICK Dolly Positioning SensorApps are supplied already loaded onto the SICK Visionary T-AP camera and ready for use. As self-contained hardware components, they are easy to fit to existing machinery, as well as being available for use in new industrial truck designs. The App can be easily adjusted to a wide range of pallet and dolly types. Setting up the sensor is easy with the SICK SOPASair configuration and diagnostics tool, which has an intuitive web-browser graphical user interface.
Parallel advances in vision systems and robot technologies have made it easier than ever for users to attempt automation projects that might previously have remained in the ‘pending’ pile. Instead of time-consuming development from scratch, with the risk of a frustrating system being switched off or side-lined, ready-made software and plug-in systems offer proven reassurance for a fraction of the development time and cost.
Robotics has truly been ‘democratised’, bringing the opportunity for more flexible and adaptable Industry 4.0 processes that are proven to work reliably. The possibilities for robotic automation of manufacturing processes have multiplied. Speak to the SICK team if you would like advice on how to automate more of your processes using ready-made robot guidance systems that have never been simpler.
For more information please contact Andrea Hornby on 01727 831121 or email email@example.com.