
More challenging roles lie ahead for robots

09 July 2012

We tend to associate robots with rather mundane, repetitive tasks but, of course, they are now able to undertake much more complex operations that only a decade ago would have been considered well beyond the capabilities of any mechatronic system. Les Hunt takes a look at recent developments.

With reported agricultural labour shortages all over the world and demographics showing the average age of farmers steadily climbing, complacency about the security of our food production isn’t an option. That was one stark message delegates received at the European Robotics Forum 2012 back in March.

According to Forum speaker Professor Simon Blackmore of Harper Adams University College, a reliance on large-scale, mechanised agriculture, combined with cheap labour in emerging economies, has confined the routine deployment of robotics to a small number of specific tasks, such as milking, feed distribution and farm cleaning.

“Earlier attempts to build complex robots capable of using virtual sight to, for example, harvest difficult-to-handle or delicate crops met with the conclusion that such robots were not sufficiently robust, were too slow and too expensive,” he says. The combination of human hand-eye co-ordination, dextrous manipulation and advanced object recognition may have been desirable, but replicating it proved simply too challenging.

“At the moment, crops are drilled in straight rows to suit machines, but what if they were drilled to follow the contours of the land, or to take account of the micro-level environmental conditions within a portion of a field?” asks Professor Blackmore. “The potential boost to production we could generate if harvests were staggered to suit the crop rather than mechanisation is immense. We’re talking about micro-tillage, mechanical weeding and planting using small, smart, autonomous, modular machines.”

The Euro Agri Robotics Network (EARN) is looking to realise some of these goals. One application demonstrated at the recent Forum was ‘Crop Scout’, a robotic monitoring platform capable of measuring crops and checking for disease. Currently, farmers routinely use pesticide and herbicide prophylactically, spraying their crops whether or not pests and disease are present. Trials with the Crop Scout have produced reductions of as much as 98% in the amount of spray used, because the robotic sprayer dispatched by the Crop Scout treated only those areas affected by disease or pests.
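
The principle behind that saving is simple to state: scout first, then treat only the parts of the field the scout flags. Below is a minimal Python sketch of the idea, assuming the scout delivers a per-cell disease map; the names and grid representation are hypothetical, not the Crop Scout's actual interface.

from dataclasses import dataclass


@dataclass(frozen=True)
class Cell:
    row: int
    col: int
    infected: bool  # set by the scouting robot's disease detection


def spray_plan(cells):
    """Return only the cells that actually need treatment."""
    return [c for c in cells if c.infected]


def spray_saving(cells):
    """Fraction of spray saved versus blanket-spraying every cell."""
    return 1.0 - len(spray_plan(cells)) / len(cells) if cells else 0.0


# Toy field: a 10 x 10 grid with disease detected in just two cells.
field = [Cell(r, c, infected=(r, c) in {(3, 7), (8, 1)})
         for r in range(10) for c in range(10)]
print(f"Cells to spray: {len(spray_plan(field))}")                    # 2 of 100
print(f"Spray saved vs blanket spraying: {spray_saving(field):.0%}")  # 98%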

The new generation of agricultural robots has notched up some impressive trial results. Though much smaller than typical farm machinery, the machines can act co-operatively and carry out tasks such as spraying with a boom, while lasers are used for jobs ranging from harvesting to weeding. Unlike conventional tractors, which can cause a great deal of damage to soil environments, these lightweight robots move on wide, low-pressure tyres and cultivate only the minimum volume of soil needed to create the required seed environment.

Professor Blackmore believes that, in reality, agricultural robots are more likely to be put to work initially on high value crops, for example machines fitted with special sensors designed to ‘smell’ for ripeness. “Agriculture twenty years from now will be a mix of the traditional and the new, but the new robots will be intelligent enough to work with the natural environment to maintain both economic competitiveness and sustainable, high quality food production,” he says.

Surgical robots
From the farm to the rather specialised field of neurosurgery. The Hospital Nacional Guillermo Almenara in Lima, Peru is the first South American hospital to undertake a neurosurgical robotic procedure using Renishaw’s NeuroMate stereotactic system - an innovative robotic technology that enables neurosurgeons to deliver neuro-implantable devices with pinpoint accuracy.

NeuroMate provides a platform for a broad range of functional neurosurgical procedures. It has been used in thousands of electrode implantation procedures for deep brain stimulation and stereotactic electroencephalography, as well as stereotactic applications in neuro-endoscopy, radiosurgery, biopsy, and transcranial magnetic stimulation.

Professor Bertrand Devaux, a consultant neurosurgeon at Sainte-Anne Hospital in Paris, uses NeuroMate every day for procedures such as these and says he would not consider doing them manually. He believes it to be the easiest, fastest and most precise way to perform stereotactic procedures, and an essential part of any fully integrated neurosurgical operating theatre of the future.

Dr Luis Bromley, who heads up the Neurovascular and Tumour Surgical Department at the hospital, says NeuroMate provides a platform for planning and executing a broad range of stereotactic neurosurgical procedures. “We expect to enhance the safety and cost-effectiveness of these interventions, while improving patient outcomes through the accuracy and reliability of the system,” he says. Dr Camilo Contreras Campana, who performed the first NeuroMate procedure at the hospital, said it integrated smoothly with the surgical workflow. “The benefits of having a robotic assistant in the operating room were evident in our first experience,” he adds.

Human-robot interaction
In today's manufacturing plants, the division of labour between humans and robots is quite clear: large, automated robots are typically cordoned off in metal cages, manipulating heavy machinery and performing repetitive tasks, while humans work in less hazardous areas on jobs requiring finer detail.

But according to Julie Shah, the Boeing Career Development Assistant Professor of Aeronautics and Astronautics at MIT, the factory floor of the future may host humans and robots working side by side, each helping the other in common tasks. Professor Shah envisages robotic assistants performing tasks that would otherwise hinder a human's efficiency in the workplace, such as providing tools and materials when required. "It's really hard to make robots do careful refinishing tasks that people do really well. But providing robotic assistants to do the non-value-added work can actually increase productivity," she says.

A robot working in isolation simply has to follow a set of pre-programmed instructions to perform a repetitive task. Working with humans is a different matter: each mechanic at the same station of an aircraft assembly plant, for example, may prefer to work differently, so a robotic assistant would have to adapt to an individual's particular style to be of any practical use.

Professor Shah and her colleagues at MIT have devised an algorithm that enables a robot to quickly learn an individual's preference for a certain task, and adapt accordingly to help complete the task. The group is using the algorithm in simulations to train robots and humans to work together, and will present its findings at the Robotics: Science and Systems Conference in Sydney this month.

As a test case, Professor Shah's team looked at spar assembly, a process of building the main structural element of an aircraft's wing. In the typical manufacturing process, two pieces of the wing are aligned. Once in place, an operator applies sealant to predrilled holes, hammers bolts into the holes to secure the two pieces, then wipes away excess sealant. The entire process can be highly individualised. For example, one operator might choose to apply sealant to every hole before hammering in bolts, while another prefers to finish one hole before moving on to the next. The only constraint is the sealant, which dries within three minutes.
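
The one hard constraint in that workflow, the three-minute drying window, is easy to express in code. The following is a hypothetical Python sketch (not the MIT group's formulation) of how any operator's schedule, whatever its order, could be checked against it.

SEALANT_DRY_TIME = 180  # seconds; the sealant "dries within three minutes"


def violates_drying_window(schedule):
    """schedule is a list of (action, hole_id, start_time_seconds) tuples,
    where action is 'seal' or 'bolt'. Returns True if any hole is bolted
    after its sealant has already dried."""
    sealed_at = {}
    for action, hole, t in schedule:
        if action == "seal":
            sealed_at[hole] = t
        elif action == "bolt" and t - sealed_at[hole] > SEALANT_DRY_TIME:
            return True
    return False


# Two legitimate operator styles, expressed as time-stamped schedules.
one_hole_at_a_time = [("seal", 0, 0), ("bolt", 0, 30),
                      ("seal", 1, 60), ("bolt", 1, 90)]
seal_every_hole_first = [("seal", 0, 0), ("seal", 1, 30),
                         ("bolt", 0, 60), ("bolt", 1, 90)]

assert not violates_drying_window(one_hole_at_a_time)
assert not violates_drying_window(seal_every_hole_first)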

The researchers say robots such as ABB’s FRIDA may be programmed to help. A flexible robot with two arms capable of a wide range of motion, FRIDA can be manipulated either to fasten bolts or paint sealant into holes, depending on the operator’s preferences.

To enable such a robot to anticipate a human's actions, the group first developed a computational model in the form of a decision tree. Each branch along the tree represents a choice that an operator may make; for example, continue to hammer a bolt after applying sealant, or apply sealant to the next hole. If the robot places the bolt, how sure is it that the person will then hammer the bolt, or just wait for the robot to place the next bolt? "There are many branches," says Professor Shah.
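
That structure is easy to make concrete. Below is a toy Python sketch, not the published model: a tree whose branches are operator choices, with observed counts used to estimate how confident the robot can be about the person's next move.

from collections import defaultdict


class ChoiceNode:
    """One state of the task; branches are the operator's possible next actions."""

    def __init__(self):
        self.counts = defaultdict(int)  # operator action -> times observed
        self.children = {}              # operator action -> next ChoiceNode

    def observe(self, action):
        """Record one observed operator choice and descend to the next state."""
        self.counts[action] += 1
        return self.children.setdefault(action, ChoiceNode())

    def probability(self, action):
        """Estimated chance that the operator takes `action` from this state."""
        total = sum(self.counts.values())
        return self.counts[action] / total if total else 0.0


# Example: after the robot places a bolt, does this operator hammer it
# straight away, or wait for the next bolt to be placed?
root = ChoiceNode()
for observed_sequence in (["hammer"], ["hammer"], ["wait"]):
    node = root
    for action in observed_sequence:
        node = node.observe(action)

print(f"P(hammer next) = {root.probability('hammer'):.2f}")  # 0.67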

Using this model, the group performed human experiments, training a laboratory robot to observe an individual's chain of preferences. Once the robot learned a person's preferred order of tasks, it then quickly adapted, either applying sealant or fastening a bolt according to a person's particular style of work. In a real-life manufacturing setting, robots and humans would need to undergo an initial training session off the factory floor. Once the robot learns a person's work habits, its factory counterpart can be programmed to recognise that same person, and implement the appropriate task plan.
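
A sketch of that deployment step, again with hypothetical names only: preference models learned in the off-floor session are stored per worker, and the factory robot looks up the recognised worker's model to choose a complementary action.

# Which robot action complements each predicted operator action
# (the robot takes on the other half of the work).
COMPLEMENT = {
    "apply_sealant": "fasten_bolt",
    "hammer_bolt": "apply_sealant",
}

# Per-worker preference models produced during the off-floor training session:
# the operator action most likely to follow in each task state.
learned_plans = {
    "worker_a": {"after_sealing": "hammer_bolt"},    # finishes each hole in turn
    "worker_b": {"after_sealing": "apply_sealant"},  # seals every hole first
}


def robot_action(worker_id, state):
    """Choose the robot's next move for the recognised worker in this state."""
    predicted_human_action = learned_plans[worker_id][state]
    return COMPLEMENT[predicted_human_action]


print(robot_action("worker_b", "after_sealing"))  # -> fasten_bolt
print(robot_action("worker_a", "after_sealing"))  # -> apply_sealant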

And coming back to our previous surgical application, Professor Shah sees the future possibility of robotic assistants being trained to monitor lengthy procedures in an operating room and anticipate a surgeon's needs, handing over scalpels and gauze, depending on a doctor's preference. While such a scenario may be years away, robots and humans may eventually work side by side - with the right algorithms, that is.


