
Machines can learn to respond to situations like human beings

29 April 2016

Researchers from KU Leuven, Belgium, have shown that machines can learn to respond to unfamiliar objects like human beings would.

Image courtesy of KU Leuven

Imagine heading home in your self-driving car. The rain is falling in torrents and visibility is poor. All of a sudden, a blurred shape appears on the road. What would you want the car to do? Should it hit the brakes, at the risk of causing the cars behind you to crash? Or should it just keep driving?

Human beings in a similar situation will usually be able to tell the difference between, say, a distracted cyclist who’s suddenly swerving and roadside waste swept up by the wind. Our response is mostly based on intuition. We may not be sure what the blurred shape actually is, but we know that it looks like a human being rather than a paper bag.

But what about the self-driving car? Can a machine that is trained to recognise images tell us what the unfamiliar shape looks like? According to KU Leuven researchers Jonas Kubilius and Hans Op de Beeck, it can.    

“Current state-of-the-art image-recognition technologies are taught to recognise a fixed set of objects,” Jonas Kubilius explains. “They recognise images using deep neural networks: complex algorithms that perform computations somewhat similarly to the neurons in the human brain.”
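To make concrete what recognising “a fixed set of objects” means in practice, here is a minimal sketch, not the researchers’ own setup, that classifies an image with a pretrained ResNet-50 from torchvision. The network can only answer with one of its 1,000 ImageNet labels; the image file name is hypothetical.

```python
# Minimal sketch (illustrative only): a pretrained network choosing among a
# fixed set of categories. Uses torchvision's ResNet-50 and its ImageNet labels.
import torch
from torchvision import models
from torchvision.io import read_image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("road_scene.jpg")            # hypothetical input image
batch = preprocess(img).unsqueeze(0)          # shape: (1, 3, H, W)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]    # scores over the fixed label set

top = probs.argmax().item()
print(weights.meta["categories"][top], f"{probs[top].item():.2f}")
```

Whatever the network is shown, it must pick from that predefined list of categories; it has no built-in way to say what an unfamiliar shape merely resembles.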

“We found that deep neural networks are not only good at making objective decisions (‘this is a car’), but also develop human-level sensitivities to object shape (‘this looks like …’). In other words, machines can learn to tell us what a new shape – say, a letter from a novel alphabet or a blurred object on the road – reminds them of. This means we’re on the right track in developing machines with a visual system and vocabulary as flexible and versatile as ours.”
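The “this looks like …” idea can be sketched, under stated assumptions, by comparing a novel image to known reference images in the network’s feature space instead of forcing a hard label. The sketch below is an illustration of that general approach, not the study’s method; the file names and reference categories are hypothetical.

```python
# Minimal sketch (illustrative only): judging what a novel shape "looks like"
# by comparing penultimate-layer features of a pretrained network.
import torch
from torchvision import models
from torchvision.io import read_image

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
model.fc = torch.nn.Identity()                # expose 2048-d features instead of class scores

def features(path: str) -> torch.Tensor:
    batch = preprocess(read_image(path)).unsqueeze(0)
    with torch.no_grad():
        return model(batch)[0]

novel = features("blurred_shape.jpg")          # hypothetical unfamiliar object
references = {
    "pedestrian": features("pedestrian.jpg"),  # hypothetical reference images
    "paper_bag": features("paper_bag.jpg"),
}

# Higher cosine similarity means the novel shape "looks more like" that reference.
for name, ref in references.items():
    sim = torch.nn.functional.cosine_similarity(novel, ref, dim=0)
    print(f"{name}: {sim.item():.3f}")
```

Comparing representations in this way yields a graded judgement of resemblance, which is closer in spirit to the human-like sensitivity to shape the researchers describe than a single forced label.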

Does that mean we may soon be able to safely hand over the wheel? “Not quite. We’re not there just yet. And even if machines are at some point equipped with a visual system as powerful as ours, self-driving cars would still make occasional mistakes – although, unlike human drivers, they wouldn’t be distracted because they’re tired or busy texting. However, even in those rare instances when self-driving cars do err, their decisions would be at least as reasonable as ours.”

