'Polarised 3D' imaging provides 1000-fold resolution improvement

02 December 2015

MIT researchers have shown that by exploiting the polarization of light they can increase the resolution of conventional 3D imaging devices by as much as 1,000 times.

Figure: Combining information from the Kinect depth frame (a) with polarized photographs, the researchers reconstructed the 3D surface shown in (c). Polarization cues allow coarse depth sensors like the Kinect to achieve laser-scan quality (b).

Polarization affects the way light bounces off physical objects. If light strikes an object squarely, much of it is absorbed, but whatever reflects back has the same mix of polarizations as the incoming light. At wider angles of reflection, however, light within a certain range of polarizations is more likely to be reflected. The polarization of reflected light therefore carries information about the geometry of the objects it has struck.
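To make that link concrete, here is the standard shape-from-polarization intensity model (the notation is ours; the article does not spell the model out). Viewed through a linear polarizer at angle \(\varphi_{\mathrm{pol}}\), partially polarized reflected light traces a sinusoid whose phase is set by the azimuth \(\phi\) of the surface normal:

```latex
I(\varphi_{\mathrm{pol}}) = \frac{I_{\max} + I_{\min}}{2}
  + \frac{I_{\max} - I_{\min}}{2}\,\cos\!\bigl(2(\varphi_{\mathrm{pol}} - \phi)\bigr)
```

The amplitude matters too: the degree of polarization, \((I_{\max} - I_{\min})/(I_{\max} + I_{\min})\), varies with the zenith angle of the normal, so phase and amplitude together constrain the surface orientation.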

This relationship has been known for centuries, but it has been hard to exploit, because of a fundamental ambiguity in polarized light: light with a particular polarization, reflecting off a surface with a particular orientation and passing through a polarizing filter, is indistinguishable from light with the opposite polarization reflecting off a surface with the opposite orientation.
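In the sinusoidal model above, the ambiguity is visible directly: the cosine depends on the azimuth only through \(2\phi\), so azimuths of \(\phi\) and \(\phi + \pi\) yield identical intensities at every polarizer angle:

```latex
\cos\!\bigl(2(\varphi_{\mathrm{pol}} - \phi - \pi)\bigr)
  = \cos\!\bigl(2(\varphi_{\mathrm{pol}} - \phi) - 2\pi\bigr)
  = \cos\!\bigl(2(\varphi_{\mathrm{pol}} - \phi)\bigr)
```

Every pixel therefore produces two candidate surface orientations whose azimuths differ by 180 degrees.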

This means that for any surface in a visual scene, measurements based on polarized light offer two equally plausible hypotheses about its orientation. Canvassing every combination of the two candidate orientations of every surface, in order to identify the one that makes the most sense geometrically, is a prohibitively time-consuming computation, as the rough count below illustrates.
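A back-of-the-envelope count (ours, not a figure from the paper) shows why: with two candidate orientations per surface patch, the joint hypotheses multiply as

```latex
N \text{ patches} \;\Rightarrow\; 2^{N} \text{ assignments}, \qquad
2^{64} \approx 1.8 \times 10^{19}
```

and a real scene contains far more than 64 patches.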

To resolve this ambiguity, MIT's Media Lab researchers use coarse depth estimates provided by some other method, such as the time a light signal takes to reflect off an object and return to its source. Even with this added information, calculating surface orientation from measurements of polarized light is complicated, but it can be done in real time by a graphics processing unit (GPU), the type of special-purpose graphics chip found in most video game consoles.
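A minimal sketch of that disambiguation step, assuming the per-pixel azimuths have already been extracted from the polarization images and a coarse normal field has been derived from the depth map. The function and array names are ours, and the real pipeline solves a joint surface-reconstruction problem rather than making an independent choice at each pixel:

```python
import numpy as np

def choose_azimuth(pol_azimuth, depth_azimuth):
    """Resolve the two-fold polarization ambiguity per pixel.

    pol_azimuth:   (H, W) surface azimuth from polarization, in radians
                   (known only modulo pi, so pol_azimuth + pi is an equally
                   plausible hypothesis -- our assumption of the standard
                   shape-from-polarization setup)
    depth_azimuth: (H, W) coarse azimuth derived from the depth map
    """
    candidates = np.stack([pol_azimuth, pol_azimuth + np.pi])
    # Signed angular difference to the coarse estimate, wrapped to (-pi, pi]
    diff = np.angle(np.exp(1j * (candidates - depth_azimuth)))
    pick = np.abs(diff).argmin(axis=0)  # keep the closer hypothesis
    return np.take_along_axis(candidates, pick[None], axis=0)[0]
```

Because each pixel is handled independently, the operation parallelises trivially, which is consistent with the article's point that a GPU can run the computation in real time.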

The researchers' experimental setup consisted of a Microsoft Kinect, which gauges depth using reflection time, with an ordinary photographic polarizing filter placed in front of its camera. In each experiment, the researchers took three photos of an object, rotating the filter each time, and their algorithms compared the light intensities of the resulting images.
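With three exposures, the polarization sinusoid is fully determined. One common recipe, sketched here assuming polarizer angles of 0, 45 and 90 degrees (the article does not say which angles were used), recovers the linear Stokes parameters and, from them, the phase and degree of polarization:

```python
import numpy as np

def polarization_from_three_images(i0, i45, i90):
    """Per-pixel degree and angle of linear polarization from float images
    taken through a polarizer at 0, 45 and 90 degrees (angles assumed)."""
    s0 = i0 + i90                # total intensity
    s1 = i0 - i90                # 0-vs-90 degree preference
    s2 = 2.0 * i45 - s0          # 45-vs-135 degree preference
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # degree of polarization
    aolp = 0.5 * np.arctan2(s2, s1)                       # angle (phase), modulo pi
    return dolp, aolp
```

The angle `aolp` is defined only modulo 180 degrees, which is exactly the two-fold ambiguity that the coarse depth map resolves, and `dolp` constrains how steeply the surface is inclined.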

On its own, at a distance of several metres, the Kinect can resolve physical features as small as a centimetre or so across. But with the addition of the polarization information, the researchers' system could resolve features in the range of tens of micrometres, or one-thousandth the size.
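The arithmetic behind that figure (our calculation, consistent with the headline claim):

```latex
1\ \text{cm} = 10{,}000\ \mu\text{m}, \qquad
\frac{10{,}000\ \mu\text{m}}{1000} = 10\ \mu\text{m}
```

which is why a thousand-fold improvement on centimetre-scale features lands in the tens-of-micrometres range.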

For comparison, the researchers also imaged several of their test objects with a high-precision laser scanner, which requires that the object be inserted into the scanner bed. Polarized 3D still offered the higher resolution.

In a new paper to be presented in December at the International Conference on Computer Vision (ICCV), the researchers propose that polarization systems could aid the development of self-driving cars. Today's experimental self-driving cars are highly reliable under normal illumination conditions, but their vision algorithms become unreliable in rain, snow, or fog, because water particles in the air scatter light in unpredictable ways and make the scene much harder to interpret.

The MIT researchers show that in some very simple test cases — which have nonetheless bedevilled conventional computer vision algorithms — their system can exploit information contained in interfering waves of light to handle scattering.

