
AI microscope to help check tumour removal in just minutes

06 January 2021

A new microscope can quickly and inexpensively image large tissue sections, potentially during surgery, to help determine whether surgeons have removed all of the cancer cells.

(Photo by Brandon Martin/Rice University)

The microscope can rapidly image relatively thick pieces of tissue with cellular resolution and could allow surgeons to inspect the margins of tumours within minutes of their removal.

“The main goal of the surgery is to remove all the cancer cells, but the only way to know if you got everything is to look at the tumour under a microscope,” says Mary Jin, a PhD student in electrical and computer engineering at Rice University and co-lead author of the study in the Proceedings of the National Academy of Sciences.

“Today, you can only do that by first slicing the tissue into extremely thin sections and then imaging those sections separately,” Jin says. “This slicing process requires expensive equipment, and the subsequent imaging of multiple slices is time-consuming. Our project seeks to image large sections of tissue directly, without any slicing.”

The deep learning extended depth-of-field microscope, or DeepDOF, makes use of an artificial intelligence technique known as deep learning to train a computer algorithm to optimise both image collection and image post-processing.

With a typical microscope, there’s a trade-off between spatial resolution and depth-of-field, meaning only things that are the same distance from the lens can be brought clearly into focus. Features that are even a few millionths of a metre closer or further from the microscope’s objective will appear blurry. For this reason, microscope samples are typically thin and mounted between glass slides.
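
To give a rough, illustrative sense of that trade-off, the short calculation below uses standard textbook optics approximations (not figures from the DeepDOF paper): as the numerical aperture of the objective rises, lateral resolution improves, but depth of field falls off with the square of the aperture.

    # Illustrative only: classical estimates of lateral resolution and depth of
    # field for a conventional microscope. Formulas and values are textbook
    # approximations, not results from the DeepDOF study.

    wavelength_um = 0.55          # green light, in micrometres
    refractive_index = 1.0        # imaging in air

    for na in (0.1, 0.25, 0.5):   # numerical aperture of the objective
        lateral_resolution_um = 0.61 * wavelength_um / na             # Rayleigh criterion
        depth_of_field_um = refractive_index * wavelength_um / na**2  # diffraction-limited DOF
        print(f"NA={na}: resolution ~{lateral_resolution_um:.2f} um, "
              f"depth of field ~{depth_of_field_um:.2f} um")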

Slides are used to examine tumour margins today, and they aren’t easy to prepare. Removed tissue is usually sent to a hospital lab, where experts either freeze it or prepare it with chemicals before making razor-thin slices and mounting them on slides. The process is time-consuming and requires specialised equipment and highly trained staff. Few hospitals are able to examine slides for tumour margins during surgery, and hospitals in many parts of the world lack the necessary equipment and expertise.

“Current methods to prepare tissue for margin status evaluation during surgery have not changed significantly since first introduced over 100 years ago,” says co-author Ann Gillenwater, a Professor of head and neck surgery at the University of Texas MD Anderson. “By bringing the ability to accurately assess margin status to more treatment sites, the DeepDOF has [the] potential to improve outcomes for cancer patients treated with surgery.”

Jin’s PhD advisor, co-corresponding author Ashok Veeraraghavan, says DeepDOF uses a standard optical microscope in combination with an inexpensive optical phase mask costing less than $10 to image whole pieces of tissue and deliver depths-of-field as much as five times greater than today’s state-of-the-art microscopes.

“Traditionally, imaging equipment like cameras and microscopes are designed separately from image processing software and algorithms,” says co-lead author Yubo Tang, a postdoctoral research associate in the lab of co-corresponding author Rebecca Richards-Kortum. “DeepDOF is one of the first microscopes that’s designed with the post-processing algorithm in mind.”

The phase mask is placed over the microscope’s objective to modulate the light coming into the microscope.

“The modulation allows for better control of depth-dependent blur in the images captured by the microscope,” says Veeraraghavan, an associate professor in electrical and computer engineering. “That control helps ensure that the deblurring algorithms that are applied to the captured images are faithfully recovering high-frequency texture information over a much wider range of depths than conventional microscopes.”
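
One loose way to picture what the mask does is to model image capture as convolution with a depth-dependent point spread function (PSF) shaped by a pupil-plane phase pattern. The sketch below is a hypothetical illustration of that idea; the mask profile, scales and function names are invented and do not reproduce the authors’ implementation.

    import numpy as np

    # Hypothetical sketch of how a pupil-plane phase mask shapes the
    # depth-dependent point spread function (PSF). All values are illustrative.

    N = 64
    y, x = np.mgrid[-1:1:1j * N, -1:1:1j * N]
    r2 = x**2 + y**2
    pupil = (r2 <= 1.0).astype(float)          # circular aperture

    phase_mask = 3.0 * (x**3 + y**3)           # e.g. a cubic phase profile (assumed)

    def psf_at_depth(defocus_waves):
        """PSF for a given amount of defocus, with the phase mask in the pupil plane."""
        defocus_phase = 2 * np.pi * defocus_waves * r2   # quadratic defocus term
        field = pupil * np.exp(1j * (phase_mask + defocus_phase))
        psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
        return psf / psf.sum()

    # Without a mask the PSF changes sharply with defocus; with a well-chosen mask
    # it stays more uniform, so a single deblurring step can recover fine texture
    # across a wider range of depths.
    for d in (0.0, 1.0, 2.0):
        print(f"defocus {d} waves -> PSF peak {psf_at_depth(d).max():.4f}")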

DeepDOF does this without sacrificing spatial resolution, he says.

“In fact, both the phase mask pattern and the parameters of the deblurring algorithm are learned together using a deep neural network, which allows us to further improve performance,” Veeraraghavan says.
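
A minimal sketch of what such end-to-end optimisation might look like is shown below, assuming a PyTorch-style setup. The optical encoder is reduced to a single learnable blur kernel standing in for the phase-mask PSF, and the deblurring network is a tiny placeholder CNN; none of this is the published DeepDOF architecture, it only illustrates the idea of training the optics and the reconstruction algorithm together.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class JointOpticsAndDeblur(nn.Module):
        def __init__(self, kernel_size=9):
            super().__init__()
            # Learnable stand-in for the phase-mask-induced PSF.
            self.psf_logits = nn.Parameter(torch.zeros(1, 1, kernel_size, kernel_size))
            # Small deblurring network (placeholder architecture).
            self.deblur = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, sharp):
            # Softmax keeps the simulated PSF non-negative and normalised.
            psf = torch.softmax(self.psf_logits.flatten(), dim=0).view_as(self.psf_logits)
            blurred = F.conv2d(sharp, psf, padding=psf.shape[-1] // 2)  # simulated capture
            return self.deblur(blurred)

    model = JointOpticsAndDeblur()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    sharp = torch.rand(4, 1, 64, 64)          # stand-in for histology patches
    recon = model(sharp)
    loss = F.mse_loss(recon, sharp)           # reconstruction should match the sharp image
    loss.backward()                           # gradients flow into both the PSF and the CNN
    optimizer.step()

Because a single loss drives both the simulated optics and the reconstruction network, the "mask" settles on an encoding that the deblurring step can most easily invert, which is the essence of designing hardware and post-processing together.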

DeepDOF uses a deep learning neural network, a system that learns to make human-like decisions by studying large amounts of data. To train DeepDOF, researchers showed it 1,200 images from a database of histological slides. From these, DeepDOF learned both how to select the optimal phase mask for imaging a particular sample and how to eliminate blur from the images it captures, bringing cells from varying depths into focus.
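
In rough terms, training on a library of slide images could look like the loop below. It reuses the hypothetical JointOpticsAndDeblur module and optimiser sketched above; the 1,200-image count comes from the article, but the patch size, batch size and epoch count are invented for illustration.

    import torch
    import torch.nn.functional as F
    from torch.utils.data import DataLoader, TensorDataset

    # Hypothetical training loop over pre-extracted histology patches.
    patches = torch.rand(1200, 1, 64, 64)            # stand-in for slide image patches
    loader = DataLoader(TensorDataset(patches), batch_size=16, shuffle=True)

    for epoch in range(5):                           # epoch count is arbitrary here
        for (batch,) in loader:
            optimizer.zero_grad()
            recon = model(batch)                     # simulate capture, then deblur
            loss = F.mse_loss(recon, batch)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")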

“Once the selected phase mask is printed and integrated into the microscope, the system captures images in a single pass and the ML (machine learning) algorithm does the deblurring,” Veeraraghavan says.
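
At use time, the workflow described in the quote amounts to a single capture followed by one pass through the deblurring model. A hypothetical sketch, with the function name and image sizes invented and the network taken from the earlier illustration:

    import torch

    def assess_margins(captured_image: torch.Tensor, deblur_net: torch.nn.Module) -> torch.Tensor:
        """Return an all-in-focus image from a single phase-mask-encoded capture."""
        deblur_net.eval()
        with torch.no_grad():
            return deblur_net(captured_image.unsqueeze(0)).squeeze(0)

    # Example usage with the hypothetical network sketched earlier:
    # focused = assess_margins(torch.rand(1, 512, 512), model.deblur)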

Richards-Kortum, Professor of bioengineering and director of the Rice 360° Institute for Global Health, says DeepDOF can capture and process images in as little as two minutes.

“We’ve validated the technology and shown proof-of-principle,” Richards-Kortum says. “A clinical study is needed to find out whether DeepDOF can be used as proposed for margin assessment during surgery. We hope to begin clinical validation in the coming year.”

