Deep Learning Technique Could Reveal Transparent Features in Medical Images

By MedImaging International staff writers
Posted on 31 Dec 2018
Image: From an original transparent etching (far right), engineers produced a photograph in the dark (top left), then attempted to reconstruct the object using first a physics-based algorithm (top right), then a trained neural network (bottom left), before combining both the neural network with the physics-based algorithm to produce the clearest, most accurate reproduction (bottom right) of the original object (Photo courtesy of MIT).
Engineers at the Massachusetts Institute of Technology (Cambridge, MA, USA) have developed a deep learning technique that can reveal images of transparent features or objects that are nearly impossible to decipher in almost total darkness.

Deep neural networks have been widely applied in computer vision and image recognition. The MIT engineers had previously developed neural networks to reconstruct transparent objects from images taken with ample light, but they are the first to use deep neural networks in experiments to reveal invisible objects in images taken in the dark.

In their study, the researchers used a deep neural network to reconstruct transparent objects from images of those objects taken in almost pitch-black conditions. This machine-learning technique involves training a computer to associate certain inputs with specific outputs; in this case, dark, grainy images of transparent objects paired with the objects themselves.

The researchers trained a computer to recognize more than 10,000 transparent glass-like etchings, based on extremely grainy images of those patterns. The images were taken in very low lighting conditions, with about one photon per pixel — far less light than a camera would register in a dark, sealed room. They then showed the computer a new grainy image, not included in the training data, and found that it learned to reconstruct the transparent object that the darkness had obscured.
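
To make the training step concrete, the following is a minimal, hypothetical sketch in Python/PyTorch of the kind of procedure described above. It is not the MIT team's code or architecture: the "etchings" are random synthetic patterns, the photon-limited measurements are simulated with Poisson noise at roughly one photon per pixel, and TinyReconstructor is an arbitrary small convolutional network standing in for the deep neural network used in the study.

# Illustrative sketch only (assumed details, not the study's actual code): train a small
# convolutional network to map simulated photon-starved images back to the patterns
# that produced them, at roughly one photon per pixel.
import torch
import torch.nn as nn

def simulate_measurement(pattern, photons_per_pixel=1.0):
    # Photon counting is Poisson-distributed; at ~1 photon per pixel, shot noise dominates.
    counts = torch.poisson(pattern * photons_per_pixel)
    return counts.clamp(max=4.0) / 4.0  # crude normalization of the grainy measurement

class TinyReconstructor(nn.Module):
    # An arbitrary small conv net standing in for the deep neural network in the study.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyReconstructor()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for step in range(200):  # toy training loop on synthetic "etchings"
        patterns = (torch.rand(8, 1, 64, 64) > 0.5).float()
        grainy = simulate_measurement(patterns)
        loss = loss_fn(model(grainy), patterns)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(f"final training loss: {loss.item():.4f}")

In practice the network would be trained on measured camera frames of real transparent etchings rather than simulated ones; the sketch only illustrates the input-output pairing of grainy measurement and underlying pattern.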

The researchers repeated their experiments with an entirely new dataset of more than 10,000 images of more general and varied subjects, including people, places, and animals. After training, they fed the neural network a completely new image, taken in the dark, of a transparent etching of a scene with gondolas docked at a pier. They once again found that the physics-informed reconstruction, which combines the trained neural network with the physics-based algorithm, produced a more accurate image of the original than reproductions made without the physical law embedded. The results demonstrate that deep neural networks can be used to illuminate transparent features, such as biological tissues and cells, in images taken with very little light.
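
The two-stage structure described in the image caption, a physics-based algorithm followed by a learned refinement, can be sketched as follows. This is a hypothetical illustration rather than the authors' method: physics_based_estimate is a crude stand-in (a simple smoothing of the photon counts) for the physics-based reconstruction step, and it reuses the TinyReconstructor sketch above as the learned component.

# Hypothetical two-stage pipeline, assuming the general structure described in the
# caption: a physics-based estimate first, then refinement by the trained network.
import torch
import torch.nn.functional as F

def physics_based_estimate(measurement, kernel_size=5):
    # Stand-in "physics" step: average neighboring photon counts to suppress shot noise.
    kernel = torch.ones(1, 1, kernel_size, kernel_size) / kernel_size ** 2
    return F.conv2d(measurement, kernel, padding=kernel_size // 2)

def physics_informed_reconstruction(measurement, model):
    # Feed the physics-based estimate, rather than the raw grainy image, to the network.
    with torch.no_grad():
        return model(physics_based_estimate(measurement))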

“We have shown that deep learning can reveal invisible objects in the dark,” said the study’s lead author Alexandre Goy. “This result is of practical importance for medical imaging to lower the exposure of the patient to harmful radiation, and for astronomical imaging.”

Related Links:
Massachusetts Institute of Technology

