Intel and Philips Partner to Speed Up Imaging Analysis Using AI

By HospiMedica International staff writers

22 Aug 2018
Image: Using Intel Xeon Scalable processors and the OpenVINO toolkit, Intel and Philips tested two healthcare use cases for deep learning inference models (Photo courtesy of iStock).

Intel Corporation (Santa Clara, CA, USA) and Royal Philips (Amsterdam, Netherlands) have tested two healthcare use cases for deep learning inference models: one on X-rays of bones for bone-age-prediction modeling and the other on CT scans of lungs for lung segmentation. In these tests, conducted using Intel Xeon Scalable processors and the OpenVINO toolkit, the researchers achieved a 188-fold speed improvement for the bone-age-prediction model and a 38-fold improvement for the lung-segmentation model over the baseline measurements. These results show that healthcare organizations can run artificial intelligence (AI) workloads without expensive hardware investments.
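
The companies have not published the test code, but the sketch below gives a rough idea of what CPU-targeted inference with OpenVINO looks like, using the toolkit's current Python API (which postdates the 2018 tests). The model file name and the 512 x 512 single-channel input shape are illustrative assumptions, not details of the Philips models.

```python
# Minimal sketch of OpenVINO CPU inference; not the Intel/Philips test code.
# Assumes a converted IR model (lung_segmentation.xml/.bin) and a 1x1x512x512 input.
import numpy as np
from openvino.runtime import Core

core = Core()

# Read the IR model produced by OpenVINO's model conversion step.
model = core.read_model("lung_segmentation.xml")

# Compile the model for execution on the host CPU (e.g., an Intel Xeon Scalable system).
compiled = core.compile_model(model, device_name="CPU")

# Dummy single-slice batch standing in for a preprocessed CT image.
ct_slice = np.random.rand(1, 1, 512, 512).astype(np.float32)

# Run synchronous inference and fetch the first output tensor.
segmentation = compiled([ct_slice])[compiled.output(0)]
print(segmentation.shape)
```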

Medical image files are growing in size as image resolution improves, with most images now 1GB or larger. More healthcare organizations are using deep learning inference to review patient images more quickly and accurately. AI techniques such as object detection and segmentation can help radiologists identify issues faster and more accurately, which can translate to better prioritization of cases, better outcomes for more patients and reduced costs for hospitals. Deep learning inference applications typically process workloads in small batches or in a streaming manner, meaning they do not involve large batch sizes. Until recently, graphics processing units (GPUs) were the predominant hardware solution for accelerating deep learning. By design, GPUs work well with images, but they also have inherent memory constraints that data scientists have had to work around when building some models.
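
To make the small-batch point concrete, the hypothetical loop below processes images one at a time, the way a streaming inference service would, and reports per-image latency rather than large-batch throughput. It reuses the illustrative model file from the earlier sketch; the timings it prints are placeholders, not the published results.

```python
# Hypothetical streaming-style loop: images arrive one at a time and are
# inferred with batch size 1, which is typical for inference (vs. training).
import time
import numpy as np
from openvino.runtime import Core

core = Core()
compiled = core.compile_model(core.read_model("lung_segmentation.xml"), "CPU")
output = compiled.output(0)

latencies = []
for _ in range(100):                              # stand-in for a stream of CT slices
    ct_slice = np.random.rand(1, 1, 512, 512).astype(np.float32)
    start = time.perf_counter()
    compiled([ct_slice])[output]                  # batch of one, as in a live service
    latencies.append(time.perf_counter() - start)

median_ms = 1000 * sorted(latencies)[len(latencies) // 2]
print(f"median per-image latency: {median_ms:.1f} ms")
```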

Central processing units (CPUs), such as Intel Xeon Scalable processors, do not have such memory constraints and can accelerate complex, hybrid workloads, including the larger, memory-intensive models typically found in medical imaging. For a large subset of AI workloads, CPUs can meet data scientists' needs better than GPU-based systems. Running healthcare deep learning workloads on CPU-based devices directly benefits companies such as Philips, allowing them to offer AI-based services without driving up costs for their end customers.

“Intel Xeon Scalable processors appear to be the right solution for this type of AI workload. Our customers can use their existing hardware to its maximum potential, while still aiming to achieve quality output resolution at exceptional speeds,” said Vijayananda J., chief architect and fellow, Data Science and AI at Philips HealthSuite Insights.