Intel Neural Compute Stick 2 for Medical Imaging


This blog post was originally published at Intel's website. It is reprinted here with the permission of Intel.

Intel has been an integral part of hospital technology for almost 50 years. From desktop computers to MRI scanners, diagnostic monitors, and even portable X-Ray machines, we have been at the forefront of healthcare transformation. So it makes sense that we’ve been an early collaborator with major healthcare centers and medical device manufacturers to help make sense of the new healthcare AI revolution, including educating customers on the best practices and latest advances in the field. In that spirit, this blog walks through preparing a public medical imaging dataset, training a U-Net segmentation model, and deploying it for inference at the edge with the Intel Neural Compute Stick 2 (NCS 2).

The Dataset

The Medical Segmentation Decathlon is a 10-dataset challenge for medical image segmentation. It’s a well-curated, labeled dataset for building semantic segmentation models, such as the popular U-Net. Most importantly, the Creative Commons Attribution-ShareAlike* 4.0 license (CC BY-SA 4.0) makes it friendly to public and commercial entities alike.

We’ve chosen the Brain Tumor Segmentation (BraTS) subset of the Decathlon as a great real-world example for the budding healthcare AI practitioner to learn how to prepare data for, train, and run inference with semantic segmentation models. It’s relatively easy to train a 2D U-Net within a few hours of work to identify brain tumors from MRI scans. Our hope is that this training example will help data scientists in the same way the beloved MNIST dataset has long served as the “Hello World!” tutorial for deep learning.
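A 2D U-Net treats each slice of a 3D MRI volume as an independent training sample, which is what makes training tractable in a few hours on a CPU. A minimal sketch of that reshaping is below; the array shapes are illustrative placeholders, not the repo’s actual data layout:

```python
import numpy as np

def volume_to_slices(volume):
    """Split a 3D MRI volume of shape (H, W, D) into a batch of
    2D slices of shape (D, H, W, 1), ready for a 2D U-Net."""
    slices = np.transpose(volume, (2, 0, 1))  # move the slice axis first
    return slices[..., np.newaxis]            # add a channel dimension

# Example: a synthetic 144x144 volume with 32 axial slices
volume = np.random.rand(144, 144, 32)
batch = volume_to_slices(volume)
print(batch.shape)  # (32, 144, 144, 1)
```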

Training the Model

We’ve made our 2D U-Net model scripts available on GitHub. You should be able to train directly on your Intel CPU simply by downloading and installing Anaconda* and creating a Conda environment with the latest versions of TensorFlow* (1.12), Keras* (2.2.4), and NiBabel* (2.3.1) to run the training and inference. Anaconda will provide you with the Intel® Optimization for TensorFlow*. We also rely on helper functions from the Python* packages h5py, tqdm, and psutil. These three commands should install the packages for you to train the model:

$ conda create -c anaconda -n decathlon pip python=3.6 tensorflow keras tqdm h5py psutil
$ conda activate decathlon
$ pip install nibabel

Once you download the BraTS dataset from the Decathlon website, you simply untar the file, and then run the script:

$ bash run_brats_model.sh DIRECTORY_FOR_RAW_DATA

where DIRECTORY_FOR_RAW_DATA is the directory in which you untarred the BraTS datafiles.

The script takes the raw MRI data files, preprocesses them into NumPy arrays, saves them to a single HDF5 file for convenience, and then trains a 2D U-Net on the dataset.
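A common preprocessing step for MRI data is z-score intensity normalization, since raw voxel intensities are not comparable across scanners and subjects. A minimal sketch of that idea (not the repo’s exact preprocessing code):

```python
import numpy as np

def normalize_volume(volume):
    """Z-score normalize an MRI volume: zero mean, unit variance.
    The epsilon guards against division by zero for blank volumes."""
    mu, sigma = volume.mean(), volume.std()
    return (volume - mu) / max(sigma, 1e-8)

scan = np.random.rand(144, 144, 32) * 1000.0  # synthetic raw intensities
normed = normalize_volume(scan)
```

The normalized arrays can then be written to a single HDF5 file with h5py so training never has to re-parse the raw NIfTI files.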

Converting the Model to Use the Intel Distribution of OpenVINO Toolkit Inference Engine

Once you’ve trained the model, you can convert it using the Intel Distribution of OpenVINO Toolkit, which lets developers build high-performance computer vision applications from models trained in industry-standard frameworks and deploy them across Intel’s silicon architectures. An Intel CPU-optimized version of OpenCV is included as part of the toolkit’s installation and will allow you to develop C++ inference scripts using the Inference Engine (IE) plugin.
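Conversion typically means freezing the trained Keras model to a TensorFlow graph and then running the toolkit’s Model Optimizer on it. The command below is a sketch only; the script name, file paths, and input shape are placeholders that vary by toolkit version and model, and FP16 is the data type the NCS 2’s Myriad VPU expects:

```shell
python mo_tf.py \
    --input_model frozen_unet_model.pb \
    --input_shape [1,144,144,4] \
    --data_type FP16 \
    --output_dir openvino_models/
```

The Model Optimizer emits an intermediate representation (an .xml topology file plus a .bin weights file) that the Inference Engine loads at runtime.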


Figure 1: The Intel Neural Compute Stick 2.

By using the toolkit’s IE you’ll be able to deploy trained models using the Intel NCS 2, a great choice for inference at the edge due to its low power and bandwidth consumption, fast local processing, and high responsiveness at a reasonable price.

Running Inference in a Docker Container

We have included instructions on how to build a Docker container with the Intel Distribution of OpenVINO toolkit to enable edge inference on your U-Net model. You’ll need to download a copy of the OpenVINO installer before running the Docker build script. Be sure to choose the Linux* installer since that is what the Docker container will be using under the hood. Docker will allow you to run your model on a wide range of hardware and operating systems. This gives you a portable solution to run deep learning models on Intel’s wide variety of AI hardware.
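Once the image is built, the container needs access to the host’s USB bus so the Inference Engine can reach the NCS 2. A sketch of the build-and-run commands, where the image name is a placeholder and the exact device flags depend on your Docker and OpenVINO versions:

```shell
# Build the inference image (Dockerfile provided in the repo)
docker build -t openvino-unet .

# The NCS 2 is a USB device, so expose the host USB bus to the container
docker run --rm -it --privileged -v /dev:/dev openvino-unet
```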

Recommendations

We hope that you’ll try out our Decathlon scripts and build tutorials of your own based on the other subsets. Perhaps you can try to build a model that identifies pancreatic tumors? You could build a 3D U-Net to differentiate hepatic blood vessels, or even create a startup that leverages Intel hardware to provide a complete, low-cost, portable solution for healthcare AI applications at the edge.


Figure 2: Examples of the model’s prediction. A Dice of 1.0 indicates perfect overlap between the model’s prediction and the radiologist’s ground truth label.
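The Dice coefficient in the caption measures the overlap between two segmentation masks: 2·|A∩B| / (|A|+|B|), with 1.0 meaning perfect agreement. A minimal NumPy version (the smoothing term is a common convention to keep the score defined when both masks are empty):

```python
import numpy as np

def dice_coefficient(pred, truth, smooth=1.0):
    """Dice similarity between two binary masks: 1.0 = perfect overlap."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + smooth) / (pred.sum() + truth.sum() + smooth)

mask = np.array([[0, 1], [1, 1]])
print(dice_coefficient(mask, mask))  # 1.0
```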

References

  • Bakas S, Akbari H, Sotiras A, Bilello M, Rozycki M, Kirby JS, Freymann JB, Farahani K, Davatzikos C. “Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features”, Nature Scientific Data, 4:170117 (2017) DOI: 10.1038/sdata.2017.117
  • Bakas S, Akbari H, Sotiras A, Bilello M, Rozycki M, Kirby J, Freymann J, Farahani K, Davatzikos C. “Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-GBM collection”, The Cancer Imaging Archive, 2017. DOI: 10.7937/K9/TCIA.2017.KLXWJJ1Q
  • Bakas S, Akbari H, Sotiras A, Bilello M, Rozycki M, Kirby J, Freymann J, Farahani K, Davatzikos C. “Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG collection”, The Cancer Imaging Archive, 2017. DOI: 10.7937/K9/TCIA.2017.GJQ7R0EF
  • Menze BH, Jakab A, Bauer S, Kalpathy-Cramer J, Farahani K, Kirby J, Burren Y, Porz N, Slotboom J, Wiest R, Lanczi L, Gerstner E, Weber MA, Arbel T, Avants BB, Ayache N, Buendia P, Collins DL, Cordier N, Corso JJ, Criminisi A, Das T, Delingette H, Demiralp Ç, Durst CR, Dojat M, Doyle S, Festa J, Forbes F, Geremia E, Glocker B, Golland P, Guo X, Hamamci A, Iftekharuddin KM, Jena R, John NM, Konukoglu E, Lashkari D, Mariz JA, Meier R, Pereira S, Precup D, Price SJ, Raviv TR, Reza SM, Ryan M, Sarikaya D, Schwartz L, Shin HC, Shotton J, Silva CA, Sousa N, Subbanna NK, Szekely G, Taylor TJ, Thomas OM, Tustison NJ, Unal G, Vasseur F, Wintermark M, Ye DH, Zhao L, Zhao B, Zikic D, Prastawa M, Reyes M, Van Leemput K. “The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)”, IEEE Transactions on Medical Imaging 34(10), 1993-2024 (2015) DOI: 10.1109/TMI.2014.2377694
  • Ronneberger O, Fischer P, Brox T. “U-Net: Convolutional Networks for Biomedical Image Segmentation”, arXiv:1505.04597 [cs.CV] (2015)

Tony Reina
Data Scientist, Deep Learning Algorithms, Intel

Chaudhury Baishali
DCG DEA co-engineering, Intel
