2nd Generation Intel Xeon Scalable Processors Bring Powerful Deep Learning Capabilities to Transform Your Business with A.I.

This blog post was originally published at Intel's website. It is reprinted here with the permission of Intel.

From enabling security throughout a city, to the rapid identification and classification of complex medical imaging data, Intel is accelerating the value of IoT.

As billions of connected things generate massive volumes of data at ever-increasing rates, the Internet of Things (IoT) and AI are converging to create a pivotal transition — with more data being processed and analyzed at the edge than ever before. In fact, IDC predicts that in 2020, just one year from now, 45 percent of all data created by IoT devices will be stored, processed, analyzed and acted upon close to or at the edge.1

Staying competitive amid this massive data explosion, and embracing the opportunities created by the intersection of AI and IoT, will demand more performance and intelligence at the edge. With high-performance compute and AI technologies, companies can capture, process and analyze data at the edge for near-real-time business insights.

The new 2nd Generation Intel Xeon Scalable processors with built-in Intel Deep Learning Boost (DL Boost) bring outstanding performance and AI capabilities to the next era of data-driven IoT edge platforms. These processors significantly accelerate performance — by up to 14x — for deep learning inference workloads such as speech recognition, object detection and image classification.2 Combined with the Intel Distribution of OpenVINO toolkit, they let developers streamline deep learning deployment and optimize the performance of inference applications at the edge.
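
To make that workflow concrete, here is a minimal sketch — not from the original post — of CPU inference with the 2019-era OpenVINO Inference Engine Python API. The model files and input frame are hypothetical placeholders, and an INT8 IR is assumed so that DL Boost (VNNI) instructions are exercised on a 2nd Generation Xeon Scalable CPU:

```python
# Minimal sketch: CPU inference with the OpenVINO Inference Engine
# Python API (2019-era). Model and image paths are placeholders; an
# INT8 IR lets DL Boost (VNNI) accelerate the convolutions on CPU.
import cv2
import numpy as np
from openvino.inference_engine import IECore, IENetwork

MODEL_XML = "resnet50_int8.xml"   # IR produced by the Model Optimizer (placeholder)
MODEL_BIN = "resnet50_int8.bin"

ie = IECore()
net = IENetwork(model=MODEL_XML, weights=MODEL_BIN)

# Resolve blob names and the expected input layout (N, C, H, W).
input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))
n, c, h, w = net.inputs[input_blob].shape

# Compile the network for the CPU plugin.
exec_net = ie.load_network(network=net, device_name="CPU")

# Preprocess one frame: resize, HWC -> CHW, add a batch dimension.
frame = cv2.imread("frame.jpg")                     # placeholder input
blob = cv2.resize(frame, (w, h)).transpose((2, 0, 1))
blob = np.expand_dims(blob, axis=0)

result = exec_net.infer(inputs={input_blob: blob})
print("Top class:", int(np.argmax(result[output_blob])))
```

In practice, the IR files would first be generated from a trained TensorFlow, Caffe or ONNX model with the toolkit's Model Optimizer before being deployed to the edge device.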

It’s exciting to see how these new technologies can meet the very specific requirements of diverse markets. Smart cities, industrial manufacturers, healthcare providers, schools, banks and retailers can all benefit in ways that drive growth and insight. With capabilities designed for AI-based IoT workloads, 2nd Generation Intel Xeon Scalable processors are enabling security and safety throughout a city; rapid identification and classification of complex medical imaging data; inventory control and consumer heat mapping in retail; industrial machine vision in factories; personalized shopping experiences and frictionless checkout; and accelerated defect detection for manufacturers. As I look forward, the potential for integrating AI and vision at the network edge for IoT is only beginning to be realized.

A great example of this is the work being done by Siemens Healthineers to speed cardiac MRI imaging for radiologists. One-third of all deaths — 34 per minute, 18 million each year — are due to cardiovascular disease.3 Meanwhile, the workload per radiologist continues to increase dramatically: 100 studies per day and workdays of more than 12 hours are not unusual.4 By optimizing its cardiac MRI segmentation model for 2nd Generation Intel Xeon Scalable processors, Siemens Healthineers can meet the growing needs of data-intensive AI applications in the health and life sciences industry.

It takes powerful processing and intelligence at the edge to enable new opportunities and competitive differentiators, whether through workload consolidation or deep learning inference. With a single integrated CPU, and with AI workloads optimized by the Intel Distribution of OpenVINO toolkit, you get built-in deep learning inference capabilities, faster deployment and lower total cost of ownership (TCO).
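
As a rough illustration of what workload consolidation on one CPU can look like — again a sketch under the same 2019-era API assumptions as above, not Intel's reference code — the Inference Engine's asynchronous request API can keep a single socket busy with several concurrent streams:

```python
# Sketch: consolidating several inference streams on one CPU with
# asynchronous requests (2019-era OpenVINO API; names illustrative).
import numpy as np
from openvino.inference_engine import IECore, IENetwork

ie = IECore()
net = IENetwork(model="resnet50_int8.xml", weights="resnet50_int8.bin")  # placeholders
input_blob = next(iter(net.inputs))
output_blob = next(iter(net.outputs))
shape = net.inputs[input_blob].shape              # (N, C, H, W)

# Ask for four infer requests so independent streams can overlap.
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=4)

# Stand-in frames; in practice these would come from four cameras.
batches = [np.random.rand(*shape).astype(np.float32) for _ in range(4)]

for i, batch in enumerate(batches):               # launch without blocking
    exec_net.start_async(request_id=i, inputs={input_blob: batch})

for i in range(4):                                # harvest results as each completes
    exec_net.requests[i].wait(-1)
    scores = exec_net.requests[i].outputs[output_blob]
    print(f"stream {i}: top class {int(np.argmax(scores))}")
```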

Explore the possibility

intel.com/xeon

intel.com/iot

Learn more

Watch the video: www.intel.com/iot

Read the product brief: https://www.intel.com/content/www/us/en/design/products-and-solutions/processors-and-chipsets/cascade-lake/2nd-gen-intel-xeon-scalable-processors.html

See the Siemens Healthineers brief: https://www.intel.ai/white-papers/Siemens-Healthineers-AI-Cardiac-Imaging

______________________________________________________________________________

1. https://innovationatwork.ieee.org/how-the-edge-computing-layer-helps-with-latency/.
2. 1x inference throughput improvement on Intel Xeon Platinum 8180 processor (July 2017) baseline: Tested by Intel as of July 11, 2017: Platform: 2S Intel Xeon Platinum 8180 CPU @ 2.50 GHz (28 cores), HT disabled, turbo disabled, scaling governor set to "performance" via intel_pstate driver, 384 GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel SSD DC S3700 Series (800 GB, 2.5in SATA 6 Gb/s, 25 nm, MLC). Performance measured with: environment variables: KMP_AFFINITY='granularity=fine,compact', OMP_NUM_THREADS=56, CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with the "caffe time --forward_only" command, training measured with the "caffe time" command. For "ConvNet" topologies, a synthetic data set was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (ResNet-50) and https://github.com/soumith/convnet-benchmarks/tree/master/caffe/imagenet_winners (ConvNet benchmarks; files were updated to use the newer Caffe prototxt format but are functionally equivalent). Intel C++ compiler ver. 17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with "numactl -l".
14x inference throughput improvement on Intel Xeon Platinum 8280 processor with Intel DL Boost: Tested by Intel as of 2/20/2019. 2-socket Intel Xeon Platinum 8280 processor, 28 cores, HT on, turbo on, total memory 384 GB (12 slots/32 GB/2933 MHz), BIOS: SE5C620.86B.0D.01.0271.120720180605 (ucode: 0x200004d), Ubuntu 18.04.1 LTS, kernel 4.15.0-45-generic, SSD: 1x sda INTEL SSDSC2BA80 SSD 745.2 GB, nvme1n1 INTEL SSDPE2KX040T7 SSD 3.7 TB. Deep learning framework: Intel Optimization for Caffe version 1.1.3 (commit hash: 7010334f159da247db3fe3a9d96a3116ca06b09a), ICC version 18.0.1, MKL-DNN version v0.17 (commit hash: 830a10059a018cd2634d94195140cf2d8790a75a), model: https://github.com/intel/caffe/blob/master/models/intel_optimized_models/int8/resnet50_int8_full_conv.prototxt, BS=64, synthetic data, 4 instances/2 sockets, datatype: INT8. Versus tested by Intel as of July 11, 2017: 2S Intel Xeon Platinum 8180 CPU @ 2.50 GHz (28 cores), HT disabled, turbo disabled, scaling governor set to "performance" via intel_pstate driver, 384 GB DDR4-2666 ECC RAM. CentOS Linux release 7.3.1611 (Core), Linux kernel 3.10.0-514.10.2.el7.x86_64. SSD: Intel SSD DC S3700 Series (800 GB, 2.5in SATA 6 Gb/s, 25 nm, MLC). Performance measured with: environment variables: KMP_AFFINITY='granularity=fine,compact', OMP_NUM_THREADS=56, CPU frequency set with cpupower frequency-set -d 2.5G -u 3.8G -g performance. Caffe: (http://github.com/intel/caffe/), revision f96b759f71b2281835f690af267158b82b150b5c. Inference measured with the "caffe time --forward_only" command, training measured with the "caffe time" command. For "ConvNet" topologies, a synthetic data set was used. For other topologies, data was stored on local storage and cached in memory before training. Topology specs from https://github.com/intel/caffe/tree/master/models/intel_optimized_models (ResNet-50). Intel C++ compiler ver. 17.0.2 20170213, Intel MKL small libraries version 2018.0.20170425. Caffe run with "numactl -l".
3. Journal of the American College of Cardiology, 2017; Dr. Thomas Friese, Siemens Healthcare GmbH.
4. The Royal College of Radiologists, 2017.

Performance results are based on testing as of the dates shown in the configurations and may not reflect all publicly available security updates. See configuration disclosure for details. No product can be absolutely secure.

Software and workloads used in performance tests may have been optimized for performance only on Intel® microprocessors. Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software, operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products. For more complete information visit intel.com/benchmarks.

Intel® technologies’ features and benefits depend on system configuration and may require enabled hardware, software, or service activation. Performance varies depending on system configuration. No computer system can be absolutely secure. Check with your system manufacturer or retailer or learn more at intel.com/iot.

Cost reduction scenarios described are intended as examples of how a given Intel-based product, in the specified circumstances and configurations, may affect future costs and provide cost savings. Circumstances will vary. Intel does not guarantee any costs or cost reduction.

Jonathan Ballon
Vice President, Internet of Things Group (IOTG)
General Manager, Markets and Channels Acceleration Division
Intel
