What’s New: At Hot Chips 2019, Intel revealed new details of upcoming high-performance artificial intelligence (AI) accelerators: Intel® Nervana™ neural network processors, with the NNP-T for training and the NNP-I for inference. Intel engineers also presented technical details on hybrid chip packaging technology, Intel® Optane™ DC persistent memory and chiplet technology for optical I/O.
“To get to a future state of ‘AI everywhere,’ we’ll need to address the crush of data being generated and ensure enterprises are empowered to make efficient use of their data, processing it where it’s collected when it makes sense and making smarter use of their upstream resources. Data centers and the cloud need to have access to performant and scalable general purpose computing and specialized acceleration for complex AI applications. In this future vision of AI everywhere, a holistic approach is needed—from hardware to software to applications.”
–Naveen Rao, Intel vice president and general manager, Artificial Intelligence Products Group
Why It’s Important: Turning data into information and then into knowledge requires hardware architectures and complementary packaging, memory, storage and interconnect technologies that can evolve and support emerging, increasingly complex use cases and AI techniques. Dedicated accelerators like the Intel Nervana NNPs are built from the ground up with a focus on AI, providing customers the right intelligence at the right time.
What Intel Presented at Hot Chips 2019:
Intel Nervana NNP-T: Built from the ground up to train deep learning models at scale: The Intel Nervana NNP-T (Neural Network Processor) pushes the boundaries of deep learning training. It is built to prioritize two key real-world considerations: training a network as fast as possible and doing so within a given power budget. This deep learning training processor is designed with flexibility in mind, striking a balance among computing, communication and memory. While Intel® Xeon® Scalable processors bring AI-specific instructions and provide a great foundation for AI, the NNP-T is architected from scratch, building in the features needed to handle large models without the overhead required to support legacy technology. To account for future deep learning needs, the Intel Nervana NNP-T is built with flexibility and programmability so it can be tailored to accelerate a wide variety of workloads, both those in use today and new ones that will emerge. View the presentation for additional technical detail on the Intel Nervana NNP-T’s (code-named Spring Crest) capabilities and architecture.
Intel Nervana NNP-I: High-performing deep learning inference for major data center workloads: Intel Nervana NNP-I is purpose-built for inference and designed to accelerate deep learning deployment at scale. It introduces specialized leading-edge deep learning acceleration while leveraging Intel’s 10nm process technology with Ice Lake cores to offer industry-leading performance per watt across all major data center workloads. Additionally, the Intel Nervana NNP-I offers a high degree of programmability without compromising performance or power efficiency. As AI becomes pervasive across every workload, a dedicated inference accelerator that is easy to program, delivers short latencies, enables fast code porting and supports all major deep learning frameworks allows companies to harness the full potential of their data as actionable insights. View the presentation for additional technical detail on the Intel Nervana NNP-I’s (code-named Spring Hill) design and architecture.
Lakefield: Hybrid cores in a three-dimensional package: Lakefield is the industry’s first product to combine 3D stacking with an IA hybrid computing architecture for a new class of mobile devices. Leveraging Intel’s latest 10nm process and Foveros advanced packaging technology, Lakefield achieves a dramatic reduction in standby power, core area and package height over previous generations of technology. With best-in-class computing performance and ultra-low thermal design power, new thin form-factor devices, 2-in-1s and dual-display devices can operate always-on and always-connected at very low standby power. View the presentation for additional technical detail on Lakefield’s architecture and power attributes.
TeraPHY: An in-package optical I/O chiplet for high-bandwidth, low-power communication: Intel and Ayar Labs demonstrated the industry’s first integration of monolithic in-package optics (MIPO) with a high-performance system-on-chip (SoC). The Ayar Labs TeraPHY* optical I/O chiplet is co-packaged with the Intel Stratix 10 FPGA using Intel Embedded Multi-die Interconnect Bridge (EMIB) technology, offering high-bandwidth, low-power data communication from the chip package with deterministic latency for distances of up to 2 km. This collaboration will enable new approaches to architecting computing systems for the next phase of Moore’s Law by removing the traditional performance, power and cost bottlenecks in moving data. View the presentation for additional technical detail and design decisions on creating processors with optical I/O.
Intel Optane DC persistent memory: Architecture and performance: Intel Optane DC persistent memory, now shipping in volume, is the first product in an entirely new tier of the memory/storage hierarchy called persistent memory. Based on Intel® 3D XPoint™ technology and delivered in a memory module form factor, it provides large capacity at near-memory speeds, with latencies measured in nanoseconds, while natively delivering the persistence of storage. Details of the two operational modes (Memory Mode and App Direct Mode), as well as performance examples, show how this new tier can support a complete re-architecting of the data supply subsystem to enable faster and new workloads. View the presentation for additional architectural details, memory controller design, power-fail implementation and performance results for Intel Optane DC persistent memory.
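For readers who want a concrete sense of how App Direct Mode differs from ordinary storage I/O, the minimal sketch below uses the open-source Persistent Memory Development Kit (PMDK) libpmem library to map a file on a DAX-mounted filesystem directly into the application’s address space and persist a store with cache flushes instead of block I/O. The mount point /mnt/pmem, the file name and the region size are illustrative assumptions, not details from the Hot Chips presentation.

/*
 * Minimal App Direct sketch using PMDK's libpmem.
 * Assumes a DAX-mounted filesystem at /mnt/pmem (hypothetical path).
 * Build with: cc appdirect_sketch.c -lpmem
 */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define REGION_SIZE (64 * 1024 * 1024)  /* 64 MiB example region */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on the DAX filesystem directly into the address space. */
    char *addr = pmem_map_file("/mnt/pmem/example", REGION_SIZE,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) {
        perror("pmem_map_file");
        return 1;
    }

    /* Write with ordinary CPU stores -- no block I/O in the data path. */
    strcpy(addr, "persisted across power cycles");

    /* Flush CPU caches so the write reaches the persistence domain. */
    if (is_pmem)
        pmem_persist(addr, mapped_len);
    else
        pmem_msync(addr, mapped_len);   /* fallback for non-pmem mappings */

    pmem_unmap(addr, mapped_len);
    return 0;
}

By contrast, Memory Mode requires no application changes: the module is presented to the operating system as volatile system memory, with DRAM acting as a cache in front of it.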
More context: “Accelerating with Purpose” for AI Everywhere (Naveen Rao Blog) | Artificial Intelligence at Intel
About Intel
Intel (NASDAQ: INTC), a leader in the semiconductor industry, is shaping the data-centric future with computing and communications technology that is the foundation of the world’s innovations. The company’s engineering expertise is helping address the world’s greatest challenges as well as helping secure, power and connect billions of devices and the infrastructure of the smart, connected world – from the cloud to the network to the edge and everything in between. Find more information about Intel at newsroom.intel.com and intel.com.