Nvidia Is Moving Faster Than the Competition in the AI Chipset Industry


This market research report was originally published at Tractica's website. It is reprinted here with the permission of Tractica.

Since 2015, more than 70 companies have entered the AI chipset market and more than 100 chip starts have been announced. All of them are trying to tackle the AI algorithm acceleration problem using different techniques. These companies range from cloud companies to top semiconductor companies to startups. Intel, for instance, has poured billions of dollars into AI via its acquisitions of Altera and Mobileye. Many startups have raised capital exceeding $100 million. Wave Computing recently announced it had raised $86 million in its Series E round. Habana, another AI chipset company, closed $75 million in a Series B round recently. The need for AI chipsets has injected new life into the semiconductor industry, which has been somewhat quiet for the last decade. Cambricon, an AI chip company from China, became the first unicorn chip company, with a valuation as high as $2.5 billion in its latest round of funding.

There’s No Question…

However, there shouldn’t be any question about it: Nvidia is the de facto leader in the AI chipset industry. Almost singlehandedly, the company has created a new market for AI servers and workstations and has already reached a $2 billion run rate. It is certainly being challenged by many startups and other semiconductor companies, but Nvidia is doing extremely well in dealing with the potential competition in terms of both innovation and execution. In fact, over the past few years, Nvidia has actually moved much faster than the competition.

How fast? Here are some comparison points. Keynotes from Nvidia’s CEO Jensen Huang have always been among the most prominent takeaways at Nvidia’s GPU Technology Conferences (GTCs). Looking back through the GTC archives, there was no mention of computer vision or AI in the 2013 keynote. In 2014, a few minutes were dedicated to computer vision, and by 2015, the conference was all about it. In 2016, Nvidia introduced the DGX-1 with Pascal, delivering 170 TFLOPS. In 2017, the company introduced a new DGX with the V100, offering 1 petaFLOPS of compute (tensor FLOPS), and 2018 saw the introduction of the 2 petaFLOPS DGX-2.

In the past two years, Nvidia has successfully introduced two very complicated chips and servers that are already in production. Volta, a graphics processing unit (GPU) targeted at data center AI acceleration and already shipping, measures 815 mm², a large die by any standard. Nvidia’s automotive Xavier SoC is one of the most complex systems-on-chip (SoCs) ever designed; it combines Arm CPU cores, a GPU, a vision accelerator, and an image processing pipeline, and offers 30 TOPS of performance.

…and No Comparison

Most (if not all) of the startups building comparable chipsets are still in the sampling stages, even those that started back in 2016. Graphcore, for instance, just announced the availability of its pods. Wave Computing is sampling, and there is no official word from Cerebras or Groq on their production schedules. Even other top semiconductor companies are struggling: Intel’s Nervana has been delayed and will not be out until later in 2019. AI chipset startups with smaller chips targeting the embedded market are shipping, but no one with a large chip that competes directly with Nvidia’s products has gotten to market yet.

Certainly, having lots of dollars in the bank helps, and Nvidia has not been shy about investing: it has reportedly spent $3 billion on the Volta platform and $2 billion on Xavier. And Nvidia has not stopped there. It has created one of the industry’s most comprehensive software stacks for deep learning, extending from cloud containers to AI frameworks to AI libraries to low-level acceleration frameworks. All of this, along with the adaptation of a GPU architecture that was already in production, has helped Nvidia get to market fast and generate billions of dollars in revenue.

True Competition Will Come

As for the competition, each company has its own take on the AI algorithm acceleration problem and has designed architectures from the ground up to tackle this issue. All are going to market with their own bags of tricks that promise significant gains in performance while reducing the power required. Lots of money is being poured into AI chipsets, which is good news for the industry overall. When these companies start shipping later in 2019, true competition will begin.

Anand Joshi
Principal Analyst, Tractica

