
NVIDIA (NVDA) Q3 2018 Earnings Call Transcript



Executives: Simona Jankowski - VP, IR; Colette Kress - EVP & CFO; Jen-Hsun Huang - President & CEO

Analysts: Toshiya Hari - Goldman Sachs; Stacy Rasgon - Bernstein; C.J. Muse - Evercore; Vivek Arya - Bank of America Merrill Lynch; Joseph Moore - Morgan Stanley; Craig Ellis - B. Riley & Company; Christopher Caso - Raymond James; Matthew Ramsay - Canaccord Genuity; Hans Mosesmann - Rosenblatt Securities

Operator: Good afternoon. My name is Victoria, and I'm your conference operator for today.

Welcome to NVIDIA's financial results conference call. [Operator Instructions] I'll now turn the call over to Simona Jankowski, Vice President of Investor Relations, to begin your conference.

Simona Jankowski: Thank you. Good afternoon, everyone and welcome to NVIDIA's Conference Call for the Third Quarter of Fiscal 2018. With me on the call today from NVIDIA are Jen-Hsun Huang, President and Chief Executive Officer; and Colette Kress, Executive Vice President and Chief Financial Officer.

I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. It is also being recorded. You can hear a replay by telephone until November 16, 2017. The webcast will be available for replay until next quarter's conference call to discuss Q4 and full year fiscal 2018 financial results. The content of today's call is NVIDIA's property; it cannot be reproduced or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, November 9, 2017, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO Commentary, which is posted on our website. With that, let me turn the call over to Colette.

Colette Kress: Thanks, Simona. We had an excellent quarter with record revenue in each of our four market platforms.

And every measure of profit hit record levels, reflecting the leverage of our model. Data center revenue of $501 million more than doubled from a year ago on strong adoption of our Volta platform and early traction with our inferencing portfolio. Q3 revenue reached $2.64 billion, up 32% from a year earlier, up 18% sequentially and well above our outlook of $2.35 billion. From a reporting segment perspective, GPU revenue grew 31% from last year to $2.22 billion. Tegra processor revenue rose 74% to $419 million.

Let's start with our Gaming business. Gaming revenue was $1.56 billion, up 25% year-on-year and up 32% sequentially. We saw robust demand across all regions and form factors. Our Pascal-based GPUs remained the platform of choice for gamers as evidenced by our strong demand for GeForce GTX 10-Series products. We introduced the GeForce GTX 1070 Ti which became available last week.

It complements our strong holiday lineup, ranging from the entry-level GTX 1050 to the flagship GTX 1080 Ti. A wave of great titles is arriving for the holidays, driving enthusiasm in the market. We collaborated with Activision to bring Destiny 2 to the PC early in the month. PlayerUnknown's Battlegrounds, popularly known as [indiscernible], continues to be one of the year's most successful titles. We are closely aligned with PUBG to ensure that GeForce is the best way to play the game, including bringing ShadowPlay Highlights to its 20 million players.

Last weekend, Call of Duty: World War II had a strong debut. And Star Wars Battlefront 2 will be [indiscernible]. E-sports remains one of the most important secular growth drivers in the Gaming market with a fan base that now exceeds 350 million. Last weekend, the League of Legends World Championship was held in Beijing's National Stadium, the Bird's Nest, where the 2008 Olympic Games were held. More than 40,000 fans attended live.

And online viewers were expected to break last year's record of 43 million, following in 18 languages. GPU sales also benefited from continued cryptocurrency mining. We met some of this demand with a dedicated board in our OEM business and a portion with GeForce GTX boards, though it's difficult to quantify. We remain nimble in our approach to the cryptocurrency market. It is volatile, and it does not and will not distract us from focusing on our core Gaming market.

Lastly, the Nintendo Switch console has continued to gain momentum since launching in March and also contributed to growth. Moving to Data Center; our data center business had an outstanding quarter. Revenue of $501 million more than doubled from last year and rose 20% sequentially on strong traction of the new Volta architecture. Shipments of the Tesla V100 GPU began in Q2 and ramped significantly in Q3, driven primarily by demand from cloud service providers and high-performance computing. As we have noted before, Volta delivers 10x the deep learning performance of our Pascal architecture, which was introduced just a year earlier, far outpacing Moore's Law.

The V100 is being broadly adopted by every major server OEM and cloud provider. In China, Alibaba, Baidu and Tencent announced that they are incorporating the V100 into their data center and cloud service infrastructures. In the U.S., Amazon Web Services announced that V100 instances are now available in four of its regions. Oracle Cloud has just added Tesla P100 GPUs to its infrastructure offerings and plans to expand to V100 GPUs. We expect support for the V100 from other major cloud providers as well.

In addition, all major server OEMs announced support for the V100: [indiscernible], Hewlett-Packard Enterprise, IBM and Supermicro are incorporating it in servers. And China's top server OEMs, Huawei, Inspur and Lenovo, have adopted our HGX server architecture to build a new generation of accelerated data centers with V100 GPUs. Our new offerings for the AI inference market are also gaining momentum. The recently launched TensorRT programmable inference acceleration platform opens a new market opportunity for us, improving the performance and reducing the cost of AI inferencing by orders of magnitude compared with CPUs. It supports every major deep learning framework, every network architecture and any level of network complexity.

More than 1,200 companies are already using our inference platform, including Amazon, Microsoft, Facebook, Google, Alibaba, Baidu, JD.com, [indiscernible], Hikvision and Tencent. During the quarter, we announced that the NVIDIA GPU Cloud container registry, or NGC, is now available through Amazon's cloud and will be supported soon by other cloud platforms. NGC helps developers get started with deep learning development through no-cost access to a comprehensive, easy-to-use, fully optimized deep learning software stack. It enables instant access to the most widely used GPU-accelerated frameworks. We also continued to see robust growth in our HPC business.

Next-generation supercomputers, such as the U.S. Department of Energy's Sierra and Summit systems expected to come online next year, leverage Volta's industry-leading performance, and our pipeline is strong. The past weeks have been exceptionally busy for us. We have hosted five major GPU Technology Conferences in Beijing, Munich, Taipei, Tel Aviv and Washington, with another next month in Tokyo. In a strong indication of the growing importance of GPU-accelerated computing, more than 22,000 developers, data scientists and others will come this year to our GTCs, including the main event in Silicon Valley; that's up 10x in just five years.

Other key metrics show similar gains. Over the same period, the number of NVIDIA GPU developers has grown 15x to 645,000, and the number of CUDA downloads this year is up 5x to 1.8 million. Moving to Professional Visualization; third quarter revenue grew to $239 million, up 15% from a year ago and up 2% sequentially, driven by demand for high-end real-time rendering, simulation and more powerful mobile workstations. The defense and automotive industries grew strongly, as did demand for professional VR solutions driven by Quadro P5000 and P6000 GPUs. Among key customers, Audi and BMW are deploying VR in auto showrooms.

And the U.S. Army, Navy and Homeland Security are using VR for mission training. Last month, we announced early access to NVIDIA Holodeck, the intelligent VR collaboration platform. Holodeck enables designers, developers and their customers to come together virtually from anywhere in the world in a highly realistic, collaborative and physically simulated environment. Future updates will address the growing demand for the development of deep learning techniques in virtual environments.

In automotive, revenue grew to $144 million, up 13% year-over-year and up slightly from last quarter. Among key developments this quarter, we announced DRIVE PX Pegasus, the world's first AI computer for enabling Level 5 driverless vehicles. Pegasus will deliver over 320 trillion operations per second, more than 10x its predecessor. It's powered by four high-performance AI processors in a supercomputer the size of a license plate. NVIDIA DRIVE is being used by over 25 companies to develop fully autonomous robotaxis, and DRIVE PX Pegasus will become their path to production.

It is designed for [indiscernible] certification, the industry's highest safety level, and will be available in the second half of 2018. We also introduced the DRIVE IX SDK for delivering intelligent experiences inside the vehicle. DRIVE IX provides a platform for car companies to create an always-engaged AI co-pilot. It uses deep learning networks to track head movement and gaze, and it will hold a conversation with the driver using advanced speech recognition, lip-reading and natural language understanding. We believe this will set the standard for the next generation of infotainment systems, a market that is just beginning to develop.

Finally, we announced that DHL, the world's largest mail and package delivery service, and [indiscernible], one of the world's leading automotive suppliers, will deploy a test fleet of autonomous delivery trucks next year using the NVIDIA DRIVE PX platform. DHL will outfit electric light trucks with the ZF ProAI self-driving system based on our technology. Now turning to the rest of the income statement; Q3 GAAP gross margin was 59.5% and non-GAAP gross margin was 59.7%, both up sequentially and year-over-year, reflecting continued growth in value-added platforms. GAAP operating expenses were $674 million and non-GAAP operating expenses were $570 million, consistent with our outlook and up 19% year-on-year. Investing in our key market opportunities, including Gaming, AI and self-driving cars, is essential to our future.

GAAP operating income was a record $895 million, up 40% from a year ago. Non-GAAP operating income was $1.01 billion, up 42% from a year ago. GAAP net income was a record $838 million and EPS was $1.33, up 55% and 60%, respectively, from a year earlier. Non-GAAP net income was $833 million and EPS was $1.33, up 46% and 41%, respectively from a year earlier, reflecting revenue strength as well as gross margin and operating margin expansion. We've returned $1.16 billion to shareholders so far this fiscal year through a combination of quarterly dividends and share repurchases.

We have announced a $0.01 increase in our quarterly dividend, to an annualized $0.60, effective with our Q4 fiscal year '18 dividend. We are also pleased to announce that we intend to return another $1.25 billion to shareholders in fiscal 2019 through quarterly dividends and share repurchases. Our quarterly cash flow from operations reached record levels, surpassing $1 billion for the first time at $1.16 billion. Now turning to the outlook for the fourth quarter of fiscal 2018; we expect revenue to be $2.65 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 59.7% and 60%, respectively, plus or minus 50 basis points.

GAAP and non-GAAP operating expenses are expected to be approximately $722 million and $600 million, respectively. GAAP and non-GAAP OI&E are both expected to be nominal. GAAP and non-GAAP tax rates are both expected to be 17.5%, plus or minus 1% excluding discrete items. Other financial details are included in the CFO Commentary and other information available on our website. We will now open the call for questions.

Please limit your question to one. Operator, would you please poll for questions? Thank you.

Operator: [Operator Instructions] Your first question comes from the line of Toshiya Hari with Goldman Sachs.

Toshiya Hari: Jen-Hsun, three months ago, you described the July quarter as a transition quarter for your data center business. And clearly, you guys have ramped very well into October. But if you can talk a little bit about the outlook for the next couple of quarters in data center? And particularly on the inferencing side.

I know you guys are really excited about that opportunity. So if you can share customer feedback and what your expectations are into next year in inferencing, that would be great.

Jen-Hsun Huang: Yes. As you know, we started ramping Volta very strongly this last quarter. And we started the ramp the quarter before.

And since then, every major cloud provider, from Amazon, Microsoft and Google to Baidu, Alibaba and Tencent, and even recently Oracle, has announced support for Volta and will be providing Volta for their internal use of deep learning as well as external public cloud services. We also announced that every major server computer maker in the world now supports Volta and is in the process of taking Volta out to market. HP and Dell and IBM and Cisco, and Huawei, Inspur and Lenovo in China, have all announced that they will be building families of servers around the Volta GPU. So this ramp is just the first part of supporting the build-out of GPU-accelerated servers from our company for data centers all over the world as well as cloud service providers all over the world. The applications for these GPU servers have now grown to many markets.

I've spoken about the primary segments of our Tesla GPUs. There are five of them that I talk about regularly. The first one is high-performance computing where the market is $11 billion or so. It is one of the faster growing parts of the IT industry because more and more people are using high-performance computing for doing their product development or looking for insights or predicting the market or whatever it is. And today, we represent about 15% of the world's top 500 supercomputers.

And I've repeatedly said, and I believe this completely, and I think it's becoming increasingly true, that every single supercomputer in the future will be accelerated somehow. So this is a fairly significant growth opportunity for us. The second is deep learning training, which is very much like high-performance computing: you need to do computing at a very large scale, performing trillions and trillions of iterations.

The models are getting larger and larger. Every single year, the amount of data that we're training with is increasing. And the difference between a computing platform that's fast versus one that's not could mean the difference between building a $20 million data center of high-performance computing servers for training and a $200 million one. And so with the money we save and the capability we provide, the value is incredible. The third segment, and this is the segment that you just mentioned, has to do with inference, which is when you're done developing the network, you have to deploy it to the hyperscale data centers to support the billions and billions of queries that consumers make to the Internet every day.

And this is a brand-new market for us. 100% of the world's inference is done on CPUs today. We announced very recently, this last quarter in fact, the TensorRT 3 inference acceleration platform, and in combination with our Tensor Core GPU instruction set architecture, we are able to speed up networks by a factor of 100. Now one way to think about that is, imagine whatever amount of workload you've got: if you can speed it up using our platform by a factor of 100, how much can you save? The other way to think about it is that the networks are getting larger and larger, and they're so complex now.

And we know that every network on the planet will run on our architecture, because they were trained on our architecture today. And so whether it's CNNs or RNNs or GANs or autoencoders or all the variations of those, irrespective of the precision you need to support or the size of the network, we have the ability to support them. And so you could either scale out your hyperscale data centers to support more traffic, or you can reduce your cost tremendously, or both simultaneously. The fourth segment of our data center business is providing all of that capability I just mentioned, whether it's HPC, training or inference, and turning it inside out, making it available in the public cloud. There are thousands of startups now that exist because of AI.

Everybody recognizes the importance of this new computing model. And as a result of this new tool, this new capability, all these previously unsolvable problems are now, interestingly, solvable. And so you can see startups cropping up all over the West, all over the East; there are thousands of them. And these companies either would rather not use their scarce financial resources to go build high-performance computing centers, or they don't have the skill to build out a high-performance platform the way the Internet companies can. And so these cloud providers, cloud platforms, are just a fantastic resource for them.

So it gets rented by the hour. In conjunction with that, and I mentioned that all the service providers have taken it to market, we created a registry in the cloud that containerizes these really complicated software stacks. Every one of these frameworks, with the different versions of our GPUs and different acceleration layers and different optimization techniques, we've containerized all of that for every single version and every single type of framework in the marketplace. And we put that up in a cloud registry called the [indiscernible] GPU Cloud.

And so all you have to do is download that into the cloud service provider that we've got certified, and with just one click, you are doing deep learning. And so that's the cloud service providers. The way to estimate that opportunity is that there are obviously tens of billions of dollars being invested in these AI startups. And some large proportion of the funds they raise will ultimately have to go towards high-performance computing, whether they build it themselves or rent it in the cloud.

And so I think that's a multibillion-dollar opportunity for us. And then lastly, and this is probably the largest of all the opportunities, there are the vertical industries: automotive companies developing their supercomputers to get ready for self-driving cars; healthcare companies now taking advantage of artificial intelligence to do better diagnosis of disease; manufacturing companies doing in-line inspection; robotics; and large logistics companies, like the DHL example Colette mentioned earlier. The way to think about that is all of these companies doing planning to deliver products to you through a large network of delivery systems; it is the world's largest plane [indiscernible], and whether it's Uber or Didi or Lyft or Amazon or DHL or UPS or FedEx, they all have high-performance computing problems that are now moving to deep learning. And so those are really exciting opportunities for us. So the last one is just the vertical industries.

I mean, all of these segments we're now in a position to start addressing, because we've put our GPUs in the cloud and all of our OEMs are in the process of taking these platforms out to market. And we have the ability now to address high-performance computing and deep learning training as well as inference using one common platform. And so we've been steadfast in our excitement about accelerated computing for data centers. And I think this is just the beginning of it all.
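As a rough sketch of the arithmetic behind two of these segments, the figures Jen-Hsun cites can be written out directly. Only the $200 million training build and the 10x and 100x speedup factors come from the call; the query load and per-server throughput below are hypothetical illustrations:

```python
# Back-of-the-envelope restatement of two figures from the call.

# Training: a ~10x effective speedup implies ~1/10th the data center
# cost for the same capability ($200M CPU-only vs. $20M accelerated).
cpu_only_build = 200_000_000
training_speedup = 10
accelerated_build = cpu_only_build // training_speedup
print(f"accelerated training build: ${accelerated_build:,}")  # $20,000,000

# Inference: a ~100x speedup (TensorRT 3 plus Tensor Cores, per the call)
# lets a fixed query load run on ~1/100th the servers, or lets the same
# fleet absorb ~100x the traffic. Load and throughput are assumed values.
load_qps = 1_000_000        # aggregate queries per second (assumed)
cpu_server_qps = 100        # per-CPU-server throughput (assumed)
inference_speedup = 100
cpu_servers = load_qps // cpu_server_qps                        # 10,000
gpu_servers = load_qps // (cpu_server_qps * inference_speedup)  # 100
print(f"servers needed: {cpu_servers:,} CPU vs. {gpu_servers:,} GPU")
```

Either way the math is read, shrinking the fleet or absorbing more traffic, the savings scale directly with the speedup factor.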

Operator: Your next question comes from the line of Stacy Rasgon with Bernstein Research.

Stacy Rasgon: I had a question on your Gaming seasonality into Q4. It's usually up a bit. I was wondering, do you see, I guess, drivers of a departure from normal seasonal trends, given how strong it's been sequentially and year-over-year? And as a related question, do you see your Volta volumes in Q4 exceeding Q3?
Jen-Hsun Huang: Let's see. I'll answer the last one first and then work towards the first one. I think we feel comfortable with the guidance that we provided.

But if you think about Volta, it is just in the beginning of the ramp and it's going to ramp into the market opportunities I talked about. And so my hope is that we continue to grow. And there's every evidence that the markets that we serve, that we're addressing with Volta is -- are very large markets. And so there's a lot of reasons to be hopeful about the future growth opportunities for Volta. We've primed the pump.

So cloud service providers are either announce the availability of Volta or they announce the soon availability of Volta. They're all racing to get Volta through cloud because customers are clamoring for it. The OEMs are -- we've primed the pump with OEMs and some of them are sampling now and some of them are racing to get Volta into production in the marketplace. And so I think the foundation, the demand is there. The urgent need for accelerated computing is there because Moore's Law is not scaling anymore.

And then we've primed the pump. So the demand is there, the need is there, and the foundation for getting Volta to market is primed. With respect to Gaming, what drives our Gaming business? Remember, our Gaming business is sold one unit at a time to millions and millions of people. And what drives our Gaming business is several things. As you know, e-sports is incredibly vibrant, and the reason e-sports is so unique is that people want to win, and having better gear helps.

The latency that they expect is incredibly low, and performance drives down latency; they want to be able to react as fast as they can. People want to win, and they want to make sure that the gear they use is not the reason why they didn't win. The second growth driver for us is content, the quality of content. And boy, if you look at Call of Duty or Destiny 2 or PUBG, the content just looks amazing. The AAA content just looks amazing.

And one of the things that's really unique about video games is that in order to enjoy the content and the fidelity of the content, the quality of the production value at its fullest, you need the best gear. It's very different from streaming video, very different from watching movies, where the content is what it is. But for video games, of course, it is not. And so when AAA titles come out in the later part of the year, it helps to drive platform adoption. And then lastly, social is increasingly becoming a huge part of the growth dynamics of Gaming.

People recognize how beautiful these video games are. And so they want to share their brightest moments with people, they want to share the levels they discover, they want to take pictures of the amazing graphics inside. And it is one of the primary drivers, the leading driver, in fact, of YouTube: people watching other people play video games, these broadcasters. And now, with Ansel, the world's first in-game virtual reality and surround digital camera, we have the ability to take pictures and share them with people. And so I think all of these different drivers are helping our Gaming business.

And I'm optimistic about Q4. It looks like it's going to be a great quarter.

Operator: Your next question comes from the line of C.J. Muse from Evercore.

C.J. Muse: I was hoping to ask a near-term and a longer-term question. On the near term, you talked about the health of the demand side for Volta. Curious if you're seeing any sort of restrictions on the supply side, whether it's wafers or access to high-bandwidth memory, et cetera. And then the longer-term question really revolves around CUDA. You've talked about that as being a sustainable competitive advantage for you guys entering the year.

And now that we've moved beyond HPC and hyperscale training into inference and GPU as a service, and you've hosted GTCs around the world, curious if you could elaborate on how you're seeing that advantage, how you've seen it evolve over the year and how you're thinking about CUDA as the AI standard?
Jen-Hsun Huang: Yes, thanks a lot, C.J. Well, everything that we build is complicated. Volta is the single largest processor that humanity has ever made: 21 billion transistors, 3D packaging, the fastest memories on the planet, and all of that in a couple of hundred watts, which basically means it's the most energy-efficient form of computing the world has ever known. And one single Volta replaces hundreds of CPUs. And so it's energy-efficient, it saves an enormous amount of money and it gets the job done really, really fast, which is just one of the reasons why GPU-accelerated computing is so popular now.

With respect to the outlook for our architecture: as you know, we are a one-architecture company, and that's vitally important. The reason is that there is so much software and so many tools created on top of this one architecture. On the training side, we have a whole stack of software and optimizing compilers and numeric libraries that are completely optimized for one architecture called CUDA.

On the inference side, the optimizing compilers take these huge computational graphs that come out of all of these frameworks, and these computational graphs are getting larger and larger, and their numerical precision differs from one type of network to another, from one type of application to another. Your numerical precision requirements for a self-driving car, where lives are at stake, are very different from those for counting the number of people crossing a street, and different again from detecting and tracking something very subtle in all weather conditions. And so the types of networks are changing all the time and getting larger all the time, the numerical precision is different for different applications, and we have different compute performance levels as well as energy availability levels, such that these inference compilers are likely to be some of the most complex software in the world.

And so the fact that we have one singular architecture to optimize for, whether it's HPC for numerics, molecular dynamics, computational chemistry and biology and astrophysics, all the way to training and inference, gives us just enormous leverage. And that's the reason why NVIDIA can be an 11,000-person company and, arguably, perform at a level that is 10x that. The reason is that we have one singular architecture that is accruing benefits over time, instead of three, four, five different architectures where your software organization is broken up into all these different, small, subcritical-mass pieces. And so it's a huge advantage for us.

And it's a huge advantage for the industry. People who support CUDA know that the next-generation architecture will just bring them a benefit; they go along for the ride that technology advancement provides and affords them, okay? So I think it's an advantage that is growing exponentially, frankly. And I'm excited about it.

Operator: Your next question comes from the line of Vivek Arya with Bank of America.

Vivek Arya: Congratulations on the strong results and the consistent execution.

Jen-Hsun, in the last few months, we have seen a lot of announcements from Intel, from Xilinx and others describing other approaches to the AI market. My question is, how does the customer make the decision whether to use a GPU or an FPGA or an ASIC, right? What can remain a competitive differentiator over the longer term? And does your position in the training market also then maybe give you a leg up when they consider solutions for the inference part of the problem?
Jen-Hsun Huang: Yes, thank you, Vivek. So first of all, we have one architecture, and people know our commitment to our GPUs, our commitment to CUDA, our commitment to all of the software stacks that run on top of our GPUs: every single one of the 500 applications, every numerical solver, every CUDA compiler, every toolchain, across every single operating system and every single computing platform. We are completely dedicated to it. We will support the software for as long as we shall live. And as a result, the benefits of their investment in CUDA just continue to accrue.

You have no idea how many people send me notes about how they literally take out their old GPU, put in a new GPU, and without lifting a finger, things got 2x, 3x, 4x faster than what they were doing before. Incredible value to customers. The fact that we are singularly focused and completely dedicated to this one architecture in an unwavering way allows everybody to trust us and know that we will support it for as long as we shall live, and that is the benefit of an architectural strategy. When you have four or five different architectures to support, and you offer them to your customers and ask them to pick the one they like best, you're essentially saying that you're not sure which one is the best. And we all know that nobody's going to be able to support five architectures forever.

And as a result, something has to give, and it would be really unfortunate for a customer to have chosen the wrong one. And if there are five architectures, surely, over time, 80% of them will be wrong. And so I think that our advantage is that we are singularly focused. With respect to FPGAs, I think FPGAs have their place.

And we use FPGAs here at NVIDIA to prototype things. But an FPGA is a chip-design approach: it's incredibly good at being a flexible substrate that can become any chip, and so that's its advantage. Our advantage is that we have a programming environment, and writing software is a lot easier than designing chips. And if it's within a domain that we focus on; for example, we're not focused on network packet processing, but we are very focused on deep learning.

We are very focused on high performance and parallel numeric analysis. If we're focused on those domains, our platform is really quite unbeatable. And so that's how you think through that. I hope that was helpful.

Operator: Your next question comes from Atif Malik with Citi.

Atif Malik: Colette, on the last call, you mentioned that crypto was $150 million in the OEM line in the July quarter. Can you quantify how much crypto was in the October quarter? And expectations for the January quarter, directionally? And just longer term, why should we think that crypto won't impact gaming demand in the future? If you can just talk about the steps you have taken with respect to having the different mode and all that?

Colette Kress: So in our results, in the OEM results, our specific crypto boards equated to about $70 million of revenue, which is comparable to the $150 million that we saw last quarter.

Jen-Hsun Huang: Yes. Our longer term, Atif -- well, first of all, thank you for that. The longer-term way to think about it is that crypto is small for us, but not 0.

And I believe that crypto will be around for some time, kind of like today. There will be new currencies emerging, existing currencies will grow in value, and the interest in mining the new crypto algorithms that emerge is going to continue. And so I think for some time, we're going to see that crypto will be a small but not 0 part of our business. When you think about crypto in the context of our company overall, the thing to remember is that we're the largest GPU computing company in the world.

And our overall GPU business is really sizable, and we have multiple segments. There's data center, and I've already talked about the five different segments within data center. There's [indiscernible], and even that has multiple segments within it -- whether it's rendering or computer-aided design or broadcast, in a workstation, in a laptop or in a data center, the architecture is rather different. And of course, you know that we have high performance computing, you know that we have an autonomous machine business -- self-driving cars and robotics. And you know, of course, that we have gaming. And so these different segments are all quite large and growing.

And so my sense is that although crypto will be here to stay, it will remain small but not zero.

Operator: Your next question comes from the line of Joe Moore with Morgan Stanley.

Joseph Moore: Just following up on that last question. You mentioned that some of the crypto market had moved to traditional gaming. What drives that? Is there a lack of availability of the specialized crypto product? Or is it just that there's a preference for the gaming-oriented crypto solutions?
Jen-Hsun Huang: Yes, Joe, I appreciate you asking that.

Here's the reason why. What happens is when a digital currency market becomes very large, it entices somebody to build a custom ASIC for it. And of course, Bitcoin is the perfect example of that. Bitcoin is incredibly easy to design in its specialized chip form. But then what happens is a couple of different players start to monopolize the marketplace.

As a result, it chases everybody out of the mining market, and it encourages a new currency to emerge. And with a new currency, the only way to get people to mine is if it's hard to mine, okay? You've got to put some effort into it. However, you want a lot of people to try to mine it. And so the ideal platform for new emerging digital currencies turns out to be a CUDA GPU. And the reason for that is because there are several hundred million NVIDIA GPUs in the marketplace.

If you want to create a new cryptocurrency algorithm, optimizing for our GPUs is really quite ideal. It's hard to do; it's hard to do, and therefore you need a lot of computation to do it. And yet there are enough GPUs in the marketplace -- it's such an open platform -- that the barrier to entry for somebody to get in and start mining is very low. And so those are the cycles of these digital currencies, and that's the reason why I say that crypto usage of GPUs will be small but not 0 for some time.

And it's small because when it gets big, somebody will be able to build a custom ASIC. But if somebody builds a custom ASIC, there will be a new emerging cryptocurrency. So it ebbs and flows.

Operator: Your next question comes from the line of Craig Ellis with B. Riley.

Craig Ellis: Jen-Hsun, congratulations on data center annualizing at $2 billion -- it's a huge milestone. I wanted to follow up with a question on some of your comments regarding data center partners, because as I look back over the last five years, I just don't see any precedent for the momentum that you have in the marketplace right now between your server partners, white box partners, hyperscale partners that are deploying it, hosted, et cetera. And so my question is, relative to the doubling that we've seen year-on-year in each of the last two years, what does that partner expansion mean for data center's growth? And then if I could sneak one more in: two new products just announced in the gaming platform, the 1070 Ti and a Collector's Edition Titan Xp. What does that mean for the gaming platform?
Jen-Hsun Huang: Yes, Craig, thanks a lot.

Let's see. We have never created a product that is as broadly supported by the industry, that has grown nine consecutive quarters, that has doubled year-over-year, and that has partnerships of the scale that we're looking at. We have just never created a product like that before. And I think the reasons for that are severalfold.

The first is that it is true that CPU scaling has come to an end. That's just laws of physics. The end of Moore's Law is just laws of physics. And yet the world of software development and the problems that computing can help solve are growing faster than at any time before. Nobody's ever seen a large-scale planning problem like Amazon before.

Nobody's ever seen a large planning problem like Didi before -- the number of millions of taxi rides per week is just staggering. And so nobody's ever seen large-scale problems like these before, and so high performance computing and accelerated computing using GPUs has become recognized as the path forward. And so I think that, at the highest level, that's the most important factor. Second is artificial intelligence -- its emergence and its application to solving problems that we historically thought were unsolvable. Solving the unsolvable problems is a real realization.

I mean, this is happening across just about every industry we know, whether it's Internet service providers, healthcare, manufacturing, transportation, logistics -- you name it, financial services. And so I think artificial intelligence is a real tool. Deep learning is a real tool that can help solve some of the world's unsolvable problems. And I think that our dedication to high performance computing and this one singular architecture, our seven-year head start, if you will, in deep learning, and our early recognition of the importance of this new computing approach -- both the timing of it, the fact that it was naturally a perfect fit for the skills that we have, and the incredible effectiveness of this approach -- I think have really created the perfect conditions for our architecture.

And so I really appreciate you noticing that. This is definitely the most successful product line in the history of our company.

Operator: Your next question comes from the line of Chris Caso with Raymond James.

Christopher Caso: I have a question on the automotive market and the outlook there. And interestingly, with the other segments growing as quickly as they are, auto is becoming a smaller percentage of revenue now.

And certainly, the design traction seems very positive. Can you talk about the ramp -- when could we see auto revenue getting back to a similar percentage of revenue? Is it growing more quickly? Do you think that is likely to happen over the next year with some of these design wins coming out? Or is that something we should be waiting for over several years?
Jen-Hsun Huang: I appreciate that, Chris. So the way to think about it is, as you know, we've really, really reduced our emphasis on infotainment, even though that's the primary part of our revenues there, so that we could take literally hundreds of engineers -- including the processors that we're building now, a couple of thousand, 2,000, 3,000 engineers -- working on our autonomous machine and artificial intelligence platform for this marketplace, to take advantage of the position we have and to go after this amazing revolution that's about to happen. I happen to believe that everything that moves will be autonomous someday. And it could be a bus, a truck, a shuttle, a car.

Everything that moves will be autonomous someday -- it could be a delivery vehicle, it could be little robots moving around warehouses, it could be delivering a pizza to you. And we felt that this was such an incredibly great challenge and such a great compute problem that we decided to dedicate ourselves to it over the next several years. If you look at our DRIVE PX platform today, there are over 200 companies working on it; 125 startups are working on it. And these companies are mapping companies, Tier 1s, OEMs, shuttle companies, car companies, trucking companies, taxi companies.

And this last quarter, we announced an extension of our DRIVE PX platform to include DRIVE PX Pegasus, which is now the world's first auto-grade, full [indiscernible] platform for robotaxis. And so I think our position is really excellent, and the investment has proven to be one of the best ever. In terms of revenues, my expectation is that this coming year, we'll enjoy revenues as a result of the supercomputers that customers will have to buy for training their networks, for simulating all of these autonomous vehicles driving and for developing their self-driving cars. And we'll see fairly large quantities of development systems being sold this coming year. The year after that, I think, is the year when you're going to see the robotaxis ramping, and our economics in every robotaxi is several thousand dollars.

And then starting, I would say, 2021, 2022, you're going to start to see the first fully autonomous cars -- what people call Level 4 cars -- starting to hit the road. And so that's kind of how I see it. Next year is simulation environments, development systems, supercomputers. The year after that is robotaxis. And then a year or two after that will be all the self-driving cars.

Operator: Your next question comes from the line of Matt Ramsey with Canaccord Genuity.

Matthew Ramsay: I have, I guess, a two-part question on gross margin. Colette, I remember, maybe 3 or 3.5 years ago at Analyst Day, you guys were talking about gross margins in the mid-50s, and that was inclusive of the Intel payment. And now you're hitting numbers at 60% excluding that. Could you talk a little bit about how the mix of the data center business and some others drives gross margin going forward? And maybe, Jen-Hsun, you could talk a little bit about -- you mentioned Volta being such a huge chip in terms of transistor count.

How are you thinking about taking costs out of that product as you ramp it into gaming next year, and the effects on gross margins?

Colette Kress: Thanks, Matt, for the question. Yes, we've been on a steady path of increasing gross margins over the years. That is the evolution of the entire model -- the model of the value-added platforms that we sell, inclusive of the entire ecosystem of work that we do and the software that we enable in so many of the platforms that we bring to market.

Data center is one of them; our ProVis is another. And if you think about all of the work that we have in terms of gaming -- that overall expansion of the ecosystem -- this is what has been continuing to increase our gross margin. Mix is more of a quarter-by-quarter statement: each quarter we have a different mix of products, and some of them have a little bit of seasonality. And depending on when some of those platforms come to market, we can have a mix change within some of those subsets.

It's still going to be our focus as we go forward to grow gross margins as best we can. You can see from our guidance into Q4, which we feel comfortable with, that we expect to increase them as well.

Jen-Hsun Huang: Yes. With respect to yield enhancement, the way to think about it is we do it in several ways. The first thing is I'm just incredibly proud of the technology group that we have in VLSI, and they get us ready for these brand new nodes, whether it's the process readiness, the circuit readiness, the packaging or the memory readiness.

The readiness is so incredibly important for us because these processors that we're creating are really, really hard. They're the largest things in the world. And so we get one shot at it. And so the team does everything they can to essentially prepare us. And by the time that we tape out a product for real, we know for certain that we can build it.

And so the technology team in our company is just world-class. Absolutely world-class; there's nothing like it. Then once we go into production, we get the benefit of ramping up the products. And as yields improve, we'll surely benefit on cost. But that's not really where the focus is.

I mean, in the final analysis, the real focus for us is to continue to improve the software stack on top of our processors. And the reason for that is each one of our processors carries with it an enormous amount of memory and systems and networking -- the whole data center. For most of our data center products, if we can improve the throughput of a data center by another 50% -- or in our case, oftentimes, we'll improve something from 2x to 4x -- the way to think about that is that a billion-dollar data center just improved its productivity by a factor of two. And all of the software work that we do on top of CUDA, and the incredible work that we do with optimizing compilers and graph analytics -- all of that then translates to value for our customers, measured not in dollars but in hundreds of millions of dollars. And that's really the leverage of accelerated computing.

Operator: Your next question comes from the line of Hans Mosesmann with Rosenblatt.

Hans Mosesmann: Jen-Hsun, can you comment on some of the issues this week regarding Intel and their renewed interest in getting into the graphic space and their relationship at the chip level with AMD?
Jen-Hsun Huang: Yes, thanks, Hans. There's a lot of news out there. I guess some of the things I take away: first of all, Raj leaving AMD is a great loss for AMD. And it's a recognition by Intel, probably, that the GPU is just incredibly, incredibly important now.

And the modern GPU is not a graphics accelerator. With the modern GPU, we just left the letter G in there. But these processors are domain-specific parallel accelerators. And they're enormously complex. They're the most complex processors built by anybody on the planet today.

And that's the reason why IBM uses our processors for the world's largest supercomputers; that's the reason why every major cloud, every major server maker in the world has adopted NVIDIA GPUs. It's just incredibly hard to do. The amount of software engineering that goes on top of it is significant as well. So if you look at the way we do things, we plan a roadmap about five years out. It takes about three years to build a new generation, and we build multiple GPUs at the same time.

And on top of that, there are some 5,000 engineers working on system software and numeric libraries and solvers and compilers and graph analytics and cloud platforms and virtualization stacks in order to make this computing architecture useful to all of the people that we serve. And so when you think about it from that perspective, it's just an enormous undertaking -- arguably the most significant undertaking of any processor in the world today. And that's the reason why we are able to speed up applications by a factor of 100. You don't walk in with a new widget and a few transistors and all of a sudden speed up applications by a factor of 100 or 50 or 20.

That's just something that's inconceivable unless you do the type of innovation that we do. And then lastly, with respect to the chip that they built together, I think it goes without saying now that the energy efficiency of Pascal GeForce and the Max-Q design technology and all of the software that we created have really set a new design point for the industry. It is now possible to build a state-of-the-art gaming notebook with the most leading-edge GeForce processors and deliver gaming experiences that are many times greater than a console, in 4K, and have that be in a laptop that's 18 millimeters thin. The combination of Pascal and Max-Q has really raised the bar. And I think that that's really the essence of it.

Operator: Unfortunately, we have run out of time. I'll now turn the call over to you for closing remarks.

Jen-Hsun Huang: We had another great quarter. Gaming is one of the fastest-growing entertainment industries, and we're well-positioned for the holidays. AI is becoming increasingly widespread in many industries throughout the world, and we're hoping to lead the way, with all major cloud providers and computer makers moving to deploy Volta. And we're building the future of autonomous driving.

We expect robotaxis using our technology to hit the road in just a couple of years. We look forward to seeing many of you at SC17 this weekend. Thank you for joining us.

Operator: This concludes today's conference call. You may now disconnect.