
NVIDIA (NVDA) Q2 2022 Earnings Call Transcript



Operator: Good afternoon. My name is Mel, and I will be your conference operator today. At this time, I would like to welcome everyone to NVIDIA's Second Quarter Earnings Call. All lines have been placed on mute to prevent any background noise. After the speakers' remarks, there will be a question and answer session.

[Operator Instructions] Thank you. Simona Jankowski, you may begin your conference.

Simona Jankowski: Thank you. Good afternoon, everyone. And welcome to NVIDIA's Conference Call for the Second Quarter of Fiscal 2022.

With me today from NVIDIA are Jensen Huang, President and Chief Executive Officer, and Colette Kress, Executive Vice President and Chief Financial Officer. I'd like to remind you that our call is being webcast live on NVIDIA's Investor Relations website. The webcast will be available for replay until the conference call to discuss our financial results for the Third Quarter of Fiscal 2022. The content of today's call is NVIDIA's property. It can't be reproduced or transcribed without our prior written consent.

During this call, we may make forward-looking statements based on current expectations. These are subject to a number of significant risks and uncertainties, and our actual results may differ materially. For a discussion of factors that could affect our future financial results and business, please refer to the disclosure in today's earnings release, our most recent Forms 10-K and 10-Q, and the reports that we may file on Form 8-K with the Securities and Exchange Commission. All our statements are made as of today, August 18, 2021, based on information currently available to us. Except as required by law, we assume no obligation to update any such statements.

During this call, we will discuss non-GAAP financial measures. You can find a reconciliation of these non-GAAP financial measures to GAAP financial measures in our CFO commentary, which is posted on our website. With that, let me turn the call over to Colette.

Colette Kress: Thanks, Simona. Q2 was another strong quarter with revenue of $6.5 billion and year-on-year growth of 68%.

We set records for total revenue, as well as for gaming, data center, and professional visualization. Starting with gaming, revenue was $3.1 billion, up 11% sequentially and up 85% from a year earlier. Demand remained exceptionally strong, outpacing supply. We are now four quarters into the Ampere architecture product cycle for gaming, and it continues to be our best ever. At COMPUTEX in June, we announced two powerful new GPUs for gamers and creators, the GeForce RTX 3080 Ti and RTX 3070 Ti, delivering 50% faster performance than their prior generation with acclaimed features such as real-time ray tracing, NVIDIA DLSS AI rendering, Reflex, and Broadcast.

Laptop demand was also very strong. OEMs adopted Ampere architecture GPUs in a record number of designs, from top-of-the-line gaming laptops to mainstream price points as low as $799, bringing the power of GeForce GPUs to gamers, students, and creators on the go. Ampere architecture-powered laptops feature our third-generation Max-Q power optimization technology that enables ultrathin designs, such as the new Alienware X15, the world's most powerful sub-16-millimeter gaming laptop. NVIDIA RTX technology has reset computer graphics and spurred our biggest ever refresh cycle.

Ampere has been our fastest-ramping gaming GPU architecture on Steam. And the combination of Turing and Ampere RTX GPUs has upgraded only about 20% of our installed base; 80% have yet to upgrade to RTX. The global eSports audience will soon approach 0.5 billion people, while the number of those who live-stream games is expected to reach over 700 million. The number of PC gamers on Steam is up almost 20% over the past year.

More than 60 RTX games now support NVIDIA's RTX ray tracing or DLSS, including today's biggest game franchises, such as Minecraft, Fortnite, and Cyberpunk. New RTX games this quarter include Red Dead Redemption 2, one of the top-rated games of all time, popular titles like Rainbow Six Siege and Rust, and Minecraft RTX in China, with over 400 million players. For competitive gamers, NVIDIA Reflex, which reduces latency, is now supported by 20 games. Let me say a few words on cryptocurrency mining. In an effort to address the needs of miners and direct GeForce to gamers, we increased the supply of Cryptocurrency Mining Processors, or CMPs, and introduced low hash rate GeForce GPUs with limited Ethereum mining capability.

Over 80% of our Ampere architecture-based GeForce shipments in the quarter were low hash rate GPUs. The contribution of crypto mining to gaming revenue is difficult to quantify. CMP revenue, which is recognized in OEM, was $266 million, lower than our original $400 million estimate on reduced mining profitability, and we expect a minimal contribution from CMP going forward. GeForce NOW reached a new milestone this quarter, surpassing 1,000 PC games, more than any other cloud gaming service. The premium tier is available for a subscription of $10 per month, giving gamers access to RTX-class performance, even on an underpowered PC, Mac, Chromebook, iOS, or Android device.

Moving to pro visualization. Q2 revenue was a record $519 million, up 40% sequentially and up 156% year-on-year. Strong sequential revenue growth was led by desktop workstations, driven by demand to outfit design offices at home as remote work becomes the norm across industries. This is also the first big quarter of the Ampere architecture ramp for pro visualization. Key verticals driving Q2 demand include automotive, public sector, and healthcare.

At SIGGRAPH last week, we announced an expansion of NVIDIA Omniverse, our simulation and collaboration platform that provides the foundation of the Metaverse. Through new integrations with Blender, the world's leading open-source 3D animation tool, and Adobe, we're opening the Omniverse platform to millions of additional users. We are also collaborating with Apple and Pixar to bring advanced physics capabilities to Pixar's Universal Scene Description framework, embracing open standards to provide 3D workflows to billions of devices. Omniverse Enterprise software is in the early access stage and will be generally available later this year on a subscription basis from NVIDIA's partners, including Dell, HP, Lenovo, and many others. Over 500 companies are evaluating Omniverse Enterprise, including BMW, Volvo, and Lockheed Martin.

And more than 50,000 individual creators have downloaded Omniverse since it entered open beta in December. Moving to automotive. Our Q2 revenue was $152 million, down 1% sequentially and up 37% year-on-year. Sequential revenue declines in infotainment were largely offset by growth in self-driving. Looking further out, we have substantial design wins set to ramp that we expect will drive a major inflection in revenue in the coming years. This quarter, we announced several additional wins.

Self-driving startup AutoX unveiled its latest autonomous driving platform for robotaxis powered by NVIDIA DRIVE. The performance and safety capabilities of the software-defined NVIDIA DRIVE platform have enabled AutoX to become one of the first companies in the world to provide full self-driving mobility services without the need for a safety driver. In autonomous trucking, DRIVE ecosystem partner Plus signed a deal with Amazon to provide at least 1,000 self-driving systems to Amazon's fleet of delivery vehicles. These systems are powered by NVIDIA DRIVE for high-performance, energy-efficient, and centralized AI compute. Autonomous trucking startup Embark is also building on NVIDIA DRIVE.

Its system is being developed for trucks from four major OEMs, Freightliner, Navistar International, PACCAR, and Volvo, representing the vast majority of Class 8, or largest-size, trucks in the U.S. The NVIDIA DRIVE platform is being rapidly adopted across the transportation industry, from passenger-owned vehicles to robotaxis to trucking and delivery vehicles. We believe everything that moves will be autonomous someday. Moving to Data Center. Revenue of $2.4 billion grew 16% sequentially and 35% from the year-ago quarter.

The year-ago quarter was our first quarter to include Mellanox. Growth was driven by both hyperscale customers and vertical industries, each of which had record revenues. Our flagship A100 continued to ramp across hyperscale and cloud computing customers, with Microsoft Azure announcing general availability in June, following AWS and Google Cloud Platform's general availability in prior quarters. Vertical industry demand was strong, with sequential growth led by financial services, supercomputing, and telecom customers. We also had exceptional growth in inference, which reached a record, more than doubling year-on-year.

Revenue from inference-focused processors includes the new A30 GPU, which provides four times the inference performance of the T4. Customers are turning to NVIDIA GPUs to take AI to production, shifting from CPUs to GPUs, driven by the stringent performance, latency, and cost requirements of deploying and scaling deep learning AI workloads. NVIDIA networking products also posted solid results. We see momentum across regions driven by our technology leadership, with upgrades to high-speed products such as ConnectX-6, as well as new customer wins across cloud service providers, enterprise, and high-performance computing. We extended our leadership in supercomputing. The latest TOP500 list shows that NVIDIA technologies power 342 of the world's top 500 supercomputers, including 70% of all new systems and eight of the top 10, helping companies harness the new industrial high-performance computing revolution.

We deliver a turnkey AI data center solution with the NVIDIA DGX SuperPOD, the same technology that powers our new Cambridge-1 supercomputer in the UK and a number of others in the top 500. We expanded our AI software and subscription offerings, making it easier for enterprises to adopt AI from the initial development stage through to deployment and operations. We announced NVIDIA Base Command, our software-as-a-service offering for operating and managing large-scale, multi-user, and multi-team AI development workloads on DGX SuperPOD. Base Command is the operating and management system software for distributed training customers. We also announced the general availability of NVIDIA Fleet Command, our managed edge AI software-as-a-service offering.

Fleet Command helps companies solve the problem of securely deploying and managing AI applications across thousands of remote locations, combining the efficiency and simplicity of central management with the cost, performance, and data sovereignty benefits of real-time processing at the edge. Early adopters of Fleet Command include some of the world's leading retail, manufacturing, and logistics companies and the specialty software companies that work with them. The new NVIDIA Base Command and Fleet Command software and subscription offerings followed last quarter's announcement of the NVIDIA AI Enterprise software suite, which is in early access with general availability expected soon. Our enterprise software strategy is supported by the NVIDIA-Certified Systems program with the server OEMs, which are bringing to market over 55 systems ready to run NVIDIA's AI software out of the box, to help enterprises simplify and accelerate their AI deployments. The NVIDIA ecosystem keeps getting stronger.

NVIDIA Inception, our acceleration platform for AI startups, just surpassed 8,500 members, with cumulative funding of over $60 billion and members in 90 countries. Inception is one of the largest AI startup ecosystems in the world. CUDA has now been downloaded 27 million times since it launched 15 years ago, with 7 million in the last year alone. TensorRT for inference has been downloaded nearly 2.5 million times across more than 27,000 companies. And the total number of developers in the NVIDIA ecosystem now exceeds 2.6 million, up 4 times in the past 4 years.

Let me give you a quick update on Arm. In the nearly one year since we initially agreed to combine with Arm, we have gotten to know the company, its business, and its people much better. We believe more than ever in the power of our combination, and the benefits it would deliver for Arm, for the UK, and for its customers across the world in the era of AI. Arm has great potential. We love its business model and are committed to keeping its open licensing approach.

And with NVIDIA's scale and capabilities, Arm will make more IP, and sooner, for its mobile and embedded customers while expanding into data center, IoT, and other new markets. NVIDIA accelerates computing, which starts with the CPU. Wherever new markets open with the CPU, our accelerated computing opportunities open with them. We've announced accelerated platforms for Amazon Graviton, Ampere Computing, MediaTek, and more, expanding from cloud computing, AI, cloud gaming, supercomputing, and edge AI to Chrome PCs. We plan to invest in the U.K.

and we have with the Cambridge-1 supercomputer, and through Arm, to make the U.K. a global center of science, technology, and AI. We are working through the regulatory process, although some Arm licensees have expressed concerns and objected to the transaction, and discussions with regulators are taking longer than initially thought. We are confident in the deal and that regulators should recognize the benefits of the acquisition to Arm, its licensees, and the industry.

Moving to the rest of the P&L, GAAP gross margin of 64.8% for the second quarter was up 600 basis points from a year earlier, reflecting the absence of certain Mellanox acquisition-related costs, and up 70 basis points sequentially. Non-GAAP gross margin was 66.7%, up 70 basis points from a year earlier and up 50 basis points sequentially, reflecting higher ASPs within desktop GeForce GPUs and our continued growth in high-end Ampere architecture products, partially offset by a mix shift within Data Center. Q2 GAAP EPS was $0.94, up 276% from a year earlier. Non-GAAP EPS was $1.04, up 89% from a year earlier, adjusted for the four-for-one stock split effective this quarter. Q2 cash flow from operations was a record $2.7 billion.

Let me turn to the outlook for the Third Quarter of Fiscal 2022. We expect another strong quarter, with sequential growth driven largely by accelerating demand in Data Center. In addition, we expect sequential growth in each of our three other market platforms. Gaming demand is continuing to exceed supply, and we expect channel inventories to remain below target levels as we exit Q3. The contribution of CMP to our revenue outlook is minimal.

Revenue is expected to be $6.8 billion, plus or minus 2%. GAAP and non-GAAP gross margins are expected to be 65.2% and 67%, respectively, plus or minus 50 basis points. GAAP and non-GAAP operating expenses are expected to be approximately $1.96 billion and $1.37 billion, respectively. GAAP and non-GAAP other income and expenses are both expected to be an expense of approximately $60 million, excluding gains and losses on equity securities. GAAP and non-GAAP tax rates are expected to be 11%, plus or minus 1%, excluding discrete items. Capital expenditures are expected to be approximately $200 million to $225 million.

Further financial details are included in the CFO commentary and other information available on our IR website. In closing, let me highlight upcoming events for the financial community. We will be attending the following virtual events: the BMO Technology Summit on August 24th, the New Street Big Ideas in Semiconductors Conference on September 9th, the Citi Global Tech Conference on September 13th, the Piper Sandler Global Technology Conference on September 14th, and the Evercore ISI Auto Tech and AI Forum on September 21st. Our earnings call to discuss third-quarter results is scheduled for Wednesday, November 17th.

We will now open the call for questions. Operator, would you please poll for questions?

Operator: Thank you. [Operator Instructions]. Your first question comes from the line of Vivek Arya of Bank of America. Your line is now open.

You may ask your question.

Vivek Arya: Thanks for taking my question. I actually had a near-term and a longer-term question on the data center. I think near-term you mentioned the possibility of accelerating data center growth from the 35% rate. I was hoping you could give us some more color around that confidence and visibility.

And then longer-term, Jensen, we've seen a lot of announcements from NVIDIA about your Enterprise software opportunity. I honestly don't know how to model that. It sounds very promising, but how should we model it? What problem are you trying to solve? Is it cannibalizing demand you might have otherwise seen from your public Cloud customers, or is this incremental to growth? So just any guidance or any just insights into how to think about NVIDIA's Enterprise software opportunity longer-term? Thank you.

Jensen Huang: Yes, thanks for the question. We are seeing accelerated growth -- as we've already reported, we had record revenues in both hyperscale cloud and industrial enterprise this last quarter.

And we're seeing accelerated growth. The acceleration in hyperscale and cloud comes from the cloud service providers transitioning their AI applications, which are now heavily deep learning-driven, into production. There are several things that we've spoken about in the past that make NVIDIA the ideal partner to scale up with. And there are several elements of our platform. Number one, the Ampere GPU, which is now a universal GPU for AI -- for training, but incredibly good for inference as well. Its throughput is terrific, and its response time is fast as well.

And therefore, the cost of deployment, the cost of operating the AI applications, is the lowest. The second is the introduction of TensorRT, which is our optimizing compiler that makes it possible for us to compile and optimize any AI application for our GPUs. And whether it's computer vision or natural language understanding, conversational AI, or recommender systems, the range of applications deploying AI is really quite vast. And then lastly, the software inference server that we offer is called Triton, which supports every one of our GPUs. It supports CPUs as well as GPUs.

So every internet service provider could operate their entire data center using Triton. These several things are really accelerating our growth. So the first element is the deployment and transition of deep learning AI applications into large-scale deployment. In the enterprise, as you know, every enterprise wants to race towards being a tech company and take advantage of connected clouds and connected devices and artificial intelligence to achieve it. And they have an opportunity to deploy AI services out at the edge.

And in order to do so, there are several things that have to happen. First, we have to create a computing platform that allows them to do training in the IT environment that they understand, which is virtualized and largely managed by VMware. And our collaboration with VMware, creating a new type of system that could be integrated into the enterprise, has been quite a significant effort, and it's in volume production today. The second is a server that allows the enterprise customers to deploy their AI models out to the edge. And the AI engine, the software suite that we've been developing over the last 10 years, has now been integrated into this environment and allows the enterprises to basically run AI out at the edge.

There are three elements of our software products there. The first is NVIDIA AI Enterprise. And that basically takes all of the state-of-the-art AI solvers and engines and libraries that we've industrialized and perfected over the years and makes them available under an enterprise license. The second is an operating system platform called Base Command that allows for distributed AI software development, for training and developing models. And the third is Fleet Command, which is an operating system software product that lets you operate, deploy, and manage the AI models out at the edge. These three software products, in combination with the servers called NVIDIA-Certified, taken to market through our network of partners, are our strategy to accelerate the adoption of AI by enterprise customers.

So we're really enthusiastic about entering into the software business model. This is an opportunity that could represent, of course, tens of millions of servers, all of which could be GPU-accelerated, and we believe that enterprises will be deploying and taking advantage of AI to revolutionize their industries, using a traditional enterprise software licensing business model. This could represent billions of dollars of opportunity.
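
[Illustration: as a rough sketch of how the Triton inference server Jensen describes is typically used, the snippet below queries a Triton server with its open-source Python HTTP client. The server address, the model name "resnet50", and the tensor names are hypothetical placeholders, not details from the call.]

    # Minimal sketch: send one inference request to a Triton Inference Server.
    # Assumes a server is running locally and serving a hypothetical model
    # named "resnet50" with a float32 input "input__0" of shape [1, 3, 224, 224]
    # and an output tensor named "output__0".
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Build the request: one input tensor and one requested output.
    image = np.random.rand(1, 3, 224, 224).astype(np.float32)
    infer_input = httpclient.InferInput("input__0", list(image.shape), "FP32")
    infer_input.set_data_from_numpy(image)
    requested_output = httpclient.InferRequestedOutput("output__0")

    # Run inference; Triton dispatches to GPU or CPU backends as configured.
    response = client.infer(model_name="resnet50",
                            inputs=[infer_input],
                            outputs=[requested_output])
    scores = response.as_numpy("output__0")
    print("top class:", int(scores.argmax()))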

Operator: Thank you.

The next question comes from the line of Stacy Rasgon of Bernstein. Your line is now open. You may ask your question.

Stacy Rasgon: Hi, guys. Thanks for taking my questions.

I wanted to go back, Colette, the sequential guidance, you gave a little bit of colour by segments. I've been looking at your gaming revenues. It's like three quarters in a row, you've been up, call it ballpark, 10% or 11%. And my understanding is that was a function of your ability to bring on supply. So I guess what does the supply issue look like as you're going from Q2 into Q3? And do you think you can still maintain that kind of sequential growth or does it dial down because I also need to -- I also would play that against your other commentary suggesting that the sequential growth -- and I assume on a dollar basis would be driven primarily by Data Center.

So how do I think about the interplay within those comments of sequential growth of gaming, especially given the trajectories out of the last several quarters?

Colette Kress: Yeah. Let me start, and I'll let Jensen add a bit, Stacy, to your question. We're providing guidance for Q3 of $6.8 billion in revenue. Now, excluding CMP, we expect our revenue to grow over $500 million sequentially. The lion's share of that sequential revenue increase will be coming from Data Center.

We do expect gaming to be up slightly on a sequential basis, but remember, we are still supply constrained. Automotive and ProViz are also expected to be up slightly quarter-over-quarter. And from the CMP perspective, we'll probably just have minimal amounts in Q3. So our Q3 results don't reflect seasonality in gaming and are really about the supply that we believe we can have for Q3. I'll see if Jensen wants to add any more color.

Jensen Huang: Yes, thanks for your question. As you know, RTX is a fundamental reset of computer graphics. This is a technology called ray tracing that has been the holy grail of computer graphics for quite a long time, for 35 years, and after 10 years of NVIDIA research, we finally made it possible to do real-time ray tracing with RTX. Demand for RTX is quite incredible, and as you know, we have a large installed base that uses an architecture called GTX, based on programmable shaders that we invented some 20 years ago. And now, we've reset the entire installed base, and Ampere is off to just an incredible start, the best-selling GPU architecture in the history of our company.

And yet, we've only upgraded some 20% -- less than 20% of our total installed base. There's another 80% of the world's PC gaming market that we have yet to upgrade to RTX. Meanwhile, the number of PC gamers in the world grew substantially. Steam grew 20% this last year. And so I think the -- we're right at the beginning of our RTX transition.

Meanwhile, computer graphics has expanded into so many different new markets. RTX, we've always believed, would reinvent the way that people do design. And we're seeing that happening right now as we speak, as workstations are growing faster than ever and have achieved record revenues. And at the same time, because of all of our work with cloud gaming, we now see public clouds putting in cloud graphics, whether it's workstations or PCs or gaming consoles up in the cloud. So we're seeing strong demand in PCs, in laptops, in workstations, in mobile workstations, in the cloud.

And so RTX is really doing great. Our challenge there is that demand is so much greater than supply. And again, as Colette said, [Indiscernible].

Operator: Thank you. The next question comes from the line of Matt Ramsay of Cowen.

Your line is now open.

Matt Ramsay: Yes. Thank you very much. Good afternoon, everybody. Before my questions, Jensen, I just wanted to say congrats on the Noyce Award, that's a big honor.

For my question, I wanted to follow on Stacy's question about supply. And Colette, maybe you could give us a little bit of commentary around supply constraints in gaming in the different tiers or price tiers of your gaming cards. I'm just trying to get a better understanding of how you guys are managing supply across the different price tiers. And I guess it translates into a question of, are the gaming ASPs that we're seeing in the October quarter guidance, are those what you would call sustainable going forward, or do you feel like that mix may change as supply comes online? Thank you.

Colette Kress: I'll start here.

Thanks for the question on our overall mix as we go forward. First, our supply constraints in our gaming business are largely attributed to desktop and notebook. That can mean a lot of different things across the components that are necessary to build so many of our products. But our mix is really important. On mix, we are also seeing many of our gamers very interested in our higher-end, higher-performance products.

We will continue to see that as a driver that lifts both our overall revenue and can lift our overall gross margins. So there are quite a few different pieces to our supply that we have to think about, but we are going to try and make the best solutions for our gamers at this time.

Operator: Thank you. For the next question, we have the line from C.J. Muse from Evercore.

Your line is now open.

C.J. Muse: Yes, thank you. Good afternoon. I guess a follow-up question on the supply constraints.

When do you think that they'll ease? And how should we think about gaming into the January quarter vis-a-vis typical seasonality, given -- I would assume you would continue to be supply constrained. Thank you.

Jensen Huang: Colette, I can take it or you can. Either one of us.

Colette Kress: Go ahead, Jensen, and I'll follow up if there are some other things.

Jensen Huang: Okay.

We're supply constrained in graphics, and we're supply constrained while we're delivering record revenues in graphics. Cloud gaming is growing, cloud graphics is growing. RTX made it possible for us to address the design and creative workstation markets. Historically, the rendering of ray-traced and photorealistic images has largely been done on CPUs. For the very first time, you can actually accelerate it with NVIDIA GPUs, with RTX GPUs.

And so the workstation market is really doing great. The backdrop of that, of course, is that people are building offices in their homes. And for many of the designers and creators of the world, some 20 million of them, they have to build a workstation or an office at home as well as build one at work [Indiscernible] And meanwhile, of course, RTX has reset all of our consumer graphics, with the 200 million installed base of PC gamers, and it's time to upgrade. And so there's a whole bunch of reasons why we're achieving record revenues while we're supply constrained. We have enough supply to meet our second-half company growth plans.

And next year, we expect to be able to achieve our company's growth plans for next year. Meanwhile, we have secured and are securing pretty significant long-term supply commitments as we expand into all these different market initiatives that we've set ourselves up for. And so I would expect that we will see a supply-constrained environment for the vast majority of next year, is my guess at the moment. But a lot of that has to do with the fact that our demand is just too great. RTX is really a once-in-a-generation reset of computer -- modern computer graphics.

Nothing like this has happened [Indiscernible] computer graphics. And so the invention is really [Indiscernible] and you could see its impact.

Operator: Thank you. The next question comes from the line of Harlan Sur of JP Morgan. Your line is now open.

Harlan Sur: Good afternoon, and congratulations on the strong results, outlook, and execution. The Mellanox networking franchise -- this has been a really strong and synergistic addition to the NVIDIA compute portfolio. I think kind of near to midterm the team is benefiting from the transition to 200- and 400-gig networking connectivity in cloud and hyperscale. And then I think in addition to that, you guys are getting some good traction with the BlueField SmartNIC products. Can you just give us a sense of how the business is trending year-over-year, and do you expect continued quarter-over-quarter networking momentum into the second half of this year, especially as the cloud and hyperscalers are going through a server and capex spending cycle?

Jensen Huang: Yeah, I really appreciate that question.

We had another solid growth quarter, and the Mellanox networking business has really grown incredibly. There are three dynamics happening all at the same time. The first is the transition that you're talking about. You know that the world's data centers are moving to disaggregated computing, which basically means that a single application is running on multiple servers at the same time. This is what makes it possible for them to scale out.

The more users an AI application or service has, the more servers you just have to add. And so the ease of scale-out that disaggregated computing provides also puts enormous pressure on the networking. And Mellanox has the world's lowest-latency and highest-bandwidth, highest-performance networking on the planet. And so the ability to scale out and the ability to provide these disaggregated applications are really much, much better with Mellanox networking. So that's number one.

Number two, almost every company in the world has to be a high-performance computing company now. You see that the cloud service providers, one after another, are building effectively supercomputers. Historically it was [Indiscernible] and supercomputing firms; now the cloud service providers have to build supercomputers themselves. And the reason for that is because artificial intelligence involves gigantic models. The rate of growth of AI model sizes is incredible; the size of these models has doubled every two months.

It's doubling not every year or 2 years, it's doubling every 2 months. And so you can imagine the size when I'm talking about training AI models that are 100 trillion parameters large. The human brain has 150-plus trillion synapses, just for reference. And so that gives you a sense of the scale of AI models that people are developing. And so you're going to see supercomputers that are built out of Mellanox InfiniBand and high-speed networking, along with NVIDIA GPU computing, in more and more cloud service providers.

You're also seeing it in enterprises, used in the discovery of [Indiscernible] -- there is a digital biology revolution going on, turning it into a computational science. The large-scale computing and AI that we're able to do now let us better understand biology and better understand chemistry, bringing both of those fields into the realm of information sciences. And so you're seeing large supercomputer growth in enterprises around the world as well. And so the second dynamic has to do with our incredibly great InfiniBand networking, which is the de facto standard in high-performance computing. And the third dynamic is the data center's infrastructure software.

In order to orchestrate and run a data center with just a few people -- essentially running an entire data center of hundreds of thousands of servers as if it were just one computer in front of you -- that entire data center has to be software-defined. And the amount of software that goes into that software-defined data center, running on today's CPUs, is the networking stack, the storage stack, and now, because of zero trust, the security stack. All of that is putting enormous pressure on the available computing capacity for applications, which is ultimately what data centers are designed to do. And so the software-defined data center needs to have a place to take that infrastructure software and offload it and accelerate it.

And very importantly, to isolate it from the application plane, so that intruders can't jump from the applications into the operating system of your data center, into the rest of your data center. And so the answer to that is BlueField, the ability to offload, accelerate, and isolate the data center infrastructure software, and to free up all of your CPUs to run what they're supposed to run, which is the applications. Now, just about every data center in the world is moving towards a zero-trust model, and BlueField is just incredibly well-positioned for these three dynamics: disaggregated computing, which needs really strong and fast networking; every company needing high-performance computing; and lastly, software-defined data centers going zero trust. And so these are really important dynamics, and I appreciate the opportunity to tell you all that.

And you can just tell how super excited I am about the prospects of the networking business and the importance it has in building modern data centers.

Operator: Thank you. The next question comes from the line of Aaron Rakers of Wells Fargo. Your line is open.

Aaron Rakers: Yeah.

Thanks for taking my question. I think you hit on a lot of my questions around the Data Center in that last answer. So maybe I will just ask, kind of on a P&L basis, one of the things that I see in the results, or more importantly the guide, is that you're now, Colette, guiding to over a 67% gross margin, potentially. I'm curious, as we move forward, how do you think about the incremental gross margin upside still from here, and how are you thinking about the operating margin leverage for the Company from here through the P&L. Thank you.

Jensen Huang: Let me take that and then you could just follow up --

Colette Kress: Go forth.

Jensen Huang: -- With details, that'll be great. I think at the highest level -- I really appreciate the question. At the highest level, the important thing to realize is that artificial intelligence is the single greatest technology force that the computer industry has ever seen, and potentially the world has ever seen. The automation opportunity, which drives productivity, which translates directly to cost savings for companies, is enormous.

And it opens up opportunities for technology and computing companies like never before. And let me just give you some examples. The fact that we could apply so much technology to warehouse logistics, retail automation, customer call center automation, is really quite unprecedented. The fact that we could automate truck driving and [Indiscernible] delivery, providing an automated chauffeur. Those kinds of services and benefits and products were never imaginable before.

And so the size of the IT industry, if you will, the industry that computer companies like ourselves are part of, has expanded like never before. And so the thing that we want to do is to invest as smartly, but as quickly, as we can to go after the large business opportunities where we can make a real impact. And while doing so, to do so in a way that is architecturally sensible. One of the things that is really an advantage of our company is the nature of the way that we build products, the nature of the way that we build software, our discipline around the architecture, which allows us to be so efficient while addressing climate science on the one hand, digital biology on the other, artificial intelligence and robotics and self-driving cars. And of course, we always talked about computer graphics and video.

Using one architecture and having the ability to -- and having the discipline now for almost 30 years has given us incredible operating leverage. That's where the vast majority of our operating leverage comes from, which is architectural. The technologies are architectural, our products are architectural in that way, and the Company is even built architecturally in that way. And so hopefully, as we go after these large, large market opportunities that AI has provided us, and we do so in a smart and disciplined way, with great leverage through our architecture, we can continue to drive really great operating leverage for the Company and for our shareholders.

Operator: Thank you.

We have the next question, which comes from the line of John Pitzer of Credit Suisse. Your line is open.

John Pitzer: Yeah. Good afternoon, guys. Thanks for letting me ask a question.

I apologize for the short-term nature of the question, but it's what I get asked most frequently. I want to return to the impact of crypto or the potential impact of crypto. Colette or Jensen, is there any way to gauge the effectiveness of the low hash rate GeForce, why only 80% and not 100%? And how confident are you that the CMP business being down is a reflection of crypto cooling-off versus perhaps LHR not being that effective? And I bring it up because there's a lot of blogs out there that would suggest that there -- as much as you guys are trying to limit the ability of miners to use GeForce, there are some workarounds.

Jensen Huang: Yes. There -- go ahead.

Colette Kress: Let me start there and answer a couple of the questions about the strategy that we've put in place over these last couple of quarters. As you recall, what we put in place was the low hash rate cards, as well as putting in place the [Indiscernible] cards. The low hash rate cards were to provide more supply for our GeForce gamers that are out there. We articulated that one of the metrics we were looking at is what percentage of our Ampere cards we were able to sell as low hash rate cards. Almost all of our cards in Ampere are now low hash rate, but we're also selling other types of cards as well.

But at this time, as we move forward, we're much higher than 80%; just at the end of this last quarter, we were approximately at 80%. So yes, that is moving up, so the strategy is in place, and we'll continue it as we move into Q3. I'll move it to Jensen here to see if he can discuss it further.

Jensen Huang: The question is about the strategy of how we're steering GeForce supply to gamers. We moved incredibly fast this time with CMPs and with our LHR settings for GeForce.

And our entire strategy is about steering GeForce supply to gamers. And we have every reason to believe that was successful -- the sell-through in gaming, which is really a measure of gamers, and the rate of growth of Steam adoption of our GPUs, are some evidence of that. But there are several reasons why it's just different this time. The first reason, of course, is the LHR, which is new, and the speed at which we responded with CMPs, steering GeForce to gamers. The second is we're at the very beginning of the Ampere and RTX generation of graphics.

As I mentioned earlier, RTX was a complete reinvention of computer graphics. All the evidence is that gamers -- and game developers -- are incredibly excited about ray tracing, this form of computer rendering. Graphics rendering is just dramatically more beautiful. And we're at the beginning of that cycle, and only 20% has been upgraded so far. We have 80% to go in a market that is already quite large and an installed base that's quite low but also growing.

Last year, the number of gamers grew 20% and just [Indiscernible]. The third reason is that our demand is strong and our channel inventory is lean, and you can see we're selling through the supply as quickly as we're shipping it, virtually all over the world. And then lastly, we just have more growth drivers today because of RTX than ever. We have the biggest wave of NVIDIA laptops -- laptops are our fastest-growing segment of computing, and we have the largest wave of laptops coming.

There is demand for RTX in workstations -- whereas previously the workstation market was a slow-growing market, it is now a fast-growing market and has achieved records. And after more than a decade of working on cloud graphics, our cloud graphics is in great demand. And so all of these segments are seeing high demand while we continue to supply them [Indiscernible] so I think the situation is very different, and RTX is making a huge difference.

Operator: Thank you. We have the next question, which comes from the line of Chris Casper (phon) of Raymond James.

Your line is now open.

Chris Casper: Thank you. Good evening. My question is about the split between the hyperscale and the vertical customers in the Data Center business and the trends you see in each. I think in your prepared remarks, you said both would be up in the October quarter.

But I'm interested to see if you're seeing any different trends there, particularly in the vertical business, as perhaps business conditions normalize and Companies return to the office, and they adjust their spending plans accordingly.

Colette Kress: Yes. Let me start to answer the question and I'll let Jensen [Indiscernible]. So far with our Data Center business, with our Q2 results, our vertical industries are still quite a strong percentage. Essentially, 50% of our Data Center business is going to our vertical industries.

Our hyperscalers make up the other portion of that, slightly below 50%. And then we also have our supercomputing business, a very small percentage of it, doing quite, quite well. As we move into Q3, as we've discussed, we will see an acceleration of both our vertical industries and our hyperscalers. With that backdrop, we'll see if Jensen has additional commentary.

Jensen Huang: There is a fundamental difference in the hyperscale use of HPC or AI versus the industrial use of HPC and AI.

In the world of hyperscalers and Internet service providers, they're making recommendations on movies, and songs, and articles, and search results, and so on, so forth. And the improvement that deep learning and artificial intelligence -- large recommender systems -- can provide is really working for them. In the world of industry, the reason why artificial intelligence is transformative is different. Most of the things that I just mentioned earlier are now moving the dynamics in the world's largest industries, whether it's in healthcare or in logistics or transportation or retail.

In some of the physical sciences industries, whether it's energy or transportation, and also for healthcare, the simulation of physics, the simulation of the world, was not achievable using traditional first-principle simulation approaches. But artificial intelligence, or data-driven approaches, have completely shaken that up and turned it on its head. Some examples: fusing artificial intelligence so that you could speed up the simulation, or the prediction of protein structure -- the 3D structure of proteins -- which was recently achieved by a couple of very important networks; it's groundbreaking. And by understanding the protein structure, the 3D structure, we can better understand its function and how it would interact with other proteins and other [Indiscernible] And it's a fundamental step of the process in drug discovery, and that has just taken a giant leap forward. In the area of climate science, it is now possible to consider using data-driven approaches to create models that -- not overcome, but accelerate and make it possible for us to run much larger multi-physics, geometry-aware simulations, which is basically climate science.

These are really important fields of work that wouldn't have been possible for another decade at least. And just as we've made possible, using artificial intelligence, the realization of real-time ray tracing, in every field of science -- whether it's climate simulation, energy discovery, drug discovery -- we're starting to see the industry recognizing that the fusion of first-principle simulation and data-driven artificial intelligence approaches is going to take a giant leap forward. And that is the second dynamic. The other dynamic for the industry is that, for the very first time, they can deploy AI models out to the edge to do a better job with agriculture, to do a better job with asset protection in warehouses, to do a better job with automating retail. AI is going to make it possible for all of these types of automation to finally be realized.

And so the dynamics are all very different. That last one has to do with edge AI, which was just made possible by putting AI right at the point of data and right at the point of action, because you need to be low cost, you need to be high performance and instantly responsive, and you can't afford to stream all of the data to the cloud all the time. And so each one of them has a slightly different dynamic.

Operator: Thank you. Your final question comes from the line of Silgan Stein (phon) of [Indiscernible] Your line is open.

Silgan Stein: Great, thanks so much for taking my question. Jensen, I'm wondering if you can talk for a moment about Omniverse. This looks like really cool technology, but I tend to get very few questions from investors about it. It looks to me like this could be a potentially very meaningful technology for you longer-term. Can you explain perhaps what capabilities and what markets this is going after? It looks like, perhaps, this is going to position you very well in augmented and virtual reality, but maybe it's a different market or group of markets. It's a bit confusing to us, so if you could maybe help us understand it, I think we'd really appreciate it.

Thank you.

Jensen Huang: I really appreciate the question. And it's one of the most important things we're doing. The Omniverse, first of all, just what is it? It's a simulator. It's a simulator that's physically accurate and physically based.

And it was made possible because of two fundamental technologies we invented. One of them is, of course, RTX, the ability to physically simulate light behavior in the world, which is ray tracing. The second is the ability to compute or simulate the physics of -- to simulate the artificial intelligence behavior of agents and objects inside a world. So we have the ability now to simulate physics in a realistic way, and we created a new architecture that allows us to do it in the cloud, in a distributed computing way, and to be able to scale it out to a very large [Indiscernible] So the question is, what would you do with such a thing? The simulator is a simulation of virtual worlds, with portals -- we call them connectors -- portals based on an industry-standard, open standard that was pioneered by Pixar, and as we mentioned earlier, we're partnering with Pixar and Apple to make it even more broadly adopted.

It's called USD, Universal Scene Description. These are basically portals or wormholes into virtual worlds. And the virtual world being simulated could be a concert for consumers, it could be a theme park for consumers. In the world of industries, you could use it for simulating robots, so that robots could learn how to be robots inside these virtual worlds before they're downloaded from the simulator to the real world. You could use it to simulate factories, which is one of the early works that we've done with BMW.

We showed at GTC a Factory of the Future that is designed completely in Omniverse, with robots trained in Omniverse, and with goods and materials whose original CAD data is put into the factory. The logistics are planned like an ERP system, except this is an ERP system of physical goods and physical simulation, simulated through this Omniverse world, and you could plan the entire factory in Omniverse. This entire factory now becomes what is called a digital twin. In fact, it could be a factory, it could be a stadium, it could be an airport, it could be an entire city, it could even include the cars. The digital twin would allow us to simulate new algorithms, new AIs, new optimization algorithms before we deploy them into the physical world.

And so what is Omniverse? Well, Omniverse is going to be an overlay, if you will, of virtual worlds, which increasingly people call the Metaverse. And we've now heard several companies talk about the Metaverse. They all come from different perspectives. Some of them from a social perspective, some of them from a gaming perspective, and some of them, in our case, from an industrial and design and engineering perspective. But the Omniverse is essentially an overlay of the Internet -- an overlay of the physical world -- and it's going to fuse all these different worlds together long-term.

And you'll be able to -- you mentioned VR, and now you'll be able to go into the Omniverse worlds using virtual reality. And so you wormhole into the virtual worlds using VR. You could have an AI or an object portal into our world using augmented reality, so you could have a beautiful piece of art that you've somehow purchased and that belongs to you because of [Indiscernible] and it's only enjoyed in the virtual world, and you can overlay it into your physical world using AR. I'm fairly sure at this point that Omniverse or the Metaverse is going to be a new economy that is larger than our current economy. And we'll enjoy a lot of our time in the future in Omniverse and the Metaverse, and we'll do a lot of our work there, and we'll have a lot of robots.

They'll be doing a lot of the work on our behalf. We've got the [Indiscernible] they show the results. Omniverse to us is an extension of our AI strategy, an extension of our high-performance computing strategy, and it makes it possible for companies and industries to create digital twins that simulate their physical versions before they deploy them or while they are operating.
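
[Illustration: Jensen refers to USD, Universal Scene Description, as the open standard that Omniverse connectors use to exchange virtual worlds. Below is a minimal sketch authoring a tiny USD stage with Pixar's open-source "pxr" Python bindings; the file name, prim paths, and cube placeholder are hypothetical, chosen only to show the format, not anything described on the call.]

    # Minimal sketch: author a small USD (Universal Scene Description) stage.
    # Requires the open-source usd-core package (pip install usd-core).
    from pxr import Usd, UsdGeom, Gf

    # Create a new stage; the file name is a hypothetical placeholder.
    stage = Usd.Stage.CreateNew("factory_twin.usda")

    # A transform prim acts as the root of the virtual world.
    world = UsdGeom.Xform.Define(stage, "/World")

    # A simple placeholder asset: a robot body represented as a cube.
    robot = UsdGeom.Cube.Define(stage, "/World/RobotBody")
    robot.GetSizeAttr().Set(2.0)

    # Position the placeholder with the common transform API.
    UsdGeom.XformCommonAPI(robot.GetPrim()).SetTranslate(Gf.Vec3d(0.0, 1.0, 0.0))

    stage.SetDefaultPrim(world.GetPrim())
    stage.GetRootLayer().Save()
    print(stage.GetRootLayer().ExportToString())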

Operator: Thank you. I will now turn the call back over to Mr. Jensen Huang for closing remarks.

Jensen Huang: Thank you. We had an excellent quarter fueled by surging demand for NVIDIA computing. Our pioneering work in accelerated computing continues to advance graphics and scientific computing with AI. Enabled by NVIDIA accelerated computing, developers are creating the most impactful technologies of our time.

From natural language understanding and recommender systems, to autonomous vehicles and robots in logistics centers, to digital biology and climate science research, to the Metaverse, a world that obeys the laws of physics. This quarter we announced NVIDIA Base Command and Fleet Command to develop, deploy, scale, and orchestrate the AI workflows that run on the NVIDIA AI Enterprise software suite. With our new enterprise software, a wide range of NVIDIA-powered systems, and a global network of system and integration partners, we can accelerate the world's largest industries as they reap the benefits of the transformative power of AI. We are thrilled to have launched NVIDIA Omniverse, a simulation platform nearly five years in the making that runs physically realistic virtual worlds and connects to other digital platforms. We imagine engineers, designers, and even autonomous machines connecting to Omniverse to create digital twin simulated worlds that help train robots, operate autonomous factories, simulate fleets of autonomous vehicles, and even predict the human impact on Earth's climate.

The future will have artificial intelligence augmenting our own and the Metaverse augmenting our physical world. It will be populated by real and AI visitors, and it will open new opportunities for artists, designers, scientists, and even businesses, a whole new digital economy [Indiscernible] Omniverse is a platform for building the Metaverse vision. We're doing some of our best and most impactful work in our history. I want to thank all of NVIDIA's employees for the amazing work and the exciting future we're inventing together. Thank you.

See you next time.

Operator: Thank you. This concludes today's conference call. You may now disconnect.