Wide-moat Nvidia (NAS: NVDA) once again reported stellar quarterly results and provided investors with even rosier expectations for the upcoming quarter, as the company remains the clear winner in the race to build out generative artificial intelligence capabilities.

We’re encouraged by management’s commentary that demand for its upcoming Blackwell products should exceed supply into calendar 2025, and we see no signs of AI demand slowing either.

We raise our fair value estimate to $1,050 from $910 as we model stronger data center revenue growth over the next several quarters while maintaining our longer-term growth rates from a higher installed base of AI equipment. Shares were up about 6% on the earnings report; we think the reaction is justified and view shares as fairly valued.

Revenue in the April quarter was $26.0 billion, up 18% sequentially as more supply for Nvidia’s graphics processing units came online. Revenue was up 262% year over year and ahead of guidance of $24.0 billion.

Data center revenue remains the focus, coming in at $22.6 billion (87% of total revenue), up 23% sequentially and 427% year over year. Demand continues from cloud computing leaders as they support their customers in building AI models, as well as from enterprises, consumer internet leaders like Meta Platforms, and sovereign governments building AI into telecom and other services. Nvidia still wields exceptional pricing power, with adjusted gross margin up 220 basis points sequentially to 78.9%, ahead of guidance of 77%.

Nvidia expects July-quarter revenue to be $28.0 billion, up 8% sequentially and 107% year over year, and ahead of FactSet consensus estimates of $26.6 billion. We anticipate Nvidia will once again beat these estimates, and we model $29.7 billion of revenue. The firm has earned at least $3 billion of incremental data center revenue in each of the past three quarters, driven by AI demand, and we anticipate that a fourth similar quarter is on the near-term horizon.

Business strategy and outlook

Nvidia has a wide economic moat, thanks to its market leadership in graphics processing units, or GPUs, and the hardware and software tools needed to enable the exponentially growing market around artificial intelligence. In the long run, we expect tech titans to strive to find second sources or in-house solutions to diversify away from Nvidia in AI, but most likely, these efforts will chip away at, but not supplant, Nvidia’s AI dominance.

Nvidia’s GPUs handle parallel processing workloads, using many cores to efficiently process data at the same time. In contrast, central processing units, or CPUs, such as Intel’s processors for PCs and servers, or Apple’s processors for its Macs and iPhones, process the data of “0s and 1s” in a serial fashion. The wheelhouse of GPUs has been the gaming market, and Nvidia’s GPU graphics cards have long been considered best of breed.

More recently, parallel processing has emerged as a near-requirement to accelerate AI workloads. Nvidia took an early lead in AI GPU hardware but, more important, developed a proprietary software platform, Cuda, whose tools allow AI developers to build their models on Nvidia hardware. We believe Nvidia not only has a hardware lead but also benefits from high customer switching costs around Cuda, making it unlikely for another GPU vendor to emerge as a leader in AI training.

We think Nvidia’s prospects will be tied to the AI market, for better or worse, for quite some time. We expect leading cloud vendors to continue to invest in in-house chip development, while CPU titans AMD and Intel are working on GPUs and AI accelerators for the data center. However, we view Nvidia’s GPUs and Cuda as the industry leaders, and the firm’s massive valuation will hinge on whether, and for how long, the company can stay ahead of the rest of the pack.

Moat rating

We assign Nvidia a wide economic moat rating, thanks to intangible assets around its graphics processing units and, increasingly, switching costs around its proprietary software, such as its Cuda platform for AI tools, which enables developers to use Nvidia’s GPUs to build AI models.

Nvidia was an early leader and designer of GPUs, which were originally developed to offload graphic processing tasks on PCs and gaming consoles. Nvidia has emerged as the clear market share leader in discrete GPUs (over 80% share, per Mercury Research). We attribute Nvidia’s leadership to intangible assets associated with GPU design, as well as the associated software, frameworks, and tools required by developers to work with these GPUs. Recent introductions, such as ray-tracing technology and the use of AI tensor cores in gaming applications, are signs, in our view, that Nvidia has not lost its GPU leadership in any way. A quick scan of GPU pricing in both gaming and data center shows that Nvidia’s average selling prices can often be twice as high as those from its closest competitor, AMD.

Meanwhile, we don’t foresee any emerging company becoming a third relevant player in the GPU market outside of Nvidia or AMD. Even Intel, the chip industry behemoth, has struggled for many years to build a high-end GPU that would be adopted by gaming enthusiasts, and its next effort at a discrete GPU is slated to launch in 2025. We do see integrated GPU functionality within many of Intel’s PC processors, as well as portions of Apple’s and Qualcomm’s system-on-chip solutions in smartphones, but we perceive these integrated solutions as “good enough” for nongamers rather than on par with high-end GPU needs.

GPUs perform parallel processing, in contrast to the serial processing performed by central processing units used to run the software and applications on PCs, smartphones, and many other types of devices. CPU examples include Intel’s and AMD’s PC and server processors, as well as Apple’s and Qualcomm’s smartphone processors. These CPUs process 0s and 1s in a particular order, executing an instruction set to run software and perform functions. In contrast, parallel processing does not need to run in a linear order, which is particularly useful when, for example, processing an image. A GPU often has more cores than a CPU, and each core performs simpler work (such as handling the data within a single image pixel), but the GPU throws many more cores at the image to work through all of the data quickly. If a CPU were to conduct this task, it would have to process the pixels in order: one can envision the CPU starting at the top and working its way down the image, while the GPU handles the whole image at once.
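
To make the serial-versus-parallel distinction concrete, below is a minimal Python sketch (our own illustration, not Nvidia code) that brightens every pixel of a synthetic image in two ways: a pixel-by-pixel loop mimicking serial, CPU-style execution, and a vectorized version that applies the same operation to all pixels at once, a rough stand-in for a GPU throwing many cores at the data.

```python
import time
import numpy as np

# A synthetic 1,000 x 1,000 grayscale "image" of random pixel values in [0, 1).
image = np.random.rand(1000, 1000)

def brighten_serial(img, factor=1.1):
    # Serial, CPU-style approach: visit each pixel in order, one at a time.
    out = np.empty_like(img)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            out[r, c] = min(img[r, c] * factor, 1.0)
    return out

def brighten_vectorized(img, factor=1.1):
    # Parallel-style approach: the same independent operation is applied to
    # every pixel at once rather than one pixel after another.
    return np.minimum(img * factor, 1.0)

start = time.time()
brighten_serial(image)
print(f"pixel-by-pixel loop: {time.time() - start:.3f} s")

start = time.time()
brighten_vectorized(image)
print(f"whole image at once: {time.time() - start:.4f} s")
```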

In our view, the nature of parallel processing in GPUs is at the heart of Nvidia’s dominance in its various end markets. PC graphics were the initial key application, allowing for more robust and immersive gaming over the past couple of decades. Cryptocurrency mining also involves many mathematical calculations that can run in parallel (in other words, each calculation is independent of the others), again effectively “mining” crypto faster than a CPU.

In the past decade, however, the parallel processing of GPUs was also found to more efficiently run the matrix multiplication algorithms needed to power AI models. AI development has two key phases. The first is AI training. Using an image recognition example, developers might feed 50,000 images into a model, labeling each as either a picture of a cat or not a cat. The model looks for various aspects of each image and learns the scores and weights that best distinguish a “cat” image from the rest. These may include whiskers (although mice have whiskers too) or pointy ears (although the ears of a fox are also pointy), but a combination of all these factors may lead the model to effectively recognize cats in future images.

The second AI phase is inference, where the AI model makes a decision on an image, based on its prior training. In the cat example, the user would feed an unlabeled image into the model, and the model provides an output of whether it recognizes a cat or not.
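
As a hypothetical sketch of these two phases (our own illustration: scikit-learn’s logistic regression and synthetic “whiskers” and “pointy ears” scores stand in for real labeled photos and a real vision model), training searches for the weights that best separate cat images from the rest, and inference simply feeds a new, unlabeled example through the trained model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Training phase: a stand-in for 50,000 labeled images ---
# Each row holds two crude "features" (a whiskers score and a pointy-ears score);
# the label is 1 for "cat" and 0 for "not a cat".
n_images = 50_000
features = rng.random((n_images, 2))
labels = ((0.6 * features[:, 0] + 0.4 * features[:, 1]) > 0.55).astype(int)

# Fit the model: it learns the weight on each feature that best separates
# "cat" images from everything else.
model = LogisticRegression(max_iter=1000).fit(features, labels)
print("learned weights:", model.coef_[0])

# --- Inference phase ---
# Feed an unlabeled "image" into the trained model and read off its decision.
new_image = np.array([[0.9, 0.8]])  # strong whiskers and pointy-ears scores (made up)
print("cat?", bool(model.predict(new_image)[0]))
print("P(cat):", round(model.predict_proba(new_image)[0, 1], 3))
```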

Similar techniques are used in large language models associated with generative AI. In these cases, LLMs are fed with massive amounts of data, which may come from the internet, research papers, databases, and so on. Based on this data, the LLM determines scores and weights associated with language, and which words are associated with one another.

As an overly simple example, a model might be asked to predict the word to come after “peanut butter and…” “Jelly” might be the next word when thinking about food categories, although “peanuts,” “honey,” or other foods could be matches. But if the model were to think of categories outside of food, words like “diet,” “allergies,” or others could fit. This leads to scores (that is, how often “jelly” is the next food word, versus “honey,” and so on), but also the weighting of such dimensions (that is, how often the next word is a food word, versus a health word, versus a geographic location, versus many other types of categories).
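
A toy Python sketch of this scoring idea follows; the counts are entirely made up for illustration, and a real LLM learns such weights across billions of parameters rather than tallying a lookup table.

```python
# Hypothetical counts of words seen after "peanut butter and" in some corpus.
next_word_counts = {
    "jelly": 900,     # food category
    "honey": 120,     # food
    "peanuts": 60,    # food
    "allergies": 40,  # health category
    "diet": 15,       # health
}

total = sum(next_word_counts.values())
for word, count in next_word_counts.items():
    # Normalize raw counts into a probability for each candidate next word.
    print(f"P({word!r} | 'peanut butter and') = {count / total:.3f}")
```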

GPUs are best suited to perform the many billions of calculations needed for LLMs to predict the next word in a query (GPT-3 has 175 billion parameters, for example). More important, Nvidia made shrewd moves to build and expand the Cuda software platform, creating and hosting a variety of libraries, compilers, frameworks, and development tools that allowed AI professionals to build their models. Cuda is proprietary to Nvidia and only runs on Nvidia GPUs, and we believe this hardware-plus-software integration has created high customer switching costs in AI that contribute to Nvidia’s wide moat.
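
To illustrate why that lock-in matters in practice, here is a minimal sketch (our own hypothetical example, with arbitrary matrix shapes) of the kind of matrix multiplication that dominates LLM workloads, written against CuPy, a NumPy-like Python library whose array operations dispatch to Cuda kernels and therefore run only on Nvidia GPUs.

```python
import cupy as cp  # CuPy dispatches to Cuda kernels, so this code requires an Nvidia GPU

# A toy "layer" step: multiply a batch of token activations by a learned weight matrix.
# Real LLMs chain thousands of such multiplications over far larger matrices.
activations = cp.random.rand(512, 4096, dtype=cp.float32)  # one batch of token vectors
weights = cp.random.rand(4096, 4096, dtype=cp.float32)     # one layer's weights

outputs = activations @ weights    # executed in parallel across the GPU's cores
cp.cuda.Stream.null.synchronize()  # wait for the GPU to finish before reading results
print(outputs.shape)               # (512, 4096)
```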

Even if a chip competitor were to build a GPU on par with Nvidia, we surmise that the code and models built on Cuda to date would not be ported over to another GPU, giving Nvidia an incumbency advantage. It is conceivable that alternatives may emerge that might never touch Cuda or an Nvidia GPU, but Nvidia has virtually no competition in this arena in 2023, so any enterprise building an LLM but waiting for alternatives might be left in the dust.

In AI inference, we believe that a variety of chip solutions will be used over time to detect the patterns and provide the output associated with AI models. CPUs handle many inference workloads today, but GPUs will likely be part of the inference equation too. Meta Platforms even indicated that it is moving its inference workloads to GPUs rather than CPUs. In a bullish scenario for GPU vendors, it’s possible that GPUs might not only dominate AI training, but the vast majority of AI inference too.

Beyond Nvidia’s AI prowess today, which we believe is exceptionally strong, we think the company is making the proper moves to widen its moat even further. Nvidia’s software efforts with Cuda remain impressive, while Nvidia is also expanding into networking solutions, most notably with its acquisition of Mellanox. We don’t want to discount Nvidia’s know-how here either. Many AI models don’t run on solo GPUs, but rather on a connected system of many GPUs running in tandem. Nvidia’s proprietary NVLink products do a good job of connecting Nvidia GPUs together to run these larger models.

Nvidia is pushing even further with its DGX solutions, priced at well over six figures, which tie up to eight GPUs into integrated systems. Nvidia is even offering DGX Cloud with hyperscaler partners, where Nvidia runs a portion of the customer’s data center to optimize AI workloads. In a best-case scenario, Nvidia might not only have the best GPUs on the market, but also the best cloud infrastructure in AI, potentially becoming a cloud computing leader that enterprises cherish even more than today’s leading hyperscalers, such as Amazon’s AWS or Microsoft’s Azure.

Looking at the competitive landscape, AMD is a well-capitalized chipmaker with GPU expertise, although we view the company as being in a position of weakness on the software front. Here, we think switching costs around Cuda will keep Nvidia ahead of AMD for the foreseeable future, although the key valuation question for both companies centers around how much business AMD can chip away from Nvidia in the years ahead.

Intel has not had much success in AI accelerators or GPUs but can’t be ruled out. Perhaps the biggest threat might come from in-house chip solutions at the hyperscalers, such as Google’s tensor processing units, or TPUs, Amazon’s Trainium and Inferentia chips, or Microsoft’s and Meta Platforms’ chips in development. There’s no guarantee that any of these chips will be superior to Nvidia’s GPUs across a wide range of applications, but it’s likely that each of these in-house chips might perform specific workloads better than a general AI GPU from Nvidia or others.

However, we believe that cloud computing companies will have to offer their enterprise customers a full menu of GPUs and accelerators so that they can run AI workloads. Amazon and Google may use Trainium and TPUs to train their own AI models, respectively, but we think they may struggle to encourage a host of enterprise customers to optimize their AI models for these in-house semis. Doing so would prevent these enterprise customers from switching among cloud providers, and enterprises are typically loath to be locked into a single vendor. Thus, enterprise customers will likely demand neutral merchant GPU vendors, and again, we foresee Nvidia remaining at the head of the pack for quite some time.