Analysis

Examining the Source of Nvidia’s Power in the AI Industry

Megan Kirkwood / Dec 4, 2025

Megan Kirkwood is a fellow at Tech Policy Press. This is the first in a series of three posts on Nvidia’s dominance in the AI industry.

Nvidia CEO Jensen Huang is recognized by President Donald Trump during remarks at the White House AI Summit at Andrew W. Mellon Auditorium in Washington, D.C., Wednesday, July 23, 2025. (Official White House photo by Joyce N. Boghosian)

In the summer of 2024, Nvidia made headlines when it took the top spot as the most valuable company in the world, its market capitalization closing at roughly $3 trillion and surpassing Microsoft. Fast forward little more than a year, and Nvidia became the first company to reach a $5 trillion market cap. While its stock has fallen from that recent crest, Nvidia currently stands as the most valuable company in the world. The majority of that value derives from its role in designing the semiconductors used for intensive computing tasks. One analyst estimates that Nvidia holds “between 70% and 95% of the market share for artificial intelligence chips.”

The story of the company’s meteoric rise usually goes something like this: CEO and founder Jensen Huang “bet that GPUs would be essential to building artificial intelligence, and he tailored his company to accommodate what he believed would be tech’s next big boom,” as The New York Times reported. Tae Kim, a senior technology writer at Barron's and author of the book The Nvidia Way, says that Huang is “able to see the future, know how the technology is going to develop years beforehand, and he positions NVIDIA to invest in the technology before it happens.” A closer look at the company’s history reveals not a crystal ball, but a persistent drive to dominate, own and operate the entire tech stack that dictates a market. The throughline in the company’s history is that Nvidia aims to set the standard for entire industries, with the ultimate vision of injecting AI into every industry and government around the world, all built on top of its technology.

In a series of posts, I will consider Nvidia’s rise to dominance as a GPU chip designer and the ways the company has vertically expanded to become an “AI factory.” The series will also look at how Nvidia uses partnerships and investments to expand across industries and sectors, including governments. While some jurisdictions have opened antitrust investigations into the company, its partnerships with governments in their push to build data centers and compute infrastructure may jeopardize such scrutiny.

Fabulous profits, fabless business model

Before it became a multi-trillion dollar company at the heart of the AI boom, most people besides gamers had likely never heard of Nvidia, and may only have encountered the term “semiconductor” during the shortage that followed the global supply chain disruptions of the COVID-19 pandemic. Nvidia’s centrality in the AI tech stack is down to its role in the crucial business of designing the graphics processing units (GPUs) that AI firms rely on to train their models. This makes it a “fabless” company: it designs chips and sends those designs to foundries for manufacturing, the largest of which is Taiwan Semiconductor Manufacturing Company (TSMC), which fabricates Nvidia’s chips. The split between fabless and foundry businesses, known as the foundry model, emerged in the late 1980s largely to increase efficiency and better allocate the massive cost and expertise needed to design and make chips as semiconductors grew more complex.

Nvidia started out selling graphics cards for personal computer gaming. Incorporated in 1993, it produced its first chip, the NV1, in 1995. In those early years, the company set forth its ambition not only to create graphics cards but also to establish its “own quirky standard” for game developers. Then, as recounted in a 2002 Wired article, software giant Microsoft created the graphics standard Direct3D, part of what became DirectX, which was built into Microsoft-powered PCs. With so many developers already using Microsoft’s standard, Nvidia’s was rendered unnecessary.

With the company on the brink of bankruptcy, Nvidia pivoted to adopt Microsoft's Direct3D standard. According to Wired, this choice resulted in a partnership that eventually positioned Nvidia to win the contract to develop the chipset for the popular Xbox gaming console – a contract “worth as much as $500 million a year.” Investors Ben Gilbert and David Rosenthal have pointed to this moment as a return “to that original quixotic vision for the company” of creating an industry, all the way down to its standards, APIs and interface. But, “they're doing it with Microsoft this time, instead of against Microsoft.”

In 2002, Nvidia released a programming language called CG, short for C for Graphics, whose development was aided by Nvidia’s acquisition of Exluna, a company that “made software rendering tools” and whose “personnel were merged into the CG project.” CG allowed game developers “to control the shape, appearance, and motion of objects drawn using programmable graphics hardware.” The language was co-developed with Microsoft and was compatible with Microsoft's HLSL (High-Level Shading Language). While HLSL only worked with Microsoft's DirectX API, CG worked with both DirectX and OpenGL, a cross-platform API for rendering graphics.

This compatibility with Microsoft’s dominant DirectX, and the similarity to Microsoft’s HLSL language, was “NVIDIA's biggest selling point.” Additionally, CG allowed developers to create graphics effects “and share them among other CG applications, across graphics APIs, and most operating systems.” Without overtly threatening Microsoft, CG was a way for Nvidia to offer competitive software features that only worked on its chips, a strategy Nvidia would later revisit.

Setting the standards

By this point, Nvidia had established itself as the industry leader, overtaking long-time competitor 3dfx, previously one of the biggest graphics companies, which Nvidia acquired in 2000. Indeed, Nvidia went on an acquisition spree in the 2000s, primarily targeting graphics and semiconductor companies, which helped cement its dominance and develop new services.

Around this time, Nvidia partnered with Advanced Micro Devices (AMD), another fabless semiconductor company. Although nowadays considered a competitor in the GPU market, AMD in the 2000s was hugely successful in the PC processor market, mostly making CPUs; its biggest competitor at the time was CPU giant Intel. Nvidia partnered with AMD to exclusively produce its nForce integrated chipset for AMD processors, a way for Nvidia to “move beyond gaming,” as the chipset was “designed to handle multimedia tasks, like theater-quality DVD playback.” However, a Wired article chronicling the partnership made it clear that this was “a move that puts Nvidia squarely in competition with Intel.” Nvidia, which had previously been complementary to Intel’s business because it needed its GPUs to be compatible with Intel’s CPUs, had now switched to working with a competitor on a competing product, putting “that partnership [...] in jeopardy.” The Wired article includes an interview with Huang, who suggested partnering with AMD was about “control.”

‘The structure of the arrangement for companies building chipsets [for Intel’s Pentium 4, single-core CPUs for desktops, laptops and entry-level servers] is so constrained that the opportunities are fleeting. They can only succeed where Intel is not,’ he says. ‘Going into that marketplace right now is a waste of our energy. We decided to go where we have the freedom to innovate. Once we build up that position and have an architecture that people recognize, then it's time to do a Pentium 4 chipset.’ In other words, he's trying to end-run Intel, as he attempted with Microsoft in 1995. This time, with a following among OEMs [original equipment manufacturers] and gamers, a team of first-rate engineers, and a powerful brand, Nvidia is far stronger.

The interview reveals the throughline of Huang’s wish to set the standards that shape entire industries. Intel had long been the dominant industry incumbent, leaving its mark even today: most data center servers, desktops and laptops are built on Intel’s x86 architecture, the “de-facto industry standard that has withstood the test of time.”

The move beyond gaming

By 2008, Huang was reported to be “turning his focus beyond gamers to a host of new customers that will need number-crunching power” from “oil companies doing deep-sea seismic analysis, Wall Street banks modeling portfolio risk and biologists visualizing molecular structures to find drug target sites,” according to Forbes. This stemmed from the recognition that Nvidia’s GPUs worked well in other applications requiring parallel processing, including scientific computing. In 2004, academics Kyoung-Su Oh and Keechul Jung wrote about the benefits of using GPUs for testing neural networks (machine learning models) used for “image processing and pattern recognition,” where “the main problem is the computational complexity in the testing stage, which accounts for most of the processing time.” Recognizing growing interest from the research community while also looking for a way to avoid commoditization, Nvidia branched out from the gaming industry.

In 2006, Nvidia announced the launch of its software technology CUDA, short for Compute Unified Device Architecture, “which helped program the GPUs for new tasks, turning them from single-purpose chips to more general-purpose ones that could take on other jobs in fields like physics and chemical simulations,” according to the Times. CUDA is a parallel computing software platform that gives programs access to the GPU to speed up processing, largely targeted at researchers. The CUDA Toolkit includes GPU-accelerated libraries and development tools, and is compatible with popular programming languages like C, C++, and Python. While CUDA did not immediately illuminate a path to profit, it was, like Nvidia’s earlier work on CG, another way to offer competitive software features that only work on Nvidia’s chips. It was also a way not only to enter a market but to create one entirely.
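To make the parallel processing idea concrete, here is a minimal, illustrative CUDA C++ sketch. It is not drawn from Nvidia’s materials, and the kernel and variable names are hypothetical; it simply shows the core pattern in which each GPU thread handles one array element, so roughly a million additions run in parallel rather than in a sequential CPU loop.

// Minimal CUDA C++ sketch (illustrative only): each GPU thread adds one
// pair of array elements, so the whole array is processed in parallel.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                           // about one million elements
    size_t bytes = n * sizeof(float);

    // Allocate unified memory reachable from both CPU and GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                         // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);                   // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Compiled with Nvidia’s nvcc compiler, a program like this runs only on Nvidia GPUs; the same pattern scales from toy examples to the matrix operations at the heart of neural network training.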

Artificial intelligence and GPUs

The fates of both Nvidia as a company and the field of AI research were bound together in 2012. This was the year computer scientists Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton entered the ImageNet image recognition competition. The competition, launched in 2010 by Professor Fei-Fei Li and graduate students at Stanford University, invited researchers to train their models on the ImageNet dataset, which contained three million images arranged into five thousand categories. Models entered into the competition were evaluated by how accurately they classified images in a test dataset of one hundred thousand images.

Krizhevsky, Sutskever, and Hinton created a neural network called AlexNet. They used a method called deep learning, meaning their neural network contained more layers than was typical, improving accuracy. According to researchers Arvind Narayanan and Sayash Kapoor, authors of the book AI Snake Oil, AlexNet had eight layers of depth, which was “almost unprecedented at the time,” and that depth required an equally intensive amount of computing power. Previous work had illustrated the benefits of Nvidia GPUs and CUDA for neural networks, and the AlexNet team made use of both.

AlexNet won the competition by a huge margin, sending shockwaves through the field of AI. In their book, Narayanan and Kapoor describe AlexNet as a moment of permanent change, cementing the dominance of deep learning for any machine learning application. The authors also point out that, in tandem, GPUs became “essential” to train deep neural networks, which benefited Nvidia “immensely from this boom.” According to Nvidia, “almost every deep learning framework today uses CUDA/GPU computing to accelerate deep learning training and inference.”

It is now widely understood that CUDA is the moat around Nvidia’s castle. Semiconductor analyst Linley Gwennap argues “that software remains the stumbling block for all companies that want to challenge Nvidia's lead in processing artificial intelligence.” CUDA essentially offers an entire software stack that is uniquely compatible with the hardware. AI Now Institute’s Amba Kak and Dr. Sarah West further explain that “proprietary CUDA compiling software is the most well-known to AI developers, which further encourages the use of Nvidia hardware as other chips require either more extensive programming or more specialized knowledge.” They argue that “CUDA serves as a software barrier to entry to Nvidia’s market, because a lively developer ecosystem of memory optimization and other software libraries has been built around Nvidia’s chips.” Aidan Pak, an analyst for the investment advisory firm Adams Street, describes CUDA as maintaining:

...a strong network effect: as more developers and organizations invested time and resources in writing laborious CUDA kernels, the platform’s capabilities and usability grew exponentially. By the time deep learning was gaining widespread adoption in the later half of the 2010s, CUDA had firmly established itself as the standard for GPU acceleration. [...] With the majority of the deep learning ecosystem explicitly optimized for CUDA, the proprietary nature of the software layer created a powerful form of vendor lock-in to NVIDIA GPUs.

In a self-reinforcing cycle, the more developers use CUDA and contribute to the knowledge base, the more it becomes central to “the ecosystem, allowing it to set standards and best practices, which further drives adoption and creates a positive feedback loop where each new adopter adds value to the ecosystem, making it more attractive to subsequent users.” CUDA and the Nvidia GPUs on which it sits are central to the AI ecosystem, with the company’s expanding hardware and software offerings further keeping its users within its proprietary ecosystem.
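As an illustration of how that library ecosystem binds code to Nvidia hardware, the sketch below is a hypothetical example using cuBLAS, one of the GPU-accelerated libraries that ships with the CUDA Toolkit. It offloads a matrix multiplication, the core operation of deep learning, to the GPU with a single library call; code written this way runs only on Nvidia GPUs, which is precisely the lock-in described above.

// Illustrative sketch: a matrix multiply delegated to cuBLAS.
// Because cuBLAS targets Nvidia GPUs only, any codebase built on
// calls like this inherits that dependency.
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    const int n = 512;                                // two n x n matrices
    size_t bytes = n * n * sizeof(float);

    // Unified memory keeps the example short; fill A with 1s and B with 2s.
    float *A, *B, *C;
    cudaMallocManaged(&A, bytes);
    cudaMallocManaged(&B, bytes);
    cudaMallocManaged(&C, bytes);
    for (int i = 0; i < n * n; ++i) { A[i] = 1.0f; B[i] = 2.0f; C[i] = 0.0f; }

    cublasHandle_t handle;
    cublasCreate(&handle);

    // C = alpha * A * B + beta * C, computed entirely on the GPU.
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, A, n, B, n, &beta, C, n);
    cudaDeviceSynchronize();                          // wait for the result

    printf("C[0] = %.1f\n", C[0]);                    // expect 2.0 * n = 1024.0

    cublasDestroy(handle);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}

A team that wanted to move to another vendor’s accelerator would have to rewrite such calls against a different library and toolchain, which is the switching cost Kak and West describe.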

Breaking the network effects?

There have been attempts to offer CUDA alternatives, such as OpenCL, an open programming framework that works across GPUs from different vendors, and OpenAI’s open-source programming language, Triton. Though it was designed to work with any GPU, Triton has long been compatible only with Nvidia GPUs. Nvidia has ensured that Triton remains compatible with its latest chip designs, keeping it integrated into Nvidia’s ecosystem rather than letting it become an outside challenger.

Kak and West also point out that “even if and when comparable offerings to Nvidia’s software stack are available, it is likely that switching costs will be at least moderately high as AI teams move to new software.” They make the point that Nvidia can leverage its scale “to reinvest in software, so it can create custom industry-specific libraries” to further entrench its software moat. Nvidia now boasts that “more than 1,600 generative AI companies are building on NVIDIA. CUDA [...] offers developers more than 300 libraries, 600 AI models, numerous SDKs, and 3,500 GPU-accelerated applications. CUDA has more than 48 million downloads.”

Looking back at Nvidia’s initial goal of building a programming standard for gaming, it appears that CUDA has surpassed that goal by becoming the de facto standard for the entire AI industry. Arguably, there is a connection between its development of CG and its building of CUDA, as both serve the original vision to “build up that position and have an architecture that people recognize.” As the company entrenched its dominance, it produced its own “Pentium 4 chipset”: the hugely popular H100 GPU and its more recent Blackwell architecture.

The next part of the series will look at how the company has embedded its dominance both vertically, through integration across the AI stack, and horizontally, across industries and governments.
