
Ironically, Nvidia’s secret weapon is its software
Let’s take it as a given that everyone else’s chips will get better…
One of the bear cases on Nvidia these days is that chip building programs at Google and Amazon (historically, two of the largest, most voracious purchasers of Nvidia GPUs) are going to continue to make progress. Google’s TPUs and Amazon’s Trainium chips will enable hyperscalers to run certain instances and workloads with silicon that costs less than a Blackwell cluster and is a more efficient draw on the data centers’ electricity.
As Ankur Crawford told us on the show last week, the type of compute needed for me to pull up my crispy oven buffalo wing recipe shouldn’t be the same as what AbbVie might need to discover the cure for male pattern baldness - surely a ten trillion dollar drug, I don’t understand what the hold up is. And, in the future, it won’t be. Some workloads will be sent to the GPUs while others are handled by ASICs that are purpose-built, cheaper to run, and good enough for the job. We’ll see hyperscalers getting a lot more focused on which chips, for which workloads, for which customers, at what time of day, etc. And so people worry that Nvidia will somehow be forced to cede market share as lesser chips become more useful to the millions of corporate customers and their needs throughout the AI ecosystem.
And I don't dismiss it out of hand. But while I've been hearing these arguments from the Nvidia bears (it's a miracle there are any left), I've been able to compartmentalize them. That $700 billion capex figure I mentioned? It's already obsolete. The big five hyperscalers are on track to spend over $600 billion on infrastructure in 2026 alone, with roughly 75% of that tied directly to AI. Goldman Sachs notes that consensus estimates have been too low two years running. The market keeps getting bigger faster than anyone models it.
Here is the part the bears keep glossing over. Google has been building its own custom AI chips, TPUs, since 2016. Nvidia's data center revenue still grew 142% last year. Those two facts coexisting tells you something important about how this market actually works. Custom silicon doesn't replace the GPU ecosystem. It runs alongside it, handling narrower workloads while Nvidia owns the rest. Anthropic, OpenAI and others have been explicit that inference demand is growing faster than anyone projected. The "we don't know how big this gets" argument stops being a hunch and starts looking like the most defensible position in the room.
If we're all going to be walking around employing agents as personal assistants and digital coworkers three years from now, it's highly doubtful we're going to run into a glut of compute, unless the efficiency of the models just absolutely explodes, enabling us to do a lot more with a lot less than we think. Also a possibility. I try not to close my mind off to these things. I won't be the first to know and neither will you.
So if we can’t be sure of the overall size of the market, then it’s okay for us to harbor some uncertainty about whether a near-monopoly like the Nvidia GPU business might cede some market share to other players. We don’t have to be sure of this happening or not. Better to stay focused on the overall size of the pizza than start worrying about whose slice is bigger or smaller than expected. We’ll eventually get to the slice-measuring part of this as the market matures. I don’t think that’s a 2026 concern.
The actual moat
Most people think Nvidia's moat is the chips. It's not. It's the software that runs on them, and that software has a 20-year head start. CUDA, which stands for Compute Unified Device Architecture, started as a Stanford grad student's side project. Ian Buck was messing around with GPUs for non-graphics computing in the early 2000s, built a programming language called Brook to make it work, and Nvidia hired him. Jensen saw immediately what this could become. The genius move wasn't the technology. It was the go-to-market. Nvidia made CUDA compatible across its entire product line, from cheap gaming cards all the way up to data center hardware. A student could learn it on a $300 GPU and apply those exact same skills to a machine running a frontier AI model. That continuity created an army of developers who knew one platform, deeply, and had no reason to learn anything else.
Here's what that army looks like today. As of this year's GTC conference, Nvidia counts 6 million developers building on CUDA, with over 7,000 applications running on the platform. That number was 1.8 million five years ago. For competitors, the picture is bleak. Switching a real production AI workload from CUDA to AMD's alternative, called ROCm, isn't a weekend project. It's a multi-month engineering effort with no guarantee it performs as well on the other side. That friction is the moat. Better chips can be designed. You cannot design away 20 years of institutional knowledge baked into millions of developers, thousands of libraries, and every major AI framework ever built.
The Azure Example
I really want to hit on this point because those of us who have been through a few cycles ought to have learned it by now: whenever we think about competition, moats, market share and the like, there’s a human element to what makes a particular tech company’s products proliferate. Sometimes the engineers and IT folks just want to keep their jobs. Or stick to what they’re already comfortable with. In the 1980s, the unofficial slogan was “Nobody ever gets fired for buying IBM.”
We have seen this movie before. A generation ago, Microsoft quietly seeded the entire Fortune 500 with IT departments full of people holding Microsoft certifications. Those certifications weren't charity. They were a calculated land grab. By the time cloud computing arrived and every company on earth needed to make a decision about where to run their infrastructure, the people making that decision already knew one ecosystem. Azure won deals not because it was always the best product, but because the person evaluating it had spent their entire career inside Microsoft's world. CUDA is running the same play, one generation later, in a much bigger market. The researchers, the engineers, the PhD students building AI systems today learned to do it on CUDA. When their companies write the checks, that history travels with them.
The new "nobody gets fired for buying IBM" is nobody gets fired for building on CUDA. It is already the default answer in every AI job posting, every university curriculum, every research lab onboarding checklist. The researchers and engineers making nine-figure infrastructure decisions five years from now are learning on it right now. By the time their employers are writing the checks, the decision is already made. This is exactly how Microsoft won a generation ago and became such a dominant force during the cloud wars. In this current technological revolution, the lock-in is already here.
CUDA is being taught in universities, required in job postings, and baked into the workflows of every serious AI research lab on earth. When a company goes to hire the team that will build their AI infrastructure, they are not getting people who evaluated all the options and chose Nvidia. They are getting people who only know one way to do it.
Disclaimer:
I have been personally invested in Nvidia for over a decade. I’ve sold some along the way (it’s a 10,000 percent-plus return, no choice) but I intend to hold my core position forever. I do not give investment advice on this site; I am sharing my point of view. You are free to make your own decisions, as always. As is the case anytime I talk about stocks, my complete compliance disclaimer applies. Have fun reading all nine million words of it, because apparently, we have decided as a society that adults are incapable of taking responsibility for their own actions. The market is rough, stocks go down, get a helmet.
Okay, I think that should cover it.

Adam Parker says Nvidia is misunderstood
Adam and Rob on The Compound and Friends this week
My friends Adam Parker (Trivariate Research) and Rob Sechan (NewEdge Capital) came on the show this week and to say that we had a blast would be an understatement. It was one of the most fun, information-packed episodes we’ve ever done.
Michael and I in studio this week
Adam wanted to talk Nvidia and its moat because he thinks people don’t fully appreciate how meaningful that’s going to be as AI spreads out and infiltrates every aspect of our lives. It’s all going to be built on the CUDA architecture that 6 million engineers have been trained upon. This should keep competing chips in a supporting role and the Nvidia GPUs as the lead performer in the show.
This is how the stock gets from $4 trillion to $10 trillion in market cap, in Adam’s view.
What do you think? Is he going to be right? You can watch and listen to the whole thing at the links below. Thanks for all the reviews of our show you’ve been leaving at Spotify, Apple and YouTube, by the way. It helps a lot and we appreciate it.


NVDA breaking out (again)

After nine months of consolidation - and three straight earnings reports in which both the results and forward guidance were mind-blowing in their scope - we are seeing the stock finally break out again. In the chart above, I am showing you weekly closes going back three years with a 50-week moving average and 14-period RSI in the bottom pane. Momentum is confirming price as the stock challenges its old highs from last summer.
It’s been biding its time in this range between $180 and $200 and then, out of nowhere, it resumed the climb. Now we’re looking at the name just below the July 2025 highs around $210.
Nvidia is a late reporter but that doesn’t matter. Because in the coming two weeks we will be hearing from its five or six largest customers. If early indications from the recent reports out of Lam Research (chip equipment) and Micron (memory chips for AI) can be believed, demand for semis is not only still in full swing, it could actually be accelerating.
What you’re seeing in Nvidia’s move this past week is a sudden psychology shift among the bulls. While the stock was consolidating, it looked like you had plenty of time to get in. What’s the rush?
The forceful break back above $200 may have changed some people’s attitudes and injected a bit of urgency into the scene. You couldn’t give it away at $180 all winter, now they have to have it at $205.
You see how price can shift the narrative. This time, it happened overnight.
Nvidia’s on my list of The Best Stocks in the Market. Sean and I may cover it for CNBC Pro this coming week.


Why Nvidia is Going to 10 Trillion Dollars
THE COMPOUND & FRIENDS
Michael Batnick and Downtown Josh Brown are joined by Adam Parker and Rob Sechan to discuss: software stocks, AI and corporate profitability, searching for value in the current market, and much more!
