Perhaps everyone always feels like they are living in an age of unprecedented progress, but this certainly feels like the most interesting decade I've been alive for. The future is both exciting and terrifying, and I think it's okay to feel both in equal measure. Despite A.I. being everywhere nowadays, it still feels like very few have fully grasped how impactful it will be on all our lives.
During our lifetimes, machines will take over the bulk of digital work, assume control of almost all vehicles and perhaps even gain sentience. There will be robots wandering around, perhaps billions of them, and there will be known cures for all forms of disease. Along the way we might experience periods of huge job displacement, large-scale accidents caused by AI mistakes, or worse, misaligned A.I. deliberately causing harm.
This progress is now in motion and can't be reset. The most fundamental flaw of the luddite argument is its assumption that it is feasible to collectively agree not to build the next powerful and interesting tool. While we might not be able to bend the arc of history to that extent, I believe that humans still have a huge role to play in determining the specifics of this technological revolution.
I believe it is important that we roll out compute quickly, so that the benefits of A.I. are diffused throughout the economy as soon as possible. At the same time, research on the frontier should be done in a sensible way, with the corporations involved held accountable, by their staff and externally, for ensuring they aren't reckless with humanity's future in pursuit of speed or the profit motive.
The Build-Out
In contrast to the software boom which has dominated investing for the past two decades, the A.I. boom is capital intensive. In the age of software, having an idea for a killer app was almost everything, and execution was about the quality of engineers, not the number of CPUs you bought. All of that has changed. Even if AGI (by the OpenAI definition: able to fully replace remote workers) were invented tomorrow, its real-world impact might be surprisingly low for several years.
That is because we will need many, many instances of the model (or agents) running concurrently. There are perhaps 800 million white-collar workers in the world. Let's say AI can replace 50% of them, and let's be generous and say that by running 24/7 and by being more efficient, a single agent can replace 10 workers. Helpfully, it also takes about 10 H100s to run Claude Code in fast mode. It might be a bit optimistic to think that AGI could run on the same amount of compute as the current frontier, but models are getting more efficient as well as smarter, so let's run with it.
This means we would require 400 million GPUs. And let's not forget, if the cost of this labour falls dramatically, the demand for it will increase dramatically too, so for lots of reasons this is probably a low number for what's required. Currently, for all the talk you've heard about the massive AI infrastructure build-out, there are 12 million H100s (and equivalents) in production.
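The back-of-envelope above can be written out explicitly. A minimal sketch, where every input is a rough assumption from the text rather than measured data:

```python
# Back-of-envelope: GPUs needed to run AGI agents at scale.
# Every input below is a rough assumption from the text, not measured data.

WHITE_COLLAR_WORKERS = 800e6   # rough global estimate
REPLACEABLE_SHARE = 0.5        # assume AI can replace half of them
WORKERS_PER_AGENT = 10         # 24/7 operation plus efficiency gains
H100S_PER_AGENT = 10           # roughly what Claude Code uses in fast mode

agents_needed = WHITE_COLLAR_WORKERS * REPLACEABLE_SHARE / WORKERS_PER_AGENT
gpus_needed = agents_needed * H100S_PER_AGENT

print(f"Agents needed: {agents_needed / 1e6:.0f}M")  # 40M agents
print(f"GPUs needed:   {gpus_needed / 1e6:.0f}M")    # 400M H100-equivalents
print("vs ~12M H100-equivalents in production today")
```

Under these assumptions the required fleet is more than thirty times what exists today, which is the point of the exercise.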
I'm not arguing that AGI is imminent, or indeed that we should be planning the build-out for when it happens; as AI enthusiasts go, my timelines are pretty long, or at least my confidence interval is much wider than many. However, I include the extreme outcome to give context to the scale of the infrastructural project required if we want to get maximum gains from this technology.
GPUs Aren't Even the Only Problem
We've all heard about the crazy revenue growth of NVDA and its meteoric rise to the largest company in the world. This is because they hold the dominant position in the design of GPUs, which happened to be ideal for the large numbers of concurrent (or "parallelised") calculations which drive LLMs. As A.I. investment and use ramped up, access to these GPUs became the primary bottleneck to scaling up. This has come with many side stories, as products auxiliary to these GPUs, such as networking cables and cooling systems, and even electricity generation, have seen massive demand spikes.
More recently, there has been an acute (and chronic!) shortage of digital memory. All chips need memory, and as A.I. demand expands it is a natural new market for memory makers. Moreover, over time memory capacity has become a more and more important part of model function. Exacerbating this effect, AI is very reliant on high-bandwidth memory (HBM), for which memory makers layer more and more DRAM wafers, so there is a multiplier on total memory use when high bandwidth is involved. The moves in memory stocks are perhaps less well known than NVDA's, but they are no less extreme, and they are still very much in flux until the market figures out when supply will catch up to demand (and at what price).
A.I. Will Eat the World, but First It Needs to Eat Money
In 2026, the world will spend roughly $1 trillion building the 20GW of datacentres that are scheduled for completion. This is double what was spent in 2025 and follows a consistent trend of doubling since 2023 (this is easy to verify by comparing to Nvidia's datacentre revenues during the period, which are an excellent proxy due to their dominant market position).
While this doubling can't go on forever, it has so far been more than justified by demand growth. The best way to measure demand growth so far, and estimate it going forward, is to look at revenue growth from the major model owners ("labs"), specifically Anthropic and OAI (Google's AI revenues are not cleanly reported). These have grown at 10x and 3x respectively over the last year. This far outstrips the 2x growth in datacentre capacity, but the labs have so far been able to compensate by increasing the share of capacity allocated to inference (at the cost of research and development). This is a natural thing to do as these companies mature, but its effect will diminish over time.
Looking forward, OpenAI predicts $300B in revenue in 2030 and Anthropic around $150B. This implies YoY growth of roughly 80% across the two during the next 5 years, using YE 2025 ARRs. Part of this growth might be explained by increasing revenues per GW, but there's only so far margins on compute can move, so the bulk of the growth will come from volume increases, that is, increasing their datacentre capacity. Even being generous with the margin growth, we can conclude that demand for datacentre capacity will continue to grow by at least 50% per year over the period, as the technology develops and diffuses into the global economy. To further contextualise this assumption, Google have declared they want to double their AI datacentre capacity every 90 days.
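As a sanity check on the implied growth rate, here is the compound-growth arithmetic. The combined YE 2025 ARR figure is an illustrative assumption, since the exact number is not stated in the text:

```python
# Implied compound growth from YE 2025 ARR to the 2030 revenue projections.
# combined_arr_2025 is an illustrative assumption, not a reported figure.

target_2030 = 300 + 150      # $B: OpenAI + Anthropic 2030 projections
combined_arr_2025 = 25       # $B: assumed combined YE 2025 ARR
years = 5

implied_cagr = (target_2030 / combined_arr_2025) ** (1 / years) - 1
print(f"Implied growth: {implied_cagr:.0%} per year")  # ~78%, i.e. roughly 80%
```

A combined ARR in the mid-$20B range is what makes the "roughly 80% per year" figure come out; a materially different starting ARR would shift the implied rate.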
The question then becomes: is there any way that supply can keep up with this demand? On the current trend it sounds easy, given that capacity has been growing at 100% YoY for the last 3 years, but on closer inspection it is much less straightforward. Let's start with the HyperScalers, Google, Amazon, Microsoft & Meta, as they have been the ones doing most of the heavy lifting to this point.
Amazon, Google and Meta all followed the trend in 2026, roughly doubling from the previous year to $200B, $180B and $130B respectively, while Microsoft has been significantly lower, growing Capex at roughly 40% YoY to $100B in 2026. The problem is that they simply won't be able to keep up this pace of growth into 2027, never mind 2030.
Let's look more closely at Google as an example. Even with the $180B in Capex, Google is expected to produce FCF of around $70-80B. The problem is they also spend $70-80B a year on buybacks and dividends. They could cancel these programs (which the market would hate) and reinvest their earnings growth, which could be as much as $50B, and maybe get to $300B for 2027, but even doing so would only deliver around 65% growth and leave them with no headroom at all in future years.
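The funding ceiling sketched above can be tallied directly. The figures below are midpoints of the approximate $B ranges quoted in the text, so the output is slightly above the rounded $300B and 65% cited:

```python
# Rough ceiling on Google's 2027 Capex under the reasoning above.
# All figures are approximate midpoints of the ranges in the text ($B).

capex_2026 = 180
shareholder_returns = 75   # buybacks + dividends that could be cancelled
earnings_growth = 50       # plausible incremental 2027 earnings to reinvest

max_capex_2027 = capex_2026 + shareholder_returns + earnings_growth
growth = max_capex_2027 / capex_2026 - 1
print(f"Max 2027 Capex: ~${max_capex_2027}B ({growth:.0%} growth)")
```

Even this everything-cancelled scenario falls well short of another doubling, which is the point: the HyperScalers' internal cashflow is running out of road.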
The same story applies to the others. Capex growth will slow to something like 50% in 2027 and drop significantly again after that. My guess is that they will be able to continue to grow their earnings at 20%+ (thanks mainly to AI!) and that they will be willing and able to grow their Capex at roughly the same rate. Therefore, I think 25% is a reasonable estimate for datacentre growth due to HyperScaler Capex. This only accounts for half the growth I've estimated the labs are going to need, so others will have to step up.
There are others in the space. First and foremost are the labs themselves. Between them, OAI and Anthropic have just raised $140B, and both plan to IPO in the next several months, which will probably free up another $250B. As their revenues grow and start to cover their operational costs, they will be able to conduct their own Capex. However, even though these sums might sound like a lot, spread over 5 years and in the context of the current spend of $1T a year, they aren't really going to move the needle.
We also have Oracle and the "Neoclouds", such as CoreWeave, Nebius and many smaller names. Between these three companies, Capex will be roughly $100B in 2026. They are growing Capex much faster than the HyperScalers, and as specialists they are ideally placed to scale up to fill the gap. But because they are starting from a smaller base, their ramp-up would need to be very significant to fill the gap left by the HSs, jumping to $350B next year and $1.5T by the end of the decade. This is achievable! With a combination of equity raises, corporate debt and project debt such as GPU financing, the public markets can make that kind of capital available to these companies IF they are sufficiently convinced that the investments will pay off.
The Sentiment Gap
Last week I listened to an interesting but ridiculous conversation between Dwarkesh and Dylan Patel, the latter being the founder of Semi-Analysis. It was interesting because the details were very well laid out and it's clear that Dylan is an expert in his business. It was ridiculous because the premise was so far-fetched. They were discussing the bottlenecks which would prevent the supply chain from delivering >200GW per year by 2030. This is ridiculous to me because, if we assume datacentre cost inflation continues at 15% a year, by 2030 it'll cost $100B per GW to deliver a new datacentre. That implies that, for capital not to be the bottleneck in their scenario (which they explored in minute detail), there would need to be $20T of capital available for investment in 2030, or roughly 15% of global GDP (current GDP grossed up by 5% p.a.).
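The scale check behind my scepticism, as a quick calculation. The compounding convention (five years from 2025 cost levels) and the current global GDP figure are my assumptions, so the output lands near, not exactly on, the rounded figures above:

```python
# What would 200GW/yr cost in 2030 at 15%/yr datacentre cost inflation,
# and how does that compare to global GDP? Rough assumptions throughout.

cost_per_gw = 1000 / 20                 # $B/GW implied by 2026: $1T for 20GW
cost_per_gw_2030 = cost_per_gw * 1.15 ** 5   # five years of 15% inflation

capital_needed = 200 * cost_per_gw_2030 / 1000   # $T for 200GW in one year
gdp_2030 = 115 * 1.05 ** 5    # $T: assumed ~$115T today, grown at 5% p.a.
share = capital_needed / gdp_2030

print(f"Cost per GW in 2030: ~${cost_per_gw_2030:.0f}B")   # ~$101B
print(f"Capital needed:      ~${capital_needed:.0f}T")     # ~$20T
print(f"Share of global GDP: ~{share:.0%}")
```

However you round it, the conclusion is the same: a 200GW/yr build rate implies annual investment in the mid-teens as a percentage of world output, which is why capital, not the supply chain, looks like the binding constraint.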
Compare that to what we are seeing in capital markets on a daily basis:
- NVDA is trading at roughly 23x reasonable estimates of 2026 earnings ($8). This puts it below blue-chip stocks whose earnings grow consistently with GDP, implying that downside risks to future earnings exceed the upside potential. In short, the market is pricing close to 0% growth for NVDA earnings past 2026.
- At Google's last earnings release, their profit and revenue numbers exceeded even the most optimistic expectations, but when they announced a Capex number far above expectations, the stock sold off. The implication here is that Google made more money than anyone was expecting, driven partially by their investments in AI to that point. Then they told the market that they were doubling down on these bets that are working better than everyone expected, and the market absolutely hated it, hated it more than it liked the good numbers.
- Oracle and the Neoclouds, especially those more aggressively financing fast expansion with debt, have seen their stocks crushed in the past 6 months, with Oracle and CoreWeave down 47% and 32% respectively.
- Memory names continue to be priced on extremely low multiples. They have rallied a lot, but nowhere near as much as they have raised their prices and increased their volumes, so the stock prices are significantly lagging the jump in earnings. This is because the market believes this spike in memory demand is temporary, and that at some point the cycle will revert and these companies will be left with a glut of supply. It is very likely that the current very acute supply-demand imbalance will ease somewhat over the next couple of years, but that is not what's being priced, which is essentially that AI demand is a fad.
Result
Unlike the HyperScalers, who have access to hundreds of billions of cashflow from their existing businesses to fund their Capex, Neoclouds are dependent on capital markets to fund their expansion. This means that the pace of their capex growth is dependent, both directly and indirectly, on market sentiment towards this type of investment.
Recently we have seen a big outperformance of NBIS relative to its peers, mainly driven by their less aggressive approach to debt financing. The market is rewarding their more conservative approach, and try as CEOs might to detach themselves from short-term share price movements, the reality is that it matters: for staff morale, for how the company is viewed by its customers and suppliers, and for their own confidence.
Then there is the more direct relationship: if your shares are trading at all-time highs, it's much easier to raise capital to pay for future investment, something that is very difficult to do with a lower market cap and is unpopular with shareholders if done below the price they bought in at. Meanwhile, in the debt markets, Oracle has already had to cancel one of the datacentres it was building for OpenAI because it couldn't source the financing.
The net result of all of this is that Neoclouds will have to grow their capacity much less quickly than they would have done in a more supportive market. Analysts estimate that they will cumulatively build 35GW of datacentres by 2030. To many, this is still a big number, but if you subscribe to my thesis that demand will continue to grow at 50% a year, while the HSs are tapped out at 25%, the gap to be filled will exceed 100GW.
Solutions
As things stand, the A.I. rollout will be dramatically underfunded, meaning users of A.I., willing customers who are happy to pay, will be constrained for years, and potentially trillions of dollars of consumer surplus will be lost. So what is the solution?
Given where the market is now, it seems almost impossible that this can be fully avoided. The 100GW shortfall I mention above implies a funding gap of $7-10 trillion. However, I think the gap can be partially filled via:
- The profit motive. Right now, labs are willing to rent from third parties like Neoclouds at roughly 25% of the cost of the datacentre build per year on 5-year deals. This allows for a 20% profit margin on the deal (spread over 5 years), plus whatever residual value the Neocloud can derive from the 5-year-old chips. This is either an incredible deal or a terrible one depending on your view of A.I.-driven demand growth over the next several years, but these terms will have to improve to entice more capital into similar investments.
- The finance community adapting to actively direct more money towards these investments. The whole goal of finance is to take society's accumulated resources (savings) and direct them towards the most productive investments. This will be achieved by channelling funds towards the AI industry and making it easy for investors to take advantage of this opportunity, while also slowly convincing sceptics within the finance industry of the value added by this monumental infrastructure build-out.
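The lease economics in the profit-motive bullet above reduce to a few lines of arithmetic; the build cost is normalised to 100 for illustration:

```python
# Neocloud lease economics: a 5-year deal priced at ~25% of build cost
# per year. Build cost normalised to 100; residual GPU value excluded.

build_cost = 100
annual_rent = 0.25 * build_cost
total_rent = annual_rent * 5            # 125% of build cost over the deal

gross_profit = total_rent - build_cost  # 25, before any residual GPU value
margin = gross_profit / total_rent      # 25/125 = 20% margin on the deal
print(f"Rent over 5 years:  {total_rent:.0f}% of build cost")
print(f"Margin on the deal: {margin:.0%} plus any residual GPU value")
```

Note the 20% figure is a margin on the rental revenue, not on the capital deployed; whether that is attractive depends entirely on financing costs and what the chips are worth in year six.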
Investing is Still Investing
Thus far I have laid out the case for why one should invest broadly in A.I., both for the world's benefit and for your own personal gain. Anyone who does this will be directionally right and should win no matter what specific investments are chosen, or how infrequently they are monitored. However, this is also an amazing period for differentiation, where those who stay close to both the technology and the markets will have opportunities for dramatic outperformance. Some of the ways we can augment returns include:
Within sectors for which AI provides tailwinds, we will find the companies with bold leaders who understand the opportunity early and act decisively to take advantage, without overstretching themselves. One thing I have learned is that it is possible to squander almost any opportunity with poor execution. You can learn so much just by taking the time to listen to the tone and content of press conferences. Ultimately, we are looking for serious leaders who are excited without being exuberant, and most importantly who know their place in the market and concentrate on delivering value to their customers, without trying to do everything and losing focus.
While fundamentals certainly drive markets in the long term, in the short term there is a lot more going on. Much of what happens is driven by buyers and sellers who are rules-based rather than acting on opinions about the latest news. There are many versions of this, but a nice recent example is the moves in SaaS names during the Iran war. There were several days where these heavily shorted stocks not only outperformed while the war worsened, but actually rallied outright. This is not because they have some inverse relationship to the global economy, but rather because on these sorts of days hedge funds are forced to 'derisk', that is, reduce the sizes of all their positions. Understanding when you are trading against someone who is being forced to reduce, versus someone who is looking at the same information as you and concluding you are wrong, can be extremely powerful.
Understanding that the market is more consistent than it is right, and preparing for the implications of this. We often refer to putting the market on a hand, as in poker: if you can deeply understand the point of view the market's price action is implying, it can help a lot to identify your own mistakes in some circumstances and to predict the market's future errors in others.
For example, I have a strongly held view that the market is still mispricing memory stocks, but it isn't good enough to simply decide that the other market participants are collectively idiots; we need to see if we can form (and seek evidence for) a reasonable thesis which explains their behaviour. In this case, I believe it is because people who have been trading these stocks for a long time, and indeed the companies themselves, have seen many demand and supply cycles where the market switches violently from undersupplied to oversupplied and vice versa. In fact, the market for memory was oversupplied as recently as 2022, leading to losses for many of these companies that year.
I believe that AI demand is different, but the market is viewing it as just another cycle. This is not a ridiculous position to take; in fact, it is my view that needs justification against the default, but nonetheless it is my view. I agree with them that the current acute shortage will not last: as suppliers continue to increase prices, eventually they will destroy enough demand to find a level where supply and demand meet. But this is where our viewpoints diverge. At that point, the market will assume, as has always been the case before, that this is the top of a cycle that will eventually lead to another period of oversupply. Whereas I believe that demand from AI is extremely robust, and that once the new level has been found, demand will hold consistently far above previous cycles, with prices settling relatively close to where they peak.
So I think it makes sense for me to be long memory stocks now, but I am also mindful that we are not yet at the point of maximum disagreement. That will come when prices stabilise and the first small price reductions are recorded. At that point, I expect the market to crush these stocks, and assuming it does, this will be an incredible opportunity. In the meantime, we will keep following the stocks and the analyst commentary surrounding them, constantly questioning our view and actively seeking disconfirming evidence.
Our Principles for Investing
Curiosity is at the core of our approach. This is why I am so keen to build relationships with industry insiders: I can't wait to peel back the curtain and see what's next for the technology. I like to think of this as the basic-research part of the job, where you are discussing topics and learning without necessarily having a direct link to how this information will be profitable. One of the reasons why this is important is that it is so hard to predict where the next opportunity will come from, and I really subscribe to Pasteur's view that "chance favours the prepared mind".
Once we get closer to trading views, the related topic of truth-seeking becomes crucially important. Opinions lead to positions, and positions intrinsically tie us to our existing opinions. We have to fight the urge to close our minds to information which might cause us to change our opinion. This is especially true once a position has gone against us, because then updating our view on new information also means accepting that we were wrong in the first place.
Finally, a careful examination of second-degree knowledge. It is important to spend time and energy assessing our confidence level in our views, bearing in mind the strength of the theory underlying each, the evidence to support it, and the degree to which we can assign a plausible explanation for why our view diverges from the market's. Even when all these things line up, it is important to keep a sense of epistemic humility. It is great to have strongly held opinions, but it is dangerous to confuse them for certainty. We are not in the business of making specific and grandiose claims about the future; our goal is to find opportunities where we can express our views for maximum return with minimal risk.
Sample Topics of Interest
Lab Moats
I am mildly concerned that the labs will struggle to create a lasting moat. I believe they can keep setting the frontier, but I wonder: will they become victims of their own success? Let's take software engineering as the current example. Right now it's worth it for many firms to pay a premium for Claude because it can do tasks that other models are simply incapable of doing. However, at the current rate of progress it seems quite likely that in the near future all the major models, including some open-source ones, will be capable of doing all SWE tasks with perfect accuracy.
At that point, what are the economics behind paying for the frontier model? With open source, I can already replace each team with a prompt engineer who produces the work of 7 or more software devs. They spend half their time working with the agents to review the work and set the next task, and the other half in meetings and piloting sessions with the users. Given that this prompt engineer is likely one of the team of 7, the other 6 of whom have been let go, it's unlikely they'll be able to command much pricing power, so I've already saved 70-80% of the costs of that team with an overall boost to productivity. It's a heavy lift for the frontier to replace this person completely, but even if it does, the bulk of the total cost savings have already been made, so the pot that additional advancements are aiming at is shrinking.
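The team-level savings arithmetic can be sketched as follows. The prompt engineer's pay premium and the compute spend are illustrative assumptions; the text gives only the 70-80% result:

```python
# Cost savings from replacing a 7-dev team with one prompt engineer plus
# open-source agents. The pay premium and compute cost are assumptions.

team_size = 7
team_cost = team_size * 1.0    # normalise to 1.0 per dev per year

prompt_engineer_cost = 1.2     # assume a modest pay premium over one dev
compute_cost = 0.5             # assumed open-source inference spend
new_cost = prompt_engineer_cost + compute_cost

savings = 1 - new_cost / team_cost
print(f"Savings: ~{savings:.0%} of the old team cost")  # ~76%
```

Even fairly generous assumptions about the remaining engineer's pay and the compute bill land in the 70-80% range, which is why the residual pot for the frontier model to capture is small.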
There is a lot of concern in markets that compute will become commoditised and all the value will accrue to the frontier models and the consumers. However, I can see the opposite occurring, where the models become commoditised, because they are all so good at many tasks that they become indistinguishable, and the value accrues at the bottleneck, which is compute. In fact, perhaps one way the frontier labs could command margin is by delivering more efficient models, producing the same output for fewer watts of datacentre capacity.
The other problem, especially for OpenAI and Anthropic, is distribution. While they have a fully differentiated model and brand loyalty, it's okay, because people will go to the effort, but Microsoft and Google make it so easy to default to them as full-service solutions. A small example is note-taking. We use Google Meet, and until recently people had been either using separate software to record sessions and take notes, or even recording meetings on their phones and uploading the audio file to Claude. But now Meet has added a button right there in the app to do all this for you. Even with a Claude subscription it's difficult to justify the hassle. I struggle to see how the standalone labs will stay relevant to businesses with such a distribution disadvantage.
AI Winners Outside of Tech
I'm keen to understand who the big winners will be outside of technology names. Here I'm thinking of companies with high wage bills for white-collar workers and also high degrees of regulatory capture. One thought is that investment banks fit the bill quite nicely, particularly those without loan books, which would suffer in the event of large increases in unemployment, especially if the job losses are among well-paid white-collar workers who previously would have had very high credit scores, and whose loans are therefore priced as extremely safe. This is a very early-stage thought and will need more time and research before it becomes a viable trade idea.
Deflation and Employment Loss
Many, if not most, new technologies are both deflationary in the specific and expansionary in the general. We all know that technology has been driving economic growth for centuries, but it is important to acknowledge that in the specific, technology is deflationary. The motor car lowered the price of transportation and killed the horse industry, but created manufacturing jobs, and over time the lower cost of transport facilitated the growth of many other industries, including tourism.
I suspect the way our economies work is that lots of small disruptions are happening all the time, creating isolated deflation and improving overall productivity via the process of creative destruction. Even if the gains come at a lag to the deflation and unemployment, at any given time the gains from previous disruptions outweigh the losses from new ones. Thus far this is pure speculation; I'm sure there are studies one could find to explore this time-lag effect, but I have not yet done so. Nonetheless, if we follow the thought, there is the inevitable question of how this process reacts to a shock that disrupts a significant amount of the economy in one go. I'm sure that the economy will adapt in time to the productivity gains, but in the short term, could the deflationary effects be large enough to affect the whole price level? And at the same time, could the job losses be visible across the whole economy?
I don't have the answers to this, but I think it is a serious question that has very widespread implications both for general human welfare and for investing.
The Future of Models
Right now the whole world seems to use "AI" and "LLMs" interchangeably. Perhaps this is correct, but I want to at least be open to the possibility that the frontier models could take other forms, or at least could include other major components, in the future.
I find compelling the argument that true intelligence requires some form of internal world model. I also like Chollet's proposition that LLMs are (I'm paraphrasing) essentially multidimensional interpolation machines that will never achieve the kind of learning efficiency or adaptive ability that humans display in novel environments.
This could all be true and actually not that functionally important. Maybe the new types of models still want to crunch massive matrices together in parallel and so the demand for GPUs marches on. Or maybe it doesn't matter too much if LLMs get stuck, perhaps they can replace 80-90% of remote workers and advance science 100 years within this paradigm. However, I suspect that there are more twists and turns before this story ends!
Call to Action
That was pretty long, so if you are still reading there's a good chance you are very interested in this project. If that's the case and you are either an experienced financial analyst or a machine learning expert in any context, please get in touch. Meanwhile if you would like to be part of this project as an investor, please reach out.