Could Alphabet’s TPUs be a Revenue Driver after Apple’s AI Announcement?

Story Highlights

Google’s TPUs are rented out to cloud customers, providing clients with flexible AI packages and contributing to the segment’s 28% year-on-year growth in Q1. These chipsets also support strong projected earnings growth, reflected in a 1.25x PEG ratio, but they may not be enough to count as a revenue driver on their own.

Apple (AAPL) recently announced that it has used Alphabet’s (GOOGL) Tensor Processing Units (TPUs) to develop models for artificial intelligence (AI). For some commentators, this represents a revelation, indicating that the industry is less reliant on Nvidia (NVDA) than originally thought. However, these TPUs are probably best viewed not as a revenue driver in their own right, but as part of Google’s broader AI agenda and potentially bright future.

Nonetheless, I’m bullish on Alphabet — best known for its search engine, Google — due to its innovative AI offerings and relatively cheap valuation. This bullishness is shared by Wall Street.

Does Apple’s TPU Choice Matter?

In a recent technical paper, Apple engineers highlighted that the iPhone maker opted for Google’s TPUs, using 2,048 TPUv5p chips for device AI models and 8,192 TPUv4 processors for server AI models. To date, companies operating with AI and machine learning (ML) have typically used Nvidia’s Graphics Processing Units (GPUs).

To some commentators, the absence of any mention of Nvidia within the report is intriguing, perhaps indicating that Google’s TPU can offer comparable performance and efficiency.

It’s also interesting because Google has a very different business model for its TPUs. Unlike Nvidia, which sells its chipsets to hyperscalers and other parts of the market, Google provides access to its TPUs through cloud services. Customers essentially rent access to the tech giant’s TPUs.

Google’s TPUs are custom-designed application-specific integrated circuits (ASICs) optimized for deep learning tasks, offering high efficiency for large-scale matrix operations. This approach requires customers to develop the necessary software within Google’s ecosystem, highlighting the strategic differences between the two companies’ offerings.

Are Google’s TPUs Comparable with GPUs?

Nvidia’s GPUs were developed for graphics rendering, while TPUs and Neural Processing Units (NPUs) are purpose-built for AI and machine learning (ML) workloads. GPUs’ parallel processing capabilities have made them versatile for AI/ML work as well. However, TPUs and GPUs each have distinct advantages and limitations in AI and ML.

TPUs are highly efficient for training large AI models, excelling in tasks requiring extensive matrix computations, and can be connected in large clusters for scalability. They offer cost-effective solutions through Google Cloud, with significant performance per dollar benefits. Nonetheless, TPUs lack the versatility of GPUs and don’t have the same mature ecosystem.

Meanwhile, by some metrics, such as FLOPS (floating-point operations per second), GPUs are considerably more cost-efficient. In fact, Jensen Huang said in March that Nvidia’s AI chips are still “so good that even when the competitor’s chips are free, it’s not cheap enough.”

So, while TPUs have utility for AI and ML tasks and can be connected in large clusters, providing the necessary computational power for sophisticated AI models, they don’t appear to have the widespread appeal and cost efficiencies of GPUs.

Investing in Google’s TPUs

Google Cloud’s revenue was up 28% year-on-year to $9.6 billion in Q1 2024, driven in part by demand for AI infrastructure, including TPUs. While Google does not disclose revenue from TPUs, supportive data and capacity growth in markets like the North Atlantic suggest that TPUs could see broader adoption.

CEO Sundar Pichai has suggested that Google’s TPU-related revenue could grow significantly due to the increasing demand for AI services. Pichai noted that Google’s AI hypercomputer, which integrates TPUs as well as Nvidia’s GPUs, is attracting a substantial number of funded AI startups and unicorns as customers.

More than 60% of funded generative AI startups and nearly 90% of generative AI unicorns are Google Cloud customers. This trend positions Google well to capture a larger market share in the rapidly expanding AI landscape, driving substantial revenue growth in the coming years.

However, it’s worth noting that Google Cloud only accounted for around 12% of the company’s revenue during Q1. While the company’s AI cloud model may gain further traction, these TPUs are just one part of the wider offering. That said, they also help Google reduce its own computing costs and limit its reliance on Nvidia.

GOOGL Stock: The Cheapest Among Big Tech

Google is among the cheapest big tech stocks, trading at 21.8x forward earnings and with a price-to-earnings-to-growth (PEG) ratio of 1.25x. The current expected annualized earnings growth rate is 17.4% over the medium term — the next three to five years.
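As a quick sanity check on those figures, the PEG ratio is simply the forward price-to-earnings multiple divided by the expected annual earnings growth rate (in percent). A minimal sketch using the numbers quoted above:

```python
# PEG ratio = forward P/E divided by expected annualized earnings growth (%).
# Both inputs are the GOOGL figures quoted in the article.
forward_pe = 21.8     # forward price-to-earnings multiple
growth_rate = 17.4    # expected annualized earnings growth, %

peg = forward_pe / growth_rate
print(f"PEG ratio: {peg:.2f}x")  # ~1.25x, matching the figure above
```

A PEG near 1.0x is often read as growth fairly priced in, so 1.25x is modest by big-tech standards.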

Is GOOGL Stock a Buy, According to Analysts?

On TipRanks, GOOGL comes in as a Strong Buy based on 30 Buys, seven Holds, and zero Sell ratings assigned by analysts in the past three months. The average Alphabet stock price target is $204.62, implying 29.3% upside potential.
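For context, the share price implied by those analyst figures can be backed out from the average target and the stated upside. The price itself is not quoted in the article, so the result below is a derived estimate, not a reported quote:

```python
# upside = (target - price) / price  =>  price = target / (1 + upside)
avg_target = 204.62   # average analyst price target, $
upside = 0.293        # 29.3% implied upside potential

implied_price = avg_target / (1 + upside)
print(f"Implied GOOGL price at time of writing: ${implied_price:.2f}")  # ~$158.25
```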


The Bottom Line on Google

Google certainly isn’t expensive for big tech, and it’s benefiting from growth tailwinds, with an expected annualized earnings growth rate of 17.4%. While TPUs are not the only reason for this growth, they are part of Google’s broader AI package, which appears to be gaining traction.
