Newsvidia

Google expands TPU supply to challenge NVIDIA AI chip dominance

Tech giants like Huawei and AMD develop competing AI chips as Google targets NVIDIA-dependent cloud providers with TPU guarantees

Google is strengthening the external supply of its self-developed artificial intelligence (AI) chips. According to foreign media on the 5th, Google has agreed to install its AI chip, the TPU (Tensor Processing Unit), at the New York data center of cloud provider FluidStack. The TPU is a dedicated AI accelerator developed by Google to speed up deep learning. Additionally, Google is reportedly pursuing similar negotiations with Crusoe, which is building an NVIDIA chip-dedicated data center for ChatGPT developer OpenAI, and CoreWeave, a data center company invested in by NVIDIA.

Google has developed AI chips before but has largely reserved them for its own services, such as Google Cloud. At the same time, Google has been NVIDIA’s largest customer, purchasing AI chips in bulk and leasing them to Google Cloud customers. However, as the AI chip market continues to grow and companies’ reliance on NVIDIA deepens, Google aims to expand the external supply of TPUs and monetize them.

The media reported that Google’s targets are primarily new cloud providers heavily dependent on NVIDIA chips, adding that, to promote TPU adoption, Google has offered to guarantee up to 3.2 billion dollars if FluidStack cannot cover its operational costs.

In the AI chip market dominated by NVIDIA, big tech companies, including Google, are challenging the status quo. The aim is to reduce dependence on NVIDIA, which holds nearly 80–90% of this growing market. It amounts to a declaration of AI chip independence, an attempt to dismantle NVIDIA’s stronghold as the absolute leader. However, critics point out that most of these companies’ products still lag significantly behind NVIDIA’s in technological capability, making them less competitive.

◇ Huawei and AMD Chase NVIDIA’s Lead

NVIDIA virtually monopolizes the AI chip market. It made bold early investments in GPUs for AI and built an AI chip ecosystem around its CUDA software platform. Because so many developers build on NVIDIA chips running CUDA, it has become a de facto industry standard, creating a strong lock-in effect.

However, several companies have recently been developing AI chips to strengthen their own technological capabilities. NVIDIA’s most formidable competitor in the AI chip market is Chinese company Huawei. Huawei’s “Ascend” series is a high-performance AI chip usable for model training, and Huawei is independently developing an alternative framework to CUDA for China’s domestic AI ecosystem.

Huawei’s technological prowess is considered so threatening that Jensen Huang, NVIDIA’s CEO, reportedly told the U.S. government:

If we don’t export the lower-performance AI chip ‘H20,’ Huawei’s technological development will accelerate.

Another Chinese company, Alibaba, also announced last month that it has developed an AI chip within China.

In the U.S., semiconductor design company AMD offers the “Instinct MI” series of AI chips. In late 2023, it released the “Instinct MI300” series, which targets large-scale workloads such as large language models (LLMs). Although it arrived more than a year after NVIDIA’s comparably positioned H100, it has been recognized for its strong performance. Even so, AMD’s market share remains low: in the data center segment, AMD held 4% as of March this year, a negligible level compared to NVIDIA’s 92%.

◇ Rise in Customized Chip Production

Meanwhile, an increasing number of companies are producing custom ASICs (Application-Specific Integrated Circuits) optimized for specific AI tasks rather than general-purpose AI chips like NVIDIA’s GPUs. Although these ASICs are less flexible than the general-purpose chips made by NVIDIA, AMD, and others, their lower cost makes them popular among cloud providers.

Amazon produces training-focused chips such as “Trainium” and inference-focused chips such as “Inferentia.” While Google and Amazon also use NVIDIA’s AI chips, their strategy is to develop in-house chips that are faster, cheaper, and optimized for their own workloads. Recently, Anthropic has started using Amazon’s Trainium chips, and Amazon is pitching its chips to customers outside its own businesses. OpenAI is collaborating with Broadcom to develop its own “XPU” AI chip, targeting a release next year. Apple, Microsoft, and ByteDance are also developing AI chips.

※ This article has been translated by Upstage Solar AI.
