Technology

Satya Nadella, chief executive officer of Microsoft Corp., during the company’s Ignite Spotlight event in Seoul, South Korea, on Tuesday, Nov. 15, 2022.
SeongJoon Cho | Bloomberg | Getty Images

Microsoft is emphasizing to investors that graphics processing units are a critical raw material for its fast-growing cloud business. In its annual report released late on Thursday, the software maker added language about GPUs to a risk factor for outages that can arise if it can’t get the infrastructure it needs.

The language reflects growing demand among the top technology companies for the hardware needed to provide artificial intelligence capabilities to smaller businesses.

AI, and specifically generative AI, which produces human-like text, speech, video and images in response to people's prompts, has surged in popularity this year after startup OpenAI's ChatGPT chatbot became a hit. That demand has benefited GPU makers such as Nvidia and, to a lesser extent, AMD.

“Our datacenters depend on the availability of permitted and buildable land, predictable energy, networking supplies, and servers, including graphics processing units (“GPUs”) and other components,” Microsoft said in its report for the 2023 fiscal year, which ended on June 30.

That’s one of three passages mentioning GPUs in the regulatory filing. GPUs were not mentioned once in the previous year’s report. Such language has not appeared in recent annual reports from other large technology companies, such as Alphabet, Apple, Amazon and Meta.

OpenAI relies on Microsoft’s Azure cloud to perform the computations for ChatGPT and various AI models, as part of a complex partnership. Microsoft has also begun using OpenAI’s models to enhance existing products, such as its Outlook and Word applications and the Bing search engine, with generative AI.

Those efforts and the interest in ChatGPT have led Microsoft to seek more GPUs than it had expected.

“I am thrilled that Microsoft announced Azure is opening private previews to their H100 AI supercomputer,” Jensen Huang, Nvidia’s CEO, said at his company’s GTC developer conference in March.

Microsoft has begun looking outside its own data centers to secure enough capacity, signing an agreement with Nvidia-backed CoreWeave, which rents out GPUs to third-party developers as a cloud service.

At the same time, Microsoft has spent years building its own custom AI processor. All the attention on ChatGPT has led Microsoft to speed up the deployment of its chip, The Information reported in April, citing unnamed sources. Alphabet, Amazon and Meta have all announced their own AI chips over the past decade.

Microsoft expects to increase its capital expenditures sequentially this quarter, to pay for data centers, standard central processing units, networking hardware and GPUs, Amy Hood, the company’s finance chief, said Tuesday on a conference call with analysts. “It’s overall increases of acceleration of overall capacity,” she said.

