HPE Walks Away From Risky $700 Million AI Deal

This probably happens more than we know, but sometimes OEMs and ODMs walk away from big deals because something is fishy. And that happened to Hewlett Packard Enterprise in its fourth quarter of fiscal 2024, a period that ended on October 31, as it “de-booked a large order” for $700 million worth of AI gear because, as chief executive officer Antonio Neri put it, “we had concern with a specific customer.”

There are only a few hundred customers on Earth that can shell out that sum of money for an AI system, so whatever the concern was, it was probably not the financials of a Global 1000 company. Neri did not elaborate on the situation, but he did say that HPE booked $1.2 billion in new AI system orders in the fourth quarter, which more than made up for the loss of that deal. HPE has amassed a cumulative deal book of $6.7 billion of AI systems since the first quarter of fiscal 2023, but in the fourth quarter it only added a net $500 million in new orders because it walked away from that $700 million deal.
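
Here is a minimal back-of-the-envelope sketch in Python of how those bookings reconcile; the figures come from the paragraph above, and the netting of the de-booked deal against gross orders is our own inference:

```python
# Back-of-the-envelope reconciliation of HPE's Q4 F2024 AI bookings,
# using the figures cited above; the netting logic is our inference.
gross_new_orders = 1.2   # $ billion booked in Q4 F2024
de_booked_deal   = 0.7   # $ billion order HPE walked away from

net_new_orders = gross_new_orders - de_booked_deal
print(f"Net new AI orders in Q4: ${net_new_orders:.1f}B")            # $0.5B

book_entering_q4 = 6.7 - net_new_orders   # implied cumulative book before Q4
print(f"Implied cumulative AI deal book entering Q4: ${book_entering_q4:.1f}B")
```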

With AI systems costing so much money, HPE can ill afford to go through the hassle of buying parts and building a massive supercomputer that might have 15,000 or so top-end Nvidia “Hopper” H100 or AMD “Antares” MI300X GPU accelerators and then not have the customer pay for it. That is a lot of capital to lock up and have to go to court over.

Despite that drama, which must have been gut-wrenching, it had no immediate effect on HPE’s AI systems business, which includes the booking of revenue for the 2.79 exaflops “El Capitan” supercomputer at Lawrence Livermore National Laboratory. The $600 million contract for El Capitan includes three related systems as well as non-recurring engineering fees, and we don’t know how much of that $600 million goes into the AI systems revenue stream or into the Server division at HPE. Call it somewhere around $500 million.

HPE booked $1.5 billion in AI system revenues in the quarter, a factor of 4.1X higher than in the year-ago period and up 16 percent sequentially, which means the other AI system deals that HPE did in Q4 F2024 accounted for a cool $1 billion.
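
As a quick sanity check on those figures, here is a short Python sketch using the reported $1.5 billion, the 4.1X year-on-year factor, the 16 percent sequential growth, and our own roughly $500 million estimate for the El Capitan slice:

```python
# Rough sanity check on HPE's Q4 F2024 AI system revenue, using the
# reported figures plus our own ~$500M estimate for El Capitan.
q4_ai_revenue  = 1.5    # $ billion reported for the quarter
el_capitan_est = 0.5    # $ billion, our estimate of the booked El Capitan slice

other_deals = q4_ai_revenue - el_capitan_est
print(f"Other AI deals booked in Q4: ~${other_deals:.1f}B")     # ~$1.0B

# Implied prior-period revenues from the growth rates cited above
year_ago  = q4_ai_revenue / 4.1     # ~$0.37B in Q4 F2023
prior_qtr = q4_ai_revenue / 1.16    # ~$1.29B in Q3 F2024
print(f"Implied Q4 F2023 AI revenue: ~${year_ago:.2f}B")
print(f"Implied Q3 F2024 AI revenue: ~${prior_qtr:.2f}B")
```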

HPE’s backlog for AI systems stood at $3.5 billion as the quarter came to a close, up a mere $100 million from Q3 F2024 and from the year-ago period, too. As best as we can figure, HPE has racked up $5.54 billion in AI systems sales over the past eight quarters, with a tiny slice of that revenue coming from software and services.

This business started out slowly because Nvidia was too busy directly supplying the hyperscalers and cloud builders with GPUs to give OEMs like HPE, Dell, and Lenovo very many of its very popular compute engines. But supplies are ramping now, and the OEMs are getting a piece of the action at tier two clouds and big AI startups, and are looking to expand into governments and enterprises next. We think that only about 10 percent of HPE’s AI system sales in the past two years have been to traditional enterprises, and that when enterprises adopt GenAI, HPE will be able to squeeze more profits out of many small deals than it can out of a few larger ones.

As Neri explained it, the big GenAI model builders and startups tend to want the latest-greatest GPUs from Nvidia, which in this case means “Blackwell” B100 and B200 GPU accelerators and all of the different ways that systems can be built from them. But in the enterprise and among governments building sovereign AI systems, Neri said that N-1 and even N-2 GPU generations are just fine for the job. We have been saying this for years, and retrieval augmented generation and fine tuning have been invented expressly to make it so companies can get great results with smaller models on smaller clusters.

A thousand customers buying a hundred GPUs is going to be a lot more profitable than one customer buying 100,000 GPUs. The laws of hyperscale economics have proven this for two decades now.
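
To make that reasoning concrete, here is a purely hypothetical Python illustration; the GPU counts come from the sentence above, but the per-GPU system price and the gross margins are invented assumptions for the sake of argument, not anything HPE or Nvidia has disclosed:

```python
# Purely hypothetical illustration of why many small deals can out-earn one
# giant deal. The GPU counts come from the text; the price per GPU and the
# gross margins are invented assumptions, not HPE or Nvidia disclosures.
price_per_gpu_system = 40_000    # assumed $ per GPU, fully configured in a system

hyperscale_margin = 0.05         # assumed thin margin on one 100,000-GPU megadeal
enterprise_margin = 0.25         # assumed fatter margin on small enterprise deals

one_big_deal     = 1 * 100_000 * price_per_gpu_system * hyperscale_margin
many_small_deals = 1_000 * 100 * price_per_gpu_system * enterprise_margin

# Same amount of hardware moved in both cases, very different gross profit
print(f"Gross profit, one 100,000-GPU deal:  ${one_big_deal / 1e6:,.0f} million")
print(f"Gross profit, 1,000 x 100-GPU deals: ${many_small_deals / 1e6:,.0f} million")
```

Both scenarios move the same 100,000 GPUs, but under these assumed margins the long tail of smaller deals carries the bigger profit pool, which is the point Neri is making.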

With that out of the way, let’s talk about HPE at the top line and then drill down into the datacenter.

In the quarter, HPE had record sales of $8.46 billion, up 15.1 percent year on year. Operating income was $693 million, up 36.7 percent thanks mostly to cost cutting and riding the revenue up. HPE posted a $733 million gain on sale of an equity interest in the H3C partnership in China, which helped push net income up by 2.7X to $1.34 billion.

Nearly a year ago, HPE announced a $14 billion acquisition of Juniper Networks, and that deal is winding its way through the regulatory approvals in the United States. It has been approved by the European Union, the United Kingdom, India, South Korea, and Australia. China was not mentioned in the list rattled off by Neri on the call with Wall Street analysts going over the numbers for Q4 2024, but Neri has said previously that he is not worried about regulators in the Middle Kingdom and does not seem to be worried about US regulators, either.

Neri said the Juniper deal will likely close early in 2025, and that big pile of cash you see on the HPE balance sheet in the chart above is aimed at closing that deal and making HPE a bigger player in wired networking.

Thanks to the AI server boom, HPE posted record sales in its Server division, with revenues up 31.7 percent to $4.71 billion and up 10 percent sequentially. Operating income for the Server division was $545 million, up 51.4 percent and representing 11.6 percent of revenues.

But don’t get too excited. If you peel the AI systems out of the Server division, then you get $3.2 billion in sales for traditional servers, which is down a fraction of a point from the year-ago period but up 7.3 percent sequentially. HPE was bragging on the call about double digit order growth for traditional servers, but orders are not revenues; rather, they are revenues that will land in some mix of the current quarter and future quarters.

Two thirds of those traditional server sales were for ProLiant Gen11 machinery, which is the latest-greatest iron from HPE.
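
Here is that carve-out arithmetic, sketched in Python with the figures from the last couple of paragraphs:

```python
# Peeling the AI systems out of HPE's Server division, using reported figures.
server_division_revenue = 4.71   # $ billion in Q4 F2024
ai_systems_revenue      = 1.5    # $ billion of that was AI systems

traditional_servers = server_division_revenue - ai_systems_revenue
print(f"Traditional server revenue: ~${traditional_servers:.1f}B")   # ~$3.2B

# Roughly two thirds of traditional server sales were ProLiant Gen11 machinery
gen11_revenue = traditional_servers * 2 / 3
print(f"Implied ProLiant Gen11 revenue: ~${gen11_revenue:.1f}B")
```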

The Hybrid Cloud division, which mixes HPE datacenter storage, GreenLake utility-priced systems, and any software or gear sold under subscription into a hairball, had sales of $1.58 billion, up 18 percent, with operating income of $122 million, up 2.4X from the year ago period.

Within this group, HPE has sold 3,000 of the Alletra MP storage arrays to date; these arrays mix HPE server hardware and flash drives with Vast Data’s file system. The Alletra line is the fastest-growing high-end storage product in HPE’s history.

On the GreenLake front, HPE added another 2,000 customers using its utility-priced, on-premises hardware, bringing the total to 39,000 customers.

Finally, we like to give a longer historical perspective on the core HPE systems business, which you can see here through all of the incarnations of HP and HPE and their financial categories:

As best as we can figure, this core systems business grew by 24.9 percent to $6.92 billion, and operating income came to $728 million, up 58.8 percent and representing 10.5 percent of revenues. This includes servers, storage, switching, systems software, and financing, and this is not a bad business at all even if it is a tough business.
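
And one last bit of back-of-the-envelope arithmetic in Python, checking the growth and margin figures for this core systems business against the reported numbers:

```python
# Back-of-the-envelope check on the core HPE systems business figures above.
core_revenue   = 6.92    # $ billion in the quarter
core_op_income = 0.728   # $ billion operating income

# Implied year-ago figures from the growth rates cited in the text
year_ago_revenue   = core_revenue / 1.249      # up 24.9 percent year on year
year_ago_op_income = core_op_income / 1.588    # up 58.8 percent year on year

op_margin = core_op_income / core_revenue      # ~10.5 percent of revenues
print(f"Implied year-ago core revenue:  ~${year_ago_revenue:.2f}B")
print(f"Implied year-ago op income:     ~${year_ago_op_income:.2f}B")
print(f"Core operating margin this qtr: {op_margin:.1%}")
```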


7 Comments

  1. It is impossible to deceive boss Antonio; he is a rare one-man mixture of Italian and milonga-tango country. Otherwise, Antonio is a wise and lucky man. As I said: impossible.

  2. It’s going to be a challenge attempting to live in an N-1/N-2 world when the silicon vendor turns the crank on N every 12 months and greatly curtails production of previous generations.

    • I agree. But N-1 is happening right now. N-2 is perhaps difficult, unless the big clouds dump their older iron onto the secondhand market.

  3. On the subject of enterprise procurement: “Neri said that N-1 and even N-2 GPU generations are just fine for the job. We have been saying this for years, and retrieval augmented generation and fine tuning have been invented expressly to make it so companies can get great results with smaller models on smaller clusters. A thousand customers buying a hundred GPUs is going to be a lot more profitable than one customer buying 100,000 GPUs. The laws of hyperscale economics have proven this for two decades now.”

    Absolutely. I see this in channel data constantly: enterprise standardizes on established, well-known, and proven product, and the volume of secondary resale is large because, let’s face it, enterprise knows how to save a buck.

    Specific to Nvidia dGPUs, and commercial GPGPU in particular: in the worldwide market and among independent developers, not everyone has deep pockets, and on the standard CUDA platform Nvidia products have at least three, and as many as five, lives as resale hand-me-downs.

    Nvidia worldwide channel accelerator volume leaving Q3 shows massive availability of the Ada L40S. Make an offer. It is not Hopper, but it is said to approach the A100, it has been in high demand for the last two years, and off-lease rack servers and appliances are available now:

    GH200 = 7.42%
    H200 = 0.17% (I believe rejected, with buyers waiting for Blackwell)
    H100 = 26.52%
    H800 = 2.47%
    H100 SXM in autonomous driving systems = 0.30%
    AD102 L40S = 57.79% (the most popular product in terms of volume; said to be lower power, with no FP64)
    AD102 L40 = 3.57% (I have linked a page below on the difference between the L40S and the L40 and their raw graphics performance)
    AD L20 = 0.55%
    AD L4 = 1.08%
    AD L2 = 0.13%

    https://vast.ai/article/comparing-nvidia-l40-vs-l40s-and-more

    Mike Bruzzone, Camp Marketing
