HPE expands GreenLake portfolio with AI cloud for LLMs


Bengaluru: HPE has entered the AI cloud market with the introduction of HPE GreenLake for Large Language Models (LLMs). With this move, the company expands its HPE GreenLake portfolio, giving customers ranging from startups to Fortune 500 enterprises on-demand access to LLMs through a multi-tenant supercomputing cloud service.

The service lets customers use their own data to train and fine-tune a customised model and gain real-time insights based on their proprietary knowledge. It empowers enterprises to build and bring to market a range of AI applications, integrate them into their workflows, and unlock business- and research-driven value.

With HPE GreenLake for Large Language Models (LLMs), enterprises can privately train, tune, and deploy large-scale AI on a sustainable supercomputing platform that combines HPE's AI software and supercomputers. The company will deliver the new offering in partnership with Aleph Alpha, a German AI startup and its first partner for the service, to provide users with field-proven, ready-to-use LLMs for use cases requiring text and image processing and analysis.

The service will include access to Luminous, a pre-trained large language model from Aleph Alpha, which is offered in multiple languages, including English, French, German, Italian and Spanish.

The new offering is the first in a series of industry and domain-specific AI applications that HPE plans to launch in the future. These applications will include support for climate modelling, healthcare and life sciences, financial services, manufacturing, and transportation.

“We have reached a generational market shift in AI that will be as transformational as the web, mobile, and cloud,” said Antonio Neri, HPE’s President and CEO.

“HPE is making AI, once the domain of well-funded government labs and the global cloud giants, accessible to all by delivering a range of AI applications, starting with large language models, that run on HPE’s proven, sustainable supercomputers,” added Neri.

Neri added that organisations can now embrace AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models at scale and responsibly.

HPE GreenLake for LLMs runs on an AI-native architecture uniquely designed to run a single large-scale AI training and simulation workload at full computing capacity.

The offering will support AI and HPC jobs on hundreds or thousands of CPUs or GPUs at once. This makes AI training significantly more effective, reliable, and efficient, producing more accurate models and allowing enterprises to accelerate their journey from proof of concept to production and solve problems faster.

HPE is accepting orders now for HPE GreenLake for LLMs and expects availability by the end of calendar year 2023, starting in North America, with availability in Europe expected to follow early next year.
