HPE empowers users with artificial intelligence through advanced computing and cloud solutions

By Bernard Marr

HPE's decades of experience in providing high-performance computing (HPC) and storage solutions mean the company is well-positioned to provide its customers with the infrastructure they need for their transformative AI initiatives.

The global tech giant aims to enable its customers to create their own “private AI cloud”, covering a range of AI use cases from autonomous driving to large language models (LLMs) and life sciences.

I recently sat down with Mark Armstrong, HPE's VP and GM of Artificial Intelligence for EMEA. We discussed how organizations that are ready to deploy artificial intelligence (AI) can benefit from partnering with infrastructure providers, as well as the challenges they face. Here are some of the highlights of our chat, which you can watch in full here.

Optimization of workload management

In any enterprise AI deployment, getting the infrastructure and architectural elements right is always critical. As Armstrong tells me, that’s where HPE generally starts: “The first aspect is to make sure that we’re really working with our customers to determine what the right architecture is and calculate the workloads they need. And frankly, I think we have the strongest team in the world in terms of being able to understand how technology works and optimize it for applications.”

Combined with its depth of AI expertise, HPE's long history of working in the HPC domain uniquely positions it in this market:

“If you look at the workloads required for generative AI, those requirements are very similar to high-performance computing, which is why we’ve been successful over the last two years in entering this new and future market of generative AI,” Armstrong tells me.

After all, HPE established its reputation by building and deploying some of the world’s most advanced supercomputers, such as its Cray and Apollo series.

It also created a number of high-performance storage solutions that are needed to feed real-time, streaming data into machine learning algorithms.

Beyond all of these technologies, Armstrong tells me that at the core of the organization’s strategy to help customers optimize their workloads is its customer-centric approach, which focuses on ensuring that solutions are tightly tailored to customers’ needs.

“Using the experience we gain from deploying these massive systems helps us make sure we’re designing the right solutions for the problems customers want to solve with generative AI,” he says.

Generative AI for critical business functions

An example of this partnership can be seen in the collaboration between HPE and Aleph Alpha. Building on HPE's Apollo 6500 Gen10 Plus HPC platform and deploying HPE’s machine learning development environment helped the German startup create the explainable and auditable AI solutions that private companies as well as government agencies need today.

In this case, the concept of “data sovereignty” was critical. The idea was to provide a solution that would enable sensitive data to be processed and acted upon without compromising the privacy or business value of the data. This means that lawyers or healthcare professionals, for example, can benefit from accessible analytics made possible by generative AI.

Aleph Alpha aspires to lead the creation of the next generation of AI, demonstrating this with an ambitious strategy to make LLMs available across all major European languages. HPE supports that vision with the compute architecture and solutions the company needs, Armstrong tells me.

Other notable partnerships include work with Oracle Red Bull Racing to help design and simulate Formula One cars and with Volvo through its Zenseact autonomous driving subsidiary.

Flexible models

Another central aspect of HPE's strategy in this area is its commitment to offering flexible purchase and usage models.

“We couple [HPE's] capabilities with the ability of our customers to take these systems and use them in different ways,” Armstrong tells me.

This includes capital purchases as well as a number of as-a-service models, including LLM-as-a-service through the GreenLake platform. All of this contributes to HPE’s vision of enabling its customers to create their own private AI cloud.

“It was about making sure customers could consume this generative AI in a way that was right for their business,” Armstrong says.

This means HPE's customers have a range of options for purchasing and integrating their AI infrastructure, so they can adapt to whatever use case or scale they need.

Of course, none of this is new for HPE, which has been offering everything from supercomputers to printer ink refills as a service for decades. The extension of these models into its generative AI strategy shows that it considers the technology one of its core offerings today.

How far will generative artificial intelligence go?

Key to HPE's strategy, Armstrong tells me, is the idea that generative AI will greatly simplify the process of businesses using their data to create highly customized services for customers.

“I’d suggest that in the next few years we’ll see big innovations coming out of all industries in terms of what generative AI can offer,” he says.

This means more models are tuned to deliver company- or industry-specific results, as well as a greater understanding of how data can be kept secure while being used to deliver highly specific customer services.

“I think it’s going to be a big step forward, and I do see it emerging — we’re already starting to see it, but I think we’re going to see it more in the next 24 months,” Armstrong says.

You can click here to see my full interview with Mark Armstrong, Vice President and General Manager of Artificial Intelligence in EMEA for Hewlett Packard Enterprise.

