
HP Enterprise’s Bill Mannel Talks HPC, Intel, Big Data, and More 

Bill Mannel, HP Enterprise

As vice president and general manager of high performance computing and big data at Hewlett Packard Enterprise for almost a year, Bill Mannel oversees two fast-growing and complementary areas within both HP and the industry at large. In July, HP and Intel announced they were partnering to help propel HPC into mainstream enterprise usage. By combining Intel's scalable HPC framework and HP's solutions framework in HP Apollo 8000 systems, the two companies plan to target specific vertical markets such as financial services and manufacturing.

About a month after the announcement, EnterpriseTech sat down with Mannel for a half-hour phone interview in which he discussed topics such as the HP-Intel partnership, the National Strategic Computing Initiative, and enterprise adoption of HPC. Following is an edited version of the conversation:

EnterpriseTech: Could you give us a brief overview of how Hewlett Packard views the high performance computing market?

Bill Mannel: The big news about HPC at Hewlett Packard is the fact that we see it as a very, very important portion of our market going forward. Last March – it's been almost six months now – we actually created a business unit around high performance computing and big data. The focus there is not only to exploit the high performance computing we're very much accustomed to – HPC related to engineering and science and all the things we normally associate with high performance computing – but to drive HPC into the enterprise.

EnterpriseTech: Did you consider including cloud, given the relationship between cloud and HPC?

Mannel: We actually have cloud as a separate division because it spans so many markets. HP has offerings specifically directed at customers that want HPC managed on their behalf, consumed on demand. We either house their equipment in one of our datacenters – in some cases we have our PODs, modular datacenters: a container we drop onto a customer's site and fully manage – or we'll fully manage the hardware in a customer's datacenter. We have quite a few customers moving in that direction completely or using it as burst capacity. We work very closely with our brothers and sisters in the cloud [general business unit] at HP.

EnterpriseTech: Who are you generally dealing with at customer or prospective customer sites? Is it IT, engineering, marketing…?

Mannel: It's really a combination. We talk to a lot of folks in the product divisions who are interested in understanding more about their products – everything from the engineering of the product to how successful it is in the marketplace. We do interface with the manufacturing portion of organizations. Manufacturing facilities are becoming much more automated, with lots of sensors gathering data that is typically used to maintain factory flows; that's somewhat related to the quality portion of the business, where they're using big data to understand more about creating good-quality products for their customers. Finally, marketing is very highly engaged, intercepting the data that's coming in: what customers are buying, what's the next thing they can sell them.

If you've purchased a car lately, you know almost all of them are big data producers and upload a lot of data. A major automotive manufacturer recently placed a very large order for a Hadoop cluster it's going to use to bring home all the data that's coming in. Vehicles can upload up to 1 terabyte a day – driving habits, locations, parameters related to the vehicle – and in some cases even link that into a complete customer experience over the long term. You'll see it in general business units, quality control, manufacturing, marketing and, of course, IT itself. It's hard to talk to a leader in IT who's not being pressured to lead a big-data project of some sort or another.
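To put that scale in perspective, here is a minimal back-of-envelope sketch. Only the up-to-1-terabyte-per-vehicle-per-day figure comes from the interview; the fleet size and retention window are hypothetical assumptions, and 3x is the common HDFS default replication factor.

```python
# Back-of-envelope sizing for connected-vehicle telemetry landing in Hadoop.
# Only the "up to 1 terabyte a day" per-vehicle figure comes from the
# interview; fleet size and retention are hypothetical assumptions.

vehicles = 10_000              # hypothetical connected fleet
tb_per_vehicle_per_day = 1     # "up to 1 terabyte a day" (interview)
retention_days = 30            # hypothetical retention window
hdfs_replication = 3           # common HDFS default replication factor

daily_ingest_tb = vehicles * tb_per_vehicle_per_day
raw_capacity_tb = daily_ingest_tb * retention_days * hdfs_replication

print(f"Daily ingest: {daily_ingest_tb:,} TB")
print(f"Raw HDFS capacity for {retention_days} days: "
      f"{raw_capacity_tb:,} TB (~{raw_capacity_tb / 1_000:,.0f} PB)")
```

Even on these modest assumptions, a month of replicated telemetry runs to hundreds of petabytes, which is why a dedicated Hadoop cluster, rather than the general-purpose IT estate, ends up absorbing it.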

You hear a lot in the media about big data projects that have not been successful, and that's because they start off as, 'Oh, we need to do Hadoop,' as opposed to, 'What do we hope to get from our big data…?'

EnterpriseTech: How would you go about resolving the sometimes-strained relationship between HPC and traditional IT departments?

Mannel: One of the challenges IT has had traditionally – and especially of late – is that it's been thought of as a cost center as opposed to a value creator for the corporation. As a cost center, you look for ways to consolidate, to economize, to standardize – to simply run your corporation, from an IT perspective, around a particular infrastructure. That infrastructure might not be the ideal infrastructure on the HPC side of the house.

Apollo 8000

I had breakfast with the head of IT at a major manufacturer back in early June, and this is a really great example of some of the stressors between IT and HPC. This is a very large HP blades customer. They use blades for typical IT back-office requirements and for product design work; engineers were using blades for analysis, computer-aided engineering, and test work. If you go back in time, HPC tended to be off on its own – they had their own special machines and IT didn't touch them. This IT director said the HPC portion of his customer base came to him and said, 'We need to be running faster processors, not only to get faster business results but because a significant amount of budget goes to ISVs, and they are licensed per core.' They need maximum performance per core, which has the impact of creating more heat and needing more power. IT was not equipped to handle those very high-end processors. They came to us and said they needed a more purpose-built solution, which led them to the Apollo 8000 line. It's a water-cooled system that can run the fastest, highest-wattage processors Intel provides.
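The per-core licensing pressure Mannel describes is easy to quantify. The interview establishes only that ISV licenses are priced per core, so the fee, throughput target, and per-core performance numbers in this sketch are hypothetical illustrations.

```python
import math

# Why per-core ISV licensing favors fewer, faster cores.
# Every number below is a hypothetical illustration; the interview
# establishes only that licenses are priced per core.

LICENSE_PER_CORE = 3_000     # hypothetical annual ISV fee per core (USD)
TARGET_THROUGHPUT = 1_000.0  # arbitrary units of simulation work per day

def annual_license_cost(perf_per_core: float) -> tuple[int, int]:
    """Return (cores needed to hit the target, yearly license spend)."""
    cores = math.ceil(TARGET_THROUGHPUT / perf_per_core)
    return cores, cores * LICENSE_PER_CORE

slow_cores, slow_cost = annual_license_cost(perf_per_core=10.0)
fast_cores, fast_cost = annual_license_cost(perf_per_core=16.0)

print(f"Slower cores: {slow_cores} cores -> ${slow_cost:,}/yr")  # 100 -> $300,000
print(f"Faster cores: {fast_cores} cores -> ${fast_cost:,}/yr")  #  63 -> $189,000
```

On these assumptions, hitting the same throughput with fewer, faster cores cuts the license bill by more than a third – which is why the extra heat and power of high-clock parts can be worth absorbing.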

We just did a proof of concept, and it had the same or lower TCO than a general-purpose architecture. The customer's in the process of signing the contract. It shows you can do more with special-purpose hardware than with general-purpose. IT, because of its drivers toward commonality and lower cost, had chosen the route of more general-purpose hardware, but the needs of HPC were for more special-purpose solutions. In this case, the customer was looking at savings of $1 million per year in power costs alone from the new solution because it uses, essentially, water to cool. In fact, this customer is looking at a novel design where they'll put the Apollo 8000 in a shed without water-cooling towers, because Apollo can work in a completely room-neutral environment – it doesn't have to worry about the ambient temperature of the room it's in.
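A rough way to sanity-check a seven-figure power savings is through PUE (power usage effectiveness: total facility power divided by IT power). The interview cites only the roughly $1 million per year figure; the load, rate, and PUE values below are hypothetical, but they show how cooling overhead compounds at datacenter scale.

```python
# Water vs. air cooling economics via PUE (facility power / IT power).
# All inputs are hypothetical; the interview cites only ~$1M/yr savings.

HOURS_PER_YEAR = 8_760

def annual_power_cost(it_load_kw: float, pue: float, usd_per_kwh: float) -> float:
    """Yearly utility cost for an IT load plus its cooling overhead."""
    return it_load_kw * pue * HOURS_PER_YEAR * usd_per_kwh

it_load_kw = 2_000   # hypothetical HPC IT load
rate = 0.10          # hypothetical $/kWh

air = annual_power_cost(it_load_kw, pue=1.7, usd_per_kwh=rate)    # air-cooled room
water = annual_power_cost(it_load_kw, pue=1.1, usd_per_kwh=rate)  # warm-water cooling

print(f"Air-cooled:   ${air:,.0f}/yr")
print(f"Water-cooled: ${water:,.0f}/yr")
print(f"Savings:      ${air - water:,.0f}/yr")  # ~$1.05M/yr on these assumptions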

EnterpriseTech: Are power and cooling becoming a bigger concern?

Mannel: We're seeing that more and more, particularly in those portions of the world where power is at a premium. If you look at Japan, if you look at Europe, we are seeing that trend. If you want more information about the Apollo 8000, one customer is the National Renewable Energy Laboratory …, which worked with us to develop the Apollo 8000, and they are in the process of expanding that system.

EnterpriseTech: Of course, HP made news recently with another partnership: the deal with Intel...

Mannel: We announced an alliance with Intel back in the July timeframe, and it's probably the most important initiative I'm working on with my team. We're developing differentiated architectures for different markets so we can provide accelerated solutions. We're looking at 2 + 2 = 6 or 2 + 2 = 8 as we optimize the architecture.

If you look at the way Hewlett Packard worked in the past, we would take Intel technology at face value and integrate it into our platforms, making certain tradeoffs around certain features more or less on our own along the way. Now we are engaged engineer-to-engineer on a regular basis … we incorporate that feedback into the next design, and we have a very good iterative process for that. The end result is differentiated solutions that allow us to address particular market problems. We've actually chosen three markets, plus one: oil and gas, which is a big opportunity for Intel and HP; financial services; and life sciences. Those are the three core markets, and we're working on how best to apply HP and Intel technology together in them. The area we added recently was government; we used that initiative as the impetus for Intel and HP to make government a focus area of ours. I've got a team of my top customer-facing people as well as engineers working with an Intel team of similar makeup to start looking at different designs we could deploy based on this initiative. The headline is developing differentiated architectures and providing solutions that include things like ease of use for a broad variety of individuals.

EnterpriseTech: Can we see any results of this alliance today?

Mannel: We're willing to enter non-disclosure agreements with customers. Intel and HP today have a differentiated roadmap, which we do share with customers under NDA. That's available today. As we move toward the supercomputing show in November and beyond, we'll be more open about what we're working on together.

President Obama's HPC initiative is expected to drive investment and innovation.

EnterpriseTech: What impact do you think President Barack Obama's National Strategic Computing Initiative will have on HPC investment and adoption?

Mannel: We've gone back and talked to people we know in government, and they're obviously excited. One thing I can say, which I think is very positive, is that the government is directing this not just at creating a few leadership machines that might be installed at a few places, but is actually making a point of making HPC more broad-based and easier to use – really creating a capability and looking at different ways of expanding it. We talked about cloud: how do you make it more available to more people at attractive economics and, at the same time, make it more attractive to use? The government agencies are just coming out with their ideas, and I think we'll see more of that as we go forward. If I've got my information correct, they'll actually have a short window to come back with proposals suggesting how they would implement a variety of initiatives. And there are things in there about workforce education as well. I have not talked to specific commercial customers but, from high to low, it's got something for everyone in it. I think it's a great initiative, very important for the country at large.

EnterpriseTech: Do you see more widespread adoption of HPC?

Mannel: I would say, based on what I've seen in some larger opportunities in education and research, a lot of datacenter directors are trying to appeal much more broadly to a new class of users – the marketing departments, the social science portions of a university or research center – so I can see a much broader appeal from that standpoint. We see RFPs that include not only the typical HPC architectures but also Hadoop and a NoSQL database to put results in, and we're seeing more of that as we go forward.

EnterpriseTech: Where is HPC technology heading, do you think?

Mannel: In terms of new technology in HPC, I think that because of this cross-pollination, if you will, between HPC and more traditional IT, you'll see more virtualized environments in HPC. That has not typically been an approach HPC datacenters have subscribed to – there's a lot of overhead with virtualization that they don't want to pay for. But more and more technological advances have dropped that overhead, so if you're running a datacenter, virtualization is much more of a possibility. That is one additional bit of technology we'll see applied more and more to HPC as we go forward.

The processors we have today are very, very powerful and can do a lot of work, so more and more the gap is in I/O. Being able to get memory closer to the processor is better for performance. The problem is getting data off the machine after I run my simulation: the simulation might run several weeks, but getting the data out of the machine might also take several weeks. You'll see more architectures on the HPC side come to resemble in-memory architectures. I think you'll see more and more storage become very local to the processors and become a way to store results locally versus on NAS.
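The weeks-to-drain-a-machine point is plain transfer arithmetic: time equals data size divided by sustained bandwidth. The output size and link speeds in this sketch are hypothetical illustrations of the gap Mannel describes, not figures from the interview.

```python
# Transfer time = data size / sustained bandwidth.
# Sizes and bandwidths are hypothetical illustrations of the I/O gap,
# not figures from the interview.

SECONDS_PER_DAY = 86_400

def transfer_days(data_tb: float, bandwidth_gbps: float) -> float:
    """Days to move data_tb terabytes at a sustained gigabits-per-second rate."""
    bits = data_tb * 1e12 * 8                     # terabytes -> bits
    return bits / (bandwidth_gbps * 1e9) / SECONDS_PER_DAY

results_tb = 500  # hypothetical output of a multi-week simulation

# Congested shared NAS link vs. node-local flash:
print(f"NAS at 1 Gb/s sustained: {transfer_days(results_tb, 1):.0f} days")   # ~46 days
print(f"Local flash at 40 Gb/s:  {transfer_days(results_tb, 40):.1f} days")  # ~1.2 days
```

On those assumptions the shared-NAS drain really does take weeks while node-local storage takes about a day – the case, in miniature, for keeping results close to the processors.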


About the author: Alison Diana

Managing editor of Enterprise Technology. I've been covering tech and business for many years, for publications such as InformationWeek, Baseline Magazine, and Florida Today. A native Brit and longtime Yankees fan, I live with my husband, daughter, and two cats on the Space Coast in Florida.
