Covering Scientific & Technical AI | Tuesday, October 8, 2024

AI Debrief: Penguin President Pete Manca Talks OriginAI

In this enlightening interview, I sat down with Pete Manca, President of Penguin Solutions, to discuss the company's innovative OriginAI solution.

As part of the new Executive Debrief series, we explore the performance enhancements and rapid deployment capabilities that OriginAI offers to customers. Manca elaborates on the evolving landscape of artificial intelligence, highlighting specific use cases and the importance of fine-tuning models for enterprise applications.

With a focus on both hardware and software advancements, this conversation sheds light on how Penguin Solutions is positioned to lead in the AI market and deliver significant value to its clients.


Kevin Jackson: Hi Pete, I hope you're doing well. Thanks for sitting down with me and thanks to everyone joining us today for our new Executive Debrief series, where we'll be conducting interviews with tech decision-makers. Pete Manca is here from Penguin Solutions. I'd like to ask some questions about the new OriginAI solution that your organization is putting out, and maybe some general questions about your role as president. Sound good to you?

Pete Manca: That sounds great, Kevin, thanks for inviting me. I'm really excited to talk about the OriginAI story today.

Kevin Jackson: Delightful. So just starting a little basic, can you elaborate on the performance improvements customers can expect with the new OriginAI solution? What's changed and how will that translate for your customers?

Pete Manca: Sure. I think in order to explain that, I need to give a little backdrop on how we deliver solutions to our customers. We fundamentally have two ways of delivering capabilities. One is a more custom option and one is a more packaged option. The custom option is exactly what it sounds like. We give the customer the ability to choose what GPU they want, what network they want, what storage they want, how they want it packaged, and how they want it racked up, whether we're working with a partner or not, like Dell or Supermicro, for example. We give the customer multiple options, and that's really worked well for us. However, the downside of that offer is that it can take longer to install and get up and running because there are so many options.

The other way that we provide solutions to customers is packaged offers where the equipment, the rack and stack, and the configurations are already predefined. That is OriginAI. OriginAI comes out in small configurations starting at 32 systems, or 256 GPUs, all the way up to 16,000 GPUs. Customers can choose small, medium, and large configurations. When we talk about performance, there isn't an absolute performance difference between a custom or an OriginAI solution from a technology point of view, but there is a performance difference in time to value. Customers will be able to get access to their AI infrastructure faster in a more packaged way, delivered to them, already racked and stacked and configured so they can get up and running quickly and start generating revenue much more quickly than they could from a custom offer.

Kevin Jackson: Just digging a little deeper, I was reading about the Factory Burn In and the integration environment to validate AI cluster performance. How does Penguin ensure production readiness and what metrics are used?

Pete Manca: Regardless of whether we're selling OriginAI or a custom solution, we'll take that into the factory. All the components will be racked and stacked. We'll get the elevations correct, do the network configurations, storage configurations, and the day zero bring-up. We'll install firmware, BIOS, and the operating system to the customer's specifications. Then we'll run it and burn it in for days, taking out any early failures. We'll monitor the system through our Clusterware software, making sure that it's performing to the capabilities that the customer expects. Once we get it burnt in, we will literally ship these systems in-rack for the customers. When it shows up on site, all we have to do is the cabling around networking and storage and maybe any management networking as well. This gets the customer up and running much faster than drop shipping components at a customer site and building it on site. That's a lot more error-prone than this factory burn-in process that we have. That's one of the added values that Penguin really has. We've been doing this for years, building solutions for customers, burning them in, racking them, stacking them, and shipping them prebuilt and pre-racked. That's a tremendous added value that we offer.

Kevin Jackson: Lovely. And speaking of customers, are there any specific use cases that you're excited about? Any OriginAI implementations that you'd like to speak to?

Pete Manca: Sure. We've announced a couple recently, like our Voltage Park customer, which is an OriginAI customer. But let me talk again in a broader sense, and then I'll home in on the answer to your question. We see the market evolving in waves. Wave one really is what we're in right now, which is the large language model type systems where we're talking thousands, if not tens of thousands of GPUs being leveraged to train large language models. The Llamas and the ChatGPTs are examples of those types of models, and that's an exciting part of the market. But we see the market evolving. Wave two to us is more around enterprises where enterprises actually fine-tune their models for specific use cases. Think about healthcare and radiology, for example, or imaging. There are a lot of different use cases out there. Gaming is one that we're actually seeing some progress in today. The fine-tuning of these models for specific use cases in the enterprise is quite an exciting opportunity for us. Then take it to the third wave, which is inferencing at the edge, which I think is going to be potentially the biggest wave when we finally get all these inferencing engines running at the edge and data being processed and decisions being made at the edge. That's another great use case for OriginAI, where customers can purchase the smaller variants and put those out in edge locations.

Kevin Jackson: You were just talking about phases. And that actually leads me to my next question. What are you looking forward to here? What are you excited to see with where OriginAI solutions can go in the next few months, maybe the next few years? What are you looking for on the horizon?

Pete Manca: I'm going to circle back and double down on the enterprise and the fine-tuning of the models because that's where I see the real value happening for end users. Imagine a world where fine-tuning these models allows for very specific use cases. I mentioned a couple earlier around radiology or image sensing or gaming. This is where customers will see the impact of AI. I know we're seeing some of it today with these big large models around ChatGPT, for example, where you can do some interesting prompting and get some answers back, but those are pretty generalized. I'm really excited about the wave where fine-tuned models come in, enterprises can target their solutions towards certain use cases and certain outcomes, and customers will get real advantage around that. I think OriginAI is really supremely positioned to serve that market.

Kevin Jackson: More generally for Penguin, what are you most excited about in your new role as president? Where do you want the company to go as a whole under your leadership?

Pete Manca: That's a great question. I'm very excited about the opportunity at Penguin for a number of reasons. One, obviously, AI is a very hot market. I think that goes without saying, and we're very well positioned with our products and our processes to take advantage of that market. The other thing that really excites me, and we haven't really talked about it much yet, is when I came into the company, I learned that we have a treasure trove of software that supports and enables these AI infrastructures. The investment we're making in our software capabilities is really exciting to me. We have packages like Clusterware that allow either our managed service providers or our managed service employees, or the customers to leverage that for deploying, building, and monitoring their systems. We have a new module that we recently announced around AI management, where we can do predictive analysis and predictive fault analysis, which is pretty exciting. We're using AI within our own products to manage AI. We have a cloud product that allows customers to burst their use cases to the cloud if they run out of capacity on-prem. We have remote management software. We have this really great underlying set of software that we want to exploit going forward and extend the capabilities of so we can offer additional value to our customers. If I step back and look at it, we've got a great hardware story, we've got a great software story that's evolving, and we have a great story around services where we can help the customer from day-zero bring-up, build, and deployment all the way through ongoing operations, where we can manage the customer's environment for them. It's really a full-stack set of offerings that we have within Penguin, and that's what excites me the most about this opportunity.

Kevin Jackson: Of course. I mean, that sounds very exciting. I believe your customers are going to get a lot of value out of this. You know, we've talked about a lot here, but is there anything I missed, anything that you'd like to speak to on this product or Penguin as a whole?

Pete Manca: No, I think we covered a lot of the important stuff here. I'll just double down and say that I think we're all aware that we're still in the early part of this AI game. There will be ups and downs, of course, like there are in any new market. But I don't think there's any denying that AI is here to stay, and it's going to have a tremendous impact on people's lives going forward. Penguin, again, is just really well positioned to provide solutions to our customers that allow them to provide capabilities and outcomes to their customers, which is really exciting to be in that position.

Kevin Jackson: Of course. Well, I've learned a lot today during our conversation. As always, Pete, thanks so much for joining us for our first Executive Debrief interview. This was just a really interesting update, and I'm sure our readers will feel the same. So again, thanks so much for sitting down with me, Pete.

Pete Manca: Yeah, thank you for the time. I really appreciate the conversation and look forward to talking more in the future.


This AI Debrief is the first in a monthly series.

AIwire