AI Debrief: Digital Twins and AI in Next-Gen Nuclear Reactor Operations
In this interview, I spoke with Christopher Ritter, department manager of Digital Sciences and Engineering at Idaho National Laboratory. We discuss the potential for using digital twins and AI to make the operation of next-generation nuclear reactors safer and more efficient.
Kevin Jackson: Hey everyone. Thanks for joining us on another edition of the AI Debrief. Today we're talking with Chris Ritter, director of the Digital Innovation Center of Excellence at Idaho National Laboratory. Chris, thanks so much for speaking with me today.
Christopher Ritter: It is great to be here, Kevin. This will be fun.
Kevin Jackson: I was lucky enough to catch your talk on AI-based control at the HPC-AI user forum at Argonne National Laboratory, and you spoke a lot about your work with digital twins and reactors. Can you explain the concept of a digital twin and its significance within the nuclear energy sector? What are the benefits and the challenges?
Christopher Ritter: Yeah, that is a great question. So in the nuclear energy sector, we're working on these microreactors. If you think about nuclear as you know it today, you think of really large facilities that take up quite a bit of space, right? And you're physically familiar with the stacks and where they're located.
One newer kind of technology that the Department of Energy and the Department of Defense are investing in is the microreactor. Microreactors are factory-fabricable and can be shipped by rail, so they're a lot smaller. That's cool from a factory fabrication perspective, but it introduces some new challenges, right? Namely, how can we control those reactors with more autonomy? We could perhaps have a remote fleet of operators running those reactors in real time. To do that, we need to get to near-autonomous operation.
Digital twins are generally defined as a living virtual model. So it's a physical asset and a virtual asset working together, communicating bidirectionally in real time to operate and predict how something is going to run. You could think about a simple example, like an autonomous car, right? There's a model running in your car that is constantly predicting the world around it and then controlling the car based on its surroundings in near real time. That allows you to navigate the streets. In a nuclear setting, you could imagine that technology using autonomy to control a facility, with humans right in the loop as well, to give you the safest operation. But also, if there's a connection delay or something like that, you always know that you have this autonomous agent running on that reactor in near real time along with the asset. So that's where we see the benefit of digital twins for nuclear.
Some other challenges we have to think about with digital twins are computational, right? We ultimately need to build, in my mind, multiple models. You're probably familiar with physics-informed neural networks, and we're using those approaches. We're also using data-driven models, and we use them in combination, because, as you know, AI models can be tricked, but if you use multiple models together, we've found that gets really great accuracy. So that's one challenge: how can you have a multi-model architecture that can predict how the asset is going to behave and control it in real time?
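To make that multi-model idea a bit more concrete, here is a minimal Python sketch of the pattern Chris describes: two independent predictors whose outputs are cross-checked, with the result flagged for human review when they disagree. The model functions, state fields, and tolerance are placeholders invented for illustration; they are not INL's actual models.

```python
def physics_model(state):
    # Placeholder for a physics-informed prediction (e.g., a reduced-order
    # thermal model): simple relaxation of temperature toward a setpoint.
    return state["temp"] + 0.10 * (state["setpoint"] - state["temp"])

def data_driven_model(state):
    # Placeholder for a trained, data-driven regressor predicting the same quantity.
    return state["temp"] + 0.09 * (state["setpoint"] - state["temp"]) + 0.05

def predict_with_crosscheck(state, tolerance=1.0):
    """Run both models; average them, and flag the result if they disagree."""
    p_phys = physics_model(state)
    p_data = data_driven_model(state)
    disagreement = abs(p_phys - p_data)
    return {
        "prediction": 0.5 * (p_phys + p_data),  # simple two-model ensemble
        "disagreement": disagreement,
        "trusted": disagreement <= tolerance,    # route to a human if False
    }

state = {"temp": 300.0, "setpoint": 320.0}
print(predict_with_crosscheck(state))
```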
And then another obvious challenge is connection to the internet or the cloud. Today, you'll find reactors aren't really connected to the cloud or to the internet. We got the experience of doing that with the AGN-201 reactor at Idaho State University, which is about 45 minutes south of where I'm currently located. It's an older reactor, like 1960s technology, these AGN-201s. They were $100,000, so almost affordable for a regular person to buy one. They were going to put them in high schools all around the country, and you're like, what could you do with that? Well, we worked with Idaho State researchers to digitize that reactor and put it on a digital DAQ, then worked with them to stream that DAQ data in real time to the cloud and do our AI predictions in the cloud in near real time. That kind of cloud streaming is really the cutting edge.
You won't see the reactor near you connected to the cloud today, but you will see some of these university reactors connecting up. So those are some of the challenges and opportunities we're seeing today.
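As a rough sketch of the cloud streaming pattern Chris describes, the loop below reads a DAQ sample, posts it to a cloud endpoint, and pulls back a near-real-time prediction. The endpoint URL, payload fields, and sampling rate are hypothetical; a real deployment would also need authentication and a hardened network path.

```python
import time
import requests  # third-party HTTP client (pip install requests)

CLOUD_ENDPOINT = "https://example.invalid/reactor-twin/predict"  # hypothetical URL

def read_daq_sample():
    # Stand-in for reading one sample from the digitized DAQ,
    # e.g., control rod position and core temperature.
    return {"timestamp": time.time(), "rod_position_cm": 12.3, "core_temp_c": 41.7}

def stream_once():
    sample = read_daq_sample()
    # Push the sample to the cloud twin and get back its prediction.
    response = requests.post(CLOUD_ENDPOINT, json=sample, timeout=2.0)
    response.raise_for_status()
    return response.json()  # e.g., {"predicted_core_temp_c": 41.9}

while True:
    try:
        print("forecast:", stream_once())
    except requests.RequestException:
        print("connection delay; falling back to local monitoring")
    time.sleep(1.0)  # hypothetical 1 Hz sampling
```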
Kevin Jackson: I'm glad you mentioned the AGN-201 reactor. Can you expand on that a little bit? What can an analog reactor from the 1960s tell us about current and future reactors?
Christopher Ritter: Yeah, that's a great question. For one, it's real. It's a real fissile reactor, so it still has radiological elements, even if the core is a plastic core. And even though it's a little smaller than the reactors you're familiar with, it has a regular nuclear control panel and it is under an NRC license, so it has all the ingredients of a larger reactor, but in a much smaller sense. The way I would think about it is, typically, when you teach a kid how to ride a bike, you don't put them on a mountain bike the first time, right? You give them a tricycle or something like that. We do the same thing with nuclear operators. We typically teach them the first time while they're at school, right? A lot of people are going to learn on a test reactor that is small, at this scale. The AGN-201 is five watts, by the way, so think about that for just a second--five watts, not five kilowatts, five watts. So it's very, very low power, almost zero power, and it's a safe environment for a human to learn, which I think also makes it a safe environment for AI to learn. That is why we're really interested in these university reactors at this stage of maturity, because we can do it in a very safe way and mature these algorithms before deploying them more broadly.
Kevin Jackson: Gotcha, gotcha. And you know, I'm a big proponent of clean energy from nuclear reactors, and I'm well aware of how safe these machines are, but how can digital twins contribute to enhancing nuclear security and safety?
Christopher Ritter: That's a great question. If you think about it, today there's a lot on the humans. There are some automatic controls in facilities and reactors today, but you can imagine, right? If anyone's ever taken a fault tree or probability class, even in statistics: if you have only humans, then you get the best of humans. And humans are pretty good, right? As Elon Musk learned with Tesla's factories, it's like, wait, humans are underrated.
Humans can do a lot of things really well, but if you imagine pairing a human with AI, you can get a higher degree of reliability, right? Because humans are not perfect, and AI may not be perfect in every case. If you have the combination of both, you kind of have the best of both worlds, in my mind. Now, I will say as a caveat, we still need to do testing and experimentation to see what the balance is of human and AI in the loop. We have not yet done the validation experiments to prove that the two combined actually get you that higher reliability. And I think there's definitely a human factors element to this: how is the AI communicating with the human and reporting things?
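Chris's fault tree intuition can be illustrated with a one-line probability calculation. The failure rates below are made up, and the math assumes the human and the AI fail independently, which is exactly the kind of assumption the validation experiments he mentions would need to test.

```python
# Toy illustration: probability that both the human and the AI miss an event,
# assuming independent failures. These rates are invented, not measured.
p_human_miss = 0.01   # human misses 1 in 100 events
p_ai_miss = 0.05      # AI misses 5 in 100 events

p_both_miss = p_human_miss * p_ai_miss   # 0.0005
reliability = 1.0 - p_both_miss          # 0.9995

print(f"human alone: {1 - p_human_miss:.4f}, human + AI: {reliability:.4f}")
```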
So, another autonomous car example. I think everyone's trying to figure all this out right now on the car side: how do we alert the human? If you think of the lane keep assist on a car, it's going to beep at you when you're getting outside the lane, and it's kind of a guardian system to help you. Clearly that's adding value to the user, and I would imagine, if there have been human factors studies, they would show that it's better than not having that beep. But then there are other cars, and I have a couple, that don't beep at you and disengage automatically, without alerting the user that the AI or the autonomous system has disengaged itself. Those types of little things, I think, are going to have to be worked out over time. We're going to have to do some experiments and studies to see what the best user interface is to alert people to what the AI is doing, and then also have the human communicate back to the AI what they're doing. So there are definitely more studies and research to be done.
Kevin Jackson: Right, and you know, speaking of the human factor that you mentioned there, I saw in your presentation that a digital twin can "predict future temperatures of heat pipe thermocouples" and that the digital twin can then "use predictions to send control requests to the human-machine interface." Can you expand on this? And based on what you were just talking about, what kinds of human-centric checks and balances are involved in this process?
Christopher Ritter: Yeah, that's a great question. So for that particular experiment, that was a couple of years ago, and it used a facility called MAGNET, which is a nonnuclear heat pipe testbed, and it did not run in the cloud, but ran locally in that environment. And so there are a couple of things in the mix here.
The data is coming off that facility in real time and coming to a local computational setup . . . basically just laptops, right? There, inside that facility and on those machines, they're forecasting 10 minutes out what's going to happen, and then the AI algorithms are using that forecast to send a control request back to the system.
Now, your question of what kind of safeguards are in place: one, those control requests that are sent over go to a secondary computational system that validates those inputs to make sure they're within certain boundary conditions and not going to cause any kind of weird errors on the machine. Two, if you remember, we have these HoloLens capabilities, right? So you can put on a HoloLens and see in real time what's happening, kind of like a hologram right through the machine, and you can get alerts as to what the machine learning algorithm is doing.
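Here is a minimal sketch of that two-stage pattern, with hypothetical variable names and limits: one component turns a forecast into a control request, and a separate validator refuses anything outside its boundary conditions before it reaches the human-machine interface.

```python
from dataclasses import dataclass

@dataclass
class ControlRequest:
    heater_setpoint_c: float  # hypothetical actuator for a heat pipe test

def propose_control(forecast_temp_c, target_temp_c=650.0, gain=0.5):
    """AI side: turn a 10-minute-ahead temperature forecast into a request."""
    correction = gain * (target_temp_c - forecast_temp_c)
    return ControlRequest(heater_setpoint_c=target_temp_c + correction)

def validate_control(request, low_c=500.0, high_c=700.0):
    """Independent validator: enforce boundary conditions before actuation."""
    if not (low_c <= request.heater_setpoint_c <= high_c):
        raise ValueError(f"rejected: {request.heater_setpoint_c:.1f} C is out of bounds")
    return request

forecast = 640.0  # predicted thermocouple temperature 10 minutes out
request = propose_control(forecast)
safe_request = validate_control(request)  # only validated requests move forward
print(safe_request)
```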
These are early experiments of what that looks like, but that's one approach where the user is always in the loop with what's happening from an AI perspective, and you have a secondary computational system that's validating and checking what's going on. You can actually see a little bit of this in your car, too. If you have lane keep assist, that's one system, and in most cars there's a separate system that is also trying to keep you in the lane, and those two systems are independent. I've actually seen the lane keep assist get alerted by the other system that the car is getting outside the lane, which is kind of funny, but it makes sense if you think about it in those terms. So a validator step in the control space is one approach. Does that make sense?
Kevin Jackson: Yeah, absolutely. And you know, we spent a lot of time talking about how digital twins can affect the operation of these reactors. But what are the potential cost savings associated with digital twins and nuclear energy? How can digital twins impact the design and licensing phases of these next-generation reactors?
Christopher Ritter: Absolutely. That's an area I'm pretty passionate about right now. I've had the experience of working on a very large nuclear reactor program before, and the design process is extremely complicated. Think of 300 engineers working together, and think of a lot of paper. There's a tool, it's Microsoft Word, you may have heard of it, and it's very popular in the design of nuclear power plants today. Why is that a problem? In my opinion, when you use Word documents at scale and you're trying to design something extremely complicated with tens of thousands of parts, you have one document that describes one system and another document that describes another. Those interfaces are really hard to manage. When you're on a smaller project, how do you do it? Well, you keep it in your head, right? You can kind of memorize where things are. The more complicated the system gets, the harder it is for one person to keep it in their head. So we create lots of meetings to try to get these people to talk together and get the interfaces correct. But in the real world, there are a lot of changes that happen.
Maybe in part of your supply chain you can't get a particular part. Maybe a law changes in the time span that you're designing your plant, and you have to account for that. Or maybe we thought you were going to have a million gallons of water, but now you're going to have 500,000 gallons, and you need to deal with that change, right? So maybe you're going to have fewer modules in your plant design, or things like that you need to worry about. So you have to account for that change.
How do we do it? Historically, there's a concept called digital engineering, and what that means is, if you think about it kind of like a spider web or a mind map, you link all the things together. So when one thing changes, it ripples through and tells you, hey, wait, you've got a problem here, here, here, and here, and you need to go fix that. It's the computer giving you an alert that you have a problem. Where I think we can go as a next step is to think about the problem completely differently.
The way it kind of works now is that humans generate the requirements for a plant in Microsoft Word or other documents like that, and then another team, a drafting team, builds the CAD models, and then another team runs what we call analysis codes. There are lots of them, but imagine seismic analysis: we'll shake the plant virtually to see what's going to happen. All of that occurs in a bit of a loop, if you will, and it takes a long time to go through each change. You take one small change to a requirement, and it ripples through that whole design, and a lot of different people have to work together to effect that change. So what if we flipped it completely and said, let's give the computer an objective to solve? We want 300 megawatts electric, we want it in this amount of space, we have this much water, and maybe we have this much heat in the area, environmental factors we need to account for. Those are your constraints. Let the computer solve that. What does that mean? Let it design the CAD model for you, let it run all the analysis codes for you, and then give you options. Then the human can pick from those options and run some of their own analysis to check the work and make sure it makes sense.
But think about the problem completely differently, and then, if you're familiar with DevOps for software, it's almost like you could do DevOps for nuclear. You could have this validation system that checks everything and rapidly iterates. Now, all of a sudden, you're going to get a design that's been thoroughly vetted, both by AI and by humans, so you wouldn't have as many design errors, which we think is huge, right? A lot less rework to do, and also the ability to think about what makes the most sense for a particular area, for a particular community, and you get an optimal design at the end, too.
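As a toy version of that objective-and-constraints idea, the search below enumerates hypothetical plant configurations, keeps the ones that satisfy the constraints, and ranks the survivors by a simple cost proxy for a human to review. The design variables, constraint values, and cost model are all invented for illustration; a real pipeline would generate CAD and run analysis codes at each step.

```python
from itertools import product

# Hypothetical design variables: number of modules and module power (MWe).
module_counts = range(1, 13)
module_powers = [25, 50, 75, 100]

# Hypothetical constraints of the kind described: target output, footprint, water.
TARGET_MWE = 300
MAX_FOOTPRINT_HA = 40
WATER_GALLONS = 500_000

def footprint_ha(n_modules, power):
    return n_modules * (2.0 + power / 50.0)   # invented footprint model

def water_needed(n_modules, power):
    return n_modules * power * 1_200          # invented water model (gallons)

def cost_proxy(n_modules, power):
    return n_modules * (80 + 1.1 * power)     # invented cost model ($M)

feasible = []
for n, p in product(module_counts, module_powers):
    if n * p < TARGET_MWE:
        continue  # does not meet the power objective
    if footprint_ha(n, p) > MAX_FOOTPRINT_HA or water_needed(n, p) > WATER_GALLONS:
        continue  # violates a site constraint
    feasible.append((cost_proxy(n, p), n, p))

# Hand the human a ranked list of options to review, rather than a single answer.
for cost, n, p in sorted(feasible)[:3]:
    print(f"{n} x {p} MWe modules, ~${cost:.0f}M (proxy)")
```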
So how much could that save? Well, we wrote a paper on AI for megaprojects, and we'll provide a link to you. We looked at hydropower and nuclear energy, and we found that the two are very similar: we saw in the '70s, with regulatory changes, that plants almost doubled in their schedule durations and cost overruns. And we were like, well, what could AI and digital twins do to change that? So we built a big fault tree, and we took every case, including acts of God, things that AI is probably not going to solve for you. And with that fault tree, we were able to derive how much AI could benefit the world, and we found a 21% reduction. I'll admit, Kevin, when I ran that calc myself the first time, I was like, only 21%? I was hoping for 50%, right? But then I thought about it: we need to deploy 200 gigawatts of new nuclear power, according to the DOE Liftoff report. That is a lot of new nuclear power. If we could save 20% on all of those schedules across hundreds of new plants, that's a big deal. So I think it's important to say that AI is going to solve some problems, but not every problem. And there are other things, of course, that delay projects and are outside the control of the project team, like environmental factors that could occur on any project. So that's why I think that number actually makes a lot of sense, and it's a huge benefit.
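To see why an aggregate number well below 100% is plausible, here is a toy calculation of how per-cause schedule savings combine: each delay cause contributes a share of the total overrun, and AI can only mitigate some of them. The shares and mitigation fractions below are invented for illustration and are not the numbers from the INL fault tree.

```python
# cause: (share of total overrun, fraction AI could plausibly mitigate)
# All values are invented for illustration; they are not from the INL paper.
delay_causes = {
    "design rework and interface errors":  (0.30, 0.50),
    "regulatory and licensing iterations": (0.25, 0.20),
    "supply chain and procurement":        (0.20, 0.25),
    "construction sequencing":             (0.15, 0.20),
    "weather and other acts of God":       (0.10, 0.00),
}

reduction = sum(share * mitigated for share, mitigated in delay_causes.values())
print(f"overall schedule reduction: {reduction:.0%}")  # 28% with these made-up inputs
```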
And in the AI for Energy report, which I'm sure you've already read, in fact I know you read it, we talk about this and what the huge benefits could be for the nuclear power industry. But I think those benefits are bigger than nuclear power and could apply to any large construction project in America. If you think about construction efficiency, it's actually gotten slightly worse since the '50s and '60s, while manufacturing over the same period has gotten almost twice as good as it was 50 years ago. So this would be a way for construction to catch up to some of the automation we've seen in manufacturing, by using AI as a tool. Does that make sense?
Kevin Jackson: Absolutely, I agree. I think AI is a fantastic tool for efficiency. It's also not a silver bullet. We need humans doing the hard work, right?
Christopher Ritter: Yeah, I don't want to oversell it. AI is going to definitely change the world, but we're going to need great humans, too.
Kevin Jackson: I saw one of your slides at the HPC-AI user forum that stated that the machine learning model demonstrated, I think it was with the AGN-201, had a mean absolute percent error of less than 0.3%, which is apparently quite good. Can you tell me what that is and why it's considered a success?
Christopher Ritter: Yes, now that was with the fission battery experiment that was done earlier. That was the 2022 experiment. It's a big deal because it was kind of a one-shot. This team, by the way, is pretty awesome. It was Jaren Browning and Katie Jesse who put together this experiment in a relatively short time. They had about a year and a half to build out a brand new system that we had never done before. And what was unusual about it was that, usually, like with the one we did on the AGN-201, we get many shots. The AGN-201 is a university reactor. If we wanted to run it, I'd make a call, and the next week we could usually get a couple of tests. So we had a lot of attempts to refine our algorithms as we went.
The team on that earlier experiment didn't get that opportunity. They had a facility called SPHERE, which was kind of like a heat pipe on a table, which is the best way to describe it. I came with them as they were running this experiment, and they had one chance to get it right. They got an hour and a half, because the facility had other things it needed to do after this experiment. They had an hour and a half total to do this experiment, and then they were out. So everyone was kind of curious: will this actually work? Getting an error rate of less than 0.3% for each thermocouple is a huge deal because they didn't get a lot of opportunities to retrain. So that's why we were really excited about that.
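For readers unfamiliar with the metric, mean absolute percent error is just the average of |predicted - measured| / |measured|, expressed as a percentage. The thermocouple readings below are made up purely to show the formula.

```python
import numpy as np

def mean_absolute_percent_error(measured, predicted):
    """MAPE: mean of |predicted - measured| / |measured|, as a percentage."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(predicted - measured) / np.abs(measured)) * 100.0)

# Made-up thermocouple temperatures (deg C) and model predictions.
measured = [612.4, 615.1, 618.0, 620.6]
predicted = [613.0, 614.2, 618.9, 621.8]

print(f"MAPE = {mean_absolute_percent_error(measured, predicted):.2f}%")
```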
Now, the errors that we had with the AGN-201, we didn't disclose publicly, but we can say that in the combination of the two models that we used, we saw high accuracy with that particular experiment as well. But we got a lot more attempts, if you will, on that project.
Kevin Jackson: Right on. So this is more of a general question, but what are your hopes for the near future with digital twins and nuclear energy? And what about the far future? Do you want to make any predictions for years and years from now?
Christopher Ritter: Yeah, I think I'll quote HPCwire, right? I think nuclear energy is a key part of the AI revolution. I think the two are interlocked in ways that we're now seeing, with Microsoft disclosing their use of nuclear power for datacenters. The two have a shared destiny, if you will. We need nuclear to realize these AI advances from a power perspective, and we need AI to help get nuclear deployed faster. So I think we'll see more and more of that, from a prediction standpoint.
I think we'll see semi-autonomous design systems for nuclear power plants that will apply more broadly than nuclear power. I think we'll see that kind of objective-and-constraint approach, and it'll change how we engineer power plants everywhere. How do I know that? Well, we have some experimental code where you put in a prompt, and it already generates the CAD model for you and does some analysis on its own. So we know it's possible. It's early and at a low level of maturity, but we're seeing it evolve today.
The second thing is, just like with aircraft, you imagine drones, right? There are remote operators. I think that's a foregone conclusion: you're going to see remote operators for power plants over time, because it just makes sense. You can get a really highly qualified team together at a remote facility, but to make that possible, you're going to need some kind of guardian system, a system to warn you when you're outside of that range. So I think we'll see systems like that come alive over the next, we'll say, decade.
And then the third thing, which we haven't had a chance to talk about, is that I think you'll see more autonomous experimentation in the laboratories themselves. Our latest facility, SPL, is what we call a post-irradiation examination facility, and there are still a lot of humans controlling these arms that come down. You could imagine AI helping with autonomous experimentation, so we could see more rapid qualification of fuels and materials, which I'm really excited about. I think that could propel technology forward, and that's, of course, something the labs are very keen on doing. Then, on a longer time scale, I think you'll see more and more autonomy in design and more and more autonomy in operations, to the point where maybe we get to full autonomy one day in the distant future. Maybe we get to the point one day, too, where we have labs that are just running 24/7, 365, on their own, constantly looking and searching for the best materials and the best fuels. Humans are still giving them input on what we want them to search for, but they're constantly on the lookout for the latest technologies we could use. I think we'll see a lot more of that. Some people are like, wait, what about humans? I think humans are still going to be essential to manage all this and make it possible. And if you think about the number of new jobs that are going to come from 200 gigawatts of new power, there aren't enough people out there to hire. So imagine AI really as a tool to supplement and accelerate us into the future. Obviously, I'm super optimistic about where we're headed, but I think we're already seeing some early results with this technology.
Kevin Jackson: I agree. I agree. Well, I think I learned a lot today, and I know our audience will too. Chris, thanks so much for speaking with me today. I appreciate it.
Christopher Ritter: Thanks a lot, Kevin. Happy to come anytime.
This transcript has been edited for clarity.