An Unsettling Headline-

For the First Time, Artificial Intelligence Is Being Used at a Nuclear Power Plant

Alex Shultz Published April 13, 2025 | Comments (4)

Diablo Canyon, California’s sole remaining nuclear power plant, has been left for dead on more than a few occasions over the last decade or so, and is currently slated to begin a lengthy decommissioning process in 2029. Despite its tenuous existence, the San Luis Obispo power plant received some serious computing hardware at the end of last year: eight NVIDIA H100s, which are among the world’s mightiest graphics processors. Their purpose is to power a brand-new artificial intelligence tool designed for the nuclear energy industry.

Pacific Gas and Electric, which runs Diablo Canyon, announced a deal with artificial intelligence startup Atomic Canyon—a company also based in San Luis Obispo—around the same time, heralding it in a press release as “the first on-site generative AI deployment at a U.S. nuclear power plant.”

For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.

PG&E is deploying the document retrieval service in stages. The installation of the NVIDIA chips was one of the first phases of the partnership between PG&E and Atomic Canyon; PG&E is forecasting a “full deployment” at Diablo Canyon by the third quarter of this year, said Maureen Zawalick, the company’s vice president of business and technical services. At that point, Neutron Enterprise—which Zawalick likens to a data-mining “copilot,” though explicitly not a “decision-maker”—will be expanded to search for and summarize Diablo Canyon-specific instructions and reports too.

“We probably spend about 15,000 hours a year searching through our multiple databases and records and procedures,” Zawalick said. “And that’s going to shrink that time way down.” (Emphasis mine. -A. I worked at the nuke plant in my state in my 20s, doing Records Management. I’m not going to explain it all the way I trained people back then, but it involved reading a document, interpreting what one has read as it applies to the function, part, area, etc. the document records, then coding it so it can be efficiently retrieved later. So far, I don’t know that AI does that. Others who are more knowledgeable about records management and retrieval in this era and context may see better things than I see. The best-worst case I see is really angry and impatient engineers and inspectors in all the disciplines still at the plant. That’s no fun, anyway.)

Trey Lauderdale, the chief executive and co-founder of Atomic Canyon, told CalMatters his aim for Neutron Enterprise is simple and low-stakes: he wants Diablo Canyon employees to be able to look up pertinent information more efficiently. “You can put this on the record: the AI guy in nuclear says there is no way in hell I want AI running my nuclear power plant right now,” Lauderdale said.

That “right now” qualifier is key, though. PG&E and Atomic Canyon are on the same page about sticking to limited AI uses for the foreseeable future, but they aren’t foreclosing the possibility of eventually increasing AI’s presence at the plant in yet-to-be-determined ways. According to Lauderdale, his company is also in talks with other nuclear facilities, as well as groups who are interested in building out small modular reactor facilities, about how to integrate his startup’s technology. And he’s not the only entrepreneur eyeing ways to introduce artificial intelligence into the nuclear energy field.

In the meantime, questions remain about whether sufficient safeguards exist to regulate the combination of two technologies that each have potential for harm. The Nuclear Regulatory Commission was exploring the issue of AI in nuclear plants for a few years, but it’s unclear if that will remain a priority under the Trump administration. Days into his current term, Trump revoked a Biden administration executive order that set out AI regulatory goals, writing that they acted “as barriers to American AI innovation.” For now, Atomic Canyon is voluntarily keeping the Nuclear Regulatory Commission abreast of its plans.

Tamara Kneese, the director of tech policy nonprofit Data & Society’s Climate, Technology, and Justice program, conceded that for a narrowly designed document retrieval service, “AI can be helpful in terms of efficiency.” But she cautioned, “The idea that you could just use generative AI for one specific kind of task at the nuclear power plant and then call it a day, I don’t really trust that it would stop there. And trusting PG&E to safely use generative AI in a nuclear setting is something that is deserving of more scrutiny.”

For those reasons, Democratic Assemblymember Dawn Addis—who represents San Luis Obispo—isn’t enthused about the latest developments at Diablo Canyon. “I have many unanswered questions of the safety, oversight, and job implications for using AI at Diablo,” Addis said. “Previously, I have supported measures to regulate AI and prevent the replacement and automation of jobs. We need those guardrails in place, especially if we are to use them at highly sensitive sites like Diablo Canyon.” (snip-MORE; not tl;dr, though.)

4 thoughts on “An Unsettling Headline-”

  1. Ironically, this is a relatively good (and proper) use of the Large Language Models they call “AI”.

    It appears to be trained entirely on the relevant documents, not ‘the internet’ like the AI we’re normally talking about, the kind that tells you to use glue on your pizza.

    GIGO rules those, but a well-designed data model trained on the relevant documentation, and only on the relevant documentation, is far less likely to ‘hallucinate’.

    This is closer to the old-school ‘Expert Systems’ that have been widely used since the ’90s (and were called ‘AI’ back then).


    1. OK. I was hoping this might be in your purview; in the ’80s, people wrote down information in codes (not coding), which were transferred elsewhere onto magnetic tape and then loaded on our mainframe at the plant. It took us no time to pull one up on the computer; the requestor could read from that, or go to the vault and look at the paper. When I left there, I went into sales, then office admin, mostly in legal offices. I knew there had been changes-mainframe computers?!?😄-but wasn’t sure how AI could do retrieval any better or faster than we did, and also, not having an understanding of, say, a Deficiency Report on a pipe hanger (an example out of the blue sky, not real life), can it really decide which are the correct documents, as the guy above was bragging?


      1. It really depends on how the end users are requesting the information.

        The “millions of pages in multiple databases and procedures” is probably the main driver. In the ’80s, I will lay great odds that what you pulled up at the plant was highly structured with clear and rigid identifiers; what they’re describing is an ‘orders of magnitude’ greater text corpus with much less structure, especially if they have to link documents from different databases with different identifiers, etc.

        Back in your plant, I will surmise that experienced workers could zero in quickly on the right documents simply through experience with the system.

        It’s a hard problem to rapidly deliver the information to an end user who may not know all the interrelationships among the different databases and data sources.
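        The distinction described in this thread can be sketched in a few lines of Python. (This is a purely hypothetical toy with made-up document names and text; real deployments like the one in the article use embedding-based semantic search, not this naive keyword overlap.)

```python
# Sketch (hypothetical data) contrasting the two retrieval styles discussed
# above: exact lookup by a rigid identifier vs. ranked search over free text.

# 1980s-style: a rigid identifier maps straight to one record.
records_by_id = {
    "DR-1984-0117": "Deficiency Report: pipe hanger weld inspection ...",
}

def lookup(doc_id):
    """Exact retrieval: works only if you already know the identifier."""
    return records_by_id.get(doc_id)

# Modern corpus: free text from multiple sources, no shared ID scheme.
corpus = {
    "doc-a": "procedure for seismic qualification of pipe supports",
    "doc-b": "quarterly maintenance report, diesel generator cooling",
    "doc-c": "deficiency report on a pipe hanger in the turbine building",
}

def rank(query):
    """Naive keyword-overlap ranking; real systems embed the text instead."""
    q = set(query.lower().split())
    scores = {
        doc_id: len(q & set(text.lower().split()))
        for doc_id, text in corpus.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(lookup("DR-1984-0117") is not None)  # True: exact hit by ID
print(rank("pipe hanger deficiency")[0])   # doc-c: best fuzzy match
```

        The first style fails completely without the right identifier; the second degrades gracefully but has to guess, which is where relevance ranking (and hallucination risk, when a generative model summarizes the results) comes in.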

