For the First Time, Artificial Intelligence Is Being Used at a Nuclear Power Plant (gizmodo.com)
from cyrano@lemmy.dbzer0.com to technology@lemmy.world on 14 Apr 02:20
https://lemmy.dbzer0.com/post/42155120

For now, the artificial intelligence tool named Neutron Enterprise is just meant to help workers at the plant navigate extensive technical reports and regulations — millions of pages of intricate documents from the Nuclear Regulatory Commission that go back decades — while they operate and maintain the facility. But Neutron Enterprise’s very existence opens the door to further use of AI at Diablo Canyon or other facilities — a possibility that has some lawmakers and AI experts calling for more guardrails.

#technology

BombOmOm@lemmy.world on 14 Apr 02:27

Can we not have the lying bots teaching people how to run a nuclear plant?

besselj@lemmy.ca on 14 Apr 02:50

The LLM told me that control rods were not necessary, so it must be true

twice_hatch@midwest.social on 14 Apr 08:42

The chatbot said 3.6 Roentgen is just fine and that the core cannot have exploded; maybe we heard a truck driving by.

cyrano@lemmy.dbzer0.com on 14 Apr 02:53

<img alt="" src="https://lemmy.dbzer0.com/pictrs/image/232f32e4-1c88-495f-92d7-f4e45c9e9098.webp">

dumbass@leminal.space on 14 Apr 03:09

Huh, it really is Russian roulette with how we’re all gonna die: could be WW3, could be another pandemic, or could be a bunch of AIs hallucinating and causing multiple nuclear meltdowns.

prex@aussie.zone on 14 Apr 07:47

I can only hope my bingo card somehow explodes & kills me.

Deceptichum@quokk.au on 14 Apr 09:12

Don’t forget the inevitable climate change.

scarabic@lemmy.world on 14 Apr 18:52

It’s literally just a document search for their internal employees to use.

Those employees are fallible humans trying to navigate tens of thousands of byzantine technical and regulatory documents all published on various dinosaur platforms.

AI hallucination is a very popular thing to get outraged about right now, but don’t forget about good old-fashioned bureaucratic error.

My employer implemented AI search/summarization of our docs/wiki/intranet/JIRA systems over a year ago and it has been very effective in my experience. It always links to the source docs, but it permits natural language queries and can do some reasoning about the contents of the documents to pull together information across a sea of text.
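
Under the hood, that kind of tool is basically retrieval with citations sitting in front of a summarizer. A minimal sketch of the retrieval-with-source-links part (purely illustrative; the document names, URLs, and scoring are made up, not any actual stack):

```python
from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    url: str
    text: str

def search(query: str, docs, top_k: int = 3):
    """Score each document against a natural-language query and return the
    best matches together with their source URLs, so every result stays
    traceable back to the original page."""
    terms = set(query.lower().split())
    scored = []
    for doc in docs:
        words = doc.text.lower().split()
        overlap = sum(1 for w in words if w in terms)  # crude keyword overlap
        scored.append((doc, overlap / (len(words) + 1)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Hypothetical intranet documents, just to make the sketch runnable.
docs = [
    Doc("Coolant pump maintenance SOP", "intranet/docs/sop-112",
        "Inspect coolant pump seals and bearings monthly."),
    Doc("License renewal checklist", "intranet/docs/reg-045",
        "License renewal paperwork must include updated inspection records."),
]
for doc, score in search("coolant pump inspection schedule", docs):
    print(f"{doc.title} ({doc.url}) relevance={score:.2f}")
```

Any summarization sits on top of snippets retrieved like this, which is why the answers can always link back to the source docs.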

Nothing that is mission critical enough to lead to a reactor meltdown should ever be blindly trusted to these tools.

But nothing like that should ever be trusted to the whims of one fallible human, either. This is why systems have protocols, checks and balances, quality controls, and failsafes.

Giving employees a more powerful document search doesn’t somehow sweep all that aside.

But hey, don’t let a rational, down-to-earth argument stand in the way of freaking out about a sci-fi dystopia.

MuskyMelon@lemmy.world on 14 Apr 03:13

Finally we get the sequel to “Chernobyl”… set in America…

Slotos@feddit.nl on 14 Apr 03:26

Live action at that

Evil_Shrubbery@lemm.ee on 14 Apr 07:27

They made the prequel already - wiki/Three_Mile_Island_accident.

Goretantath@lemm.ee on 14 Apr 03:29

Fucking christ…

NarrativeBear@lemmy.world on 14 Apr 03:51

SkyNet is fully operational, operating at 60 teraflops.

DogPeePoo@lemm.ee on 14 Apr 04:00

What could possibly go wrong?

Sterile_Technique@lemmy.world on 14 Apr 05:08

Diablo Canyon

The nuclear power plant run by AI slop is located in a region called “Diablo Canyon”.

Right. We sure this isn’t an Onion article? …actually no, it couldn’t be, The Onion’s writers aren’t that lazy.

Fuckin whatever, I’m done for the night. Gonna head over to Mr. Sandman’s squishy rectangle. …bet you’ll never guess what I’m gonna do there!!

solomon42069@lemmy.world on 14 Apr 06:11

<img alt="" src="https://lemmy.world/pictrs/image/18dd6cfa-9586-43c8-8319-40d75a3d7a64.webp">

Everything reminds me of her…

Obi@sopuli.xyz on 14 Apr 07:09

Looks like it’s a bit nippy out there, brrrr.

mac@lemm.ee on 14 Apr 13:14

Lol, in SoCal these are a landmark that most call “the boobs” or “the titties”

kurcatovium@lemm.ee on 14 Apr 17:20

I’m shocked!

werefreeatlast@lemmy.world on 14 Apr 07:08

Dave, I don’t know what to tell you, but you can’t come in, alright?

jaybone@lemmy.zip on 14 Apr 07:46

What could go wrong?

hansolo@lemm.ee on 14 Apr 09:04

Well, considering it’s exclusively for paperwork and compliance, the worst that can happen is someone might rely on it too much and file an incorrect, I dunno, license renewal with the DOE and be asked to do it again.

Ah. The horror.

pivot_root@lemmy.world on 14 Apr 09:35

When it comes to compliance and regulations, anything with the literal blast radius of a nuclear reactor should not be trusted to an LLM unless double- or triple-checked by another party familiar with said regulations. Regulations were written in blood, and an LLM hallucinating a safety procedure or operating protocol is a disaster waiting to happen.

I have less qualms about using it for menial paperwork, but if the LLM adds an extra round-trip to a form, it’s not just wasting the submitter’s time, but other people’s as well.

hansolo@lemm.ee on 14 Apr 09:47

All the errors you know about in the nuclear power industry are human-caused.

Is this an industry with a 100% successful operation rate? Not at all.

But have you ever heard of a piece of paperwork with an error submitted to regulatory officials and lawyers outside the plant causing a critical issue inside the plant? I sure haven’t. Please feel free to let me know if you are aware of such an incident.

I would encourage you to learn more about how LLM and SLM structures work. This article is more of a superlative-laden clickbait nothingburger IMO. To me, at least, it appears to be air-gapped if it’s running locally, which is nice.

I would bet money that this will be entirely managed by the most junior compliance person who is not 120 years old, with more senior folks cross checking it with more suspicion than they would a new hire.

gedhrel@lemmy.world on 14 Apr 13:47

I’m not sure if that opening sentence is fatuous or not. What errors in any industrial enterprise are not human in origin?

pyre@lemmy.world on 14 Apr 09:08

using AI in a nuclear plant at Diablo Canyon… it’s so on the nose you’d say it’s lazy writing if it were part of the backstory of some sci-fi novel.

hansolo@lemm.ee on 14 Apr 09:02

It’s just a custom LLM for records management and regulatory compliance. Literally just for paperwork, one of the few things that LLMs are actually good at.

Does anyone read more than the headline? OP even said this in the summary.

iAvicenna@lemmy.world on 14 Apr 09:14

NOOOOOO ITS DOING NUCLEAR PHYSICS!!!111

hansolo@lemm.ee on 14 Apr 09:31

It’s eating the rods, it’s eating the ions!

nieminen@lemmy.world on 14 Apr 11:07

<img alt="" src="https://lemmy.world/pictrs/image/fa1ff7de-1baa-4ba8-9b65-35f8e91fd472.gif">

iAvicenna@lemmy.world on 14 Apr 20:29

I unfortunately don’t get it. Can someone explain?

nieminen@lemmy.world on 15 Apr 02:29

<img alt="" src="https://lemmy.world/pictrs/image/28313346-ccc9-4e25-9090-cc0039bea415.gif">

This

iAvicenna@lemmy.world on 15 Apr 08:15

Oh shit, I had already forgotten about this amid so many other scandals. The guy who said this is running the whole of the US like a fucking medieval kingdom; another reality slap in the face. At the time I was like, “surely no one in their right mind would vote for this scammer”.

cyrano@lemmy.dbzer0.com on 14 Apr 09:15

I agree with you, but you can see the slippery slope with the LLM returning incorrect/hallucinated data, the same way it’s happening in the public space. It might seem trivial for documentation until you realize the documentation could be critical for some processes.

hansolo@lemm.ee on 14 Apr 09:35

If you’ve never used a custom LLM or wrapper for regular ol’ ChatGPT, a lot of what it can hallucinate gets stripped out and the entire corpus of data it’s trained on is your data. Even then, the risk is pretty low here. Do you honestly think that a human has never made an error on paperwork?
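
In practice those wrappers mostly mean the model is only ever handed passages retrieved from your own document corpus, and it declines rather than improvising when retrieval comes up empty. A minimal sketch of that guard (entirely hypothetical, not Neutron Enterprise’s actual design; `answer`, `toy_retrieve`, and `llm_complete` are stand-in names):

```python
def answer(query: str, retrieve, llm_complete, min_hits: int = 1) -> str:
    """Answer only from retrieved passages; refuse rather than guess when
    nothing relevant is found in the local document index."""
    passages = retrieve(query)  # expected: list of (source_url, text) tuples
    if len(passages) < min_hits:
        return "No matching documents found; check the source index directly."
    context = "\n\n".join(f"[{url}] {text}" for url, text in passages)
    prompt = (
        "Answer using ONLY the passages below and cite their [source] tags. "
        "If the passages do not contain the answer, say so.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    return llm_complete(prompt)

# Toy stand-ins so the sketch runs on its own; a real wrapper would call the
# deployed model and a proper retriever here.
def toy_retrieve(query):
    index = [("intranet/docs/sop-112",
              "Inspect coolant pump seals and bearings monthly.")]
    words = query.lower().split()
    return [hit for hit in index if any(w in hit[1].lower() for w in words)]

def llm_complete(prompt):
    return "(model output constrained to the cited passages)"

print(answer("When are coolant pump seals inspected?", toy_retrieve, llm_complete))
```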

cyrano@lemmy.dbzer0.com on 14 Apr 13:48

I do, and even contained ones do return hallucinations or incorrect data. So it depends on the application you use it for. For a quick summary / data search, why not? But if it’s for some operational process, that might be problematic.

technocrit@lemmy.dbzer0.com on 14 Apr 16:11

Don’t blame the people who just read the headline.

Blame the people who constantly write misleading headlines.

There is literally no “artificial intelligence” here either.

null_dot@lemmy.dbzer0.com on 14 Apr 23:08

It depends what purpose that paperwork is intended for.

If the regulatory paperwork it’s managing is designed to influence behaviour, perhaps having an LLM do the work will make it less effective in that regard.

Learning and understanding is hard work. An LLM can’t do that for you.

Sure it can summarise instructions for you to show you what’s more pertinent in a given instance, but is that the same as someone who knows what to do because they’ve been wading around in the logs and regs for the last decade?

It seems like, whether you’re using an LLM to write a business report, a legal submission, or an SOP for running a nuclear reactor, it can be a great tool, but it requires high-level knowledge on the part of the user to review the output.

As always, there’s a risk that a user just won’t identify a problem in the information produced.

I don’t think this means LLMs should not be used in high risk roles, it just demonstrates the importance of robust policies surrounding their use.

pyre@lemmy.world on 14 Apr 09:06

to people who say it’s just paperwork or whatever and that it doesn’t matter: this is how it begins. they’ll save a couple cents here and there and they’ll want to expand this.

Takumidesh@lemmy.world on 14 Apr 14:22

Also, it’s not like the paperwork isn’t important.

scarabic@lemmy.world on 14 Apr 18:57

That’s a textbook slippery slope logical fallacy.

TheOakTree@lemm.ee on 14 Apr 19:17

True, but if you change the argument from “this will happen” to “this will happen more frequently” then it’s still a very reasonable observation.

scarabic@lemmy.world on 14 Apr 22:50

All predictions in this vein are invalid.

If you want to say “even this little bit is unsettling and we should be on guard for more,” fine.

That’s different from “if you think this is only a small amount you are wrong because a small amount will become a large amount.”

pyre@lemmy.world on 14 Apr 20:36

it’s not actually. there’s barely an intermediate step between what’s happening now and what I’m suggesting it will lead to.

this is not “if we allow gay marriage people will start marrying goats”. it’s “if this company is allowed to cut corners here they’ll be cutting corners in other places”. that’s not a slope; it’s literally the next step.

slippery slope fallacy doesn’t mean you’re not allowed to connect A to B.

scarabic@lemmy.world on 14 Apr 22:48

You may think it’s as plausible as you like. Obviously you do or you wouldn’t have said it. It’s still by definition absolutely a slippery slope logical fallacy. A little will always lead to more, therefore a little is a lot. This is textbook. It has nothing to do with companies, computers, or goats.

pyre@lemmy.world on 14 Apr 23:04

this is textbook fallacy fallacy

Objection@lemmy.ml on 14 Apr 20:42

Slippery slope arguments aren’t inherently fallacious.

uncurable_utopia@lemm.ee on 14 Apr 13:53

One “Oops!” and humanity’s gone for…

OmegaLemmy@discuss.online on 14 Apr 18:40

Tell me you misunderstood what the article was about without telling me that you misunderstood what the article was about

uncurable_utopia@lemm.ee on 14 Apr 21:12

Brother, it’s fine not to laugh at a bad joke, but at least understand that it WAS a joke. Have a good day, brother… 🫂

Tea@programming.dev on 14 Apr 23:02

The original article at the non-profit website: themarkup.org/…/for-the-first-time-artificial-int…