LLMs helped perpetuate a path traversal bug from 2010 (www.theregister.com)
from floofloof@lemmy.ca to programming@programming.dev on 10 Jun 06:14
https://lemmy.ca/post/45803718

#programming

vane@lemmy.world on 10 Jun 15:16

It's a 15-year-old snippet from Stack Overflow and a gist. Somebody paid for this article.

floofloof@lemmy.ca on 10 Jun 15:59

That’s the point though: LLMs recycle junk information, including some potentially dangerous information, without any of the surrounding context. In a regular search of the web or of Stack Overflow, you’d probably see people commenting on how the code is vulnerable, but when you ask an LLM, it doesn’t necessarily communicate that while still delivering the code.

vane@lemmy.world on 10 Jun 16:02

I’m fine with reading the comments and not copy-pasting code without reading it, but apparently that’s too much to ask nowadays.

floofloof@lemmy.ca on 10 Jun 16:06

Yeah, and this particular vulnerability is pretty obvious to even a moderately experienced developer. You’d really have to be pasting without thinking to let this one slip by.
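
For anyone who hasn’t seen the pattern, here is a minimal sketch in Python of what a classic path traversal bug looks like and one common way to guard against it. This is only an illustration, not the actual snippet from the article; `BASE_DIR` and the `serve_file_*` functions are invented for the example.

```python
import os

BASE_DIR = "/var/www/static"  # hypothetical document root for this example

def serve_file_naive(user_path: str) -> bytes:
    # Vulnerable: "../" segments in user_path walk out of BASE_DIR,
    # e.g. user_path = "../../etc/passwd" ends up reading /etc/passwd.
    full_path = os.path.join(BASE_DIR, user_path)
    with open(full_path, "rb") as f:
        return f.read()

def serve_file_safe(user_path: str) -> bytes:
    # Resolve the requested path first, then confirm it still lives
    # under the base directory before opening it.
    base = os.path.realpath(BASE_DIR)
    full_path = os.path.realpath(os.path.join(base, user_path))
    if os.path.commonpath([full_path, base]) != base:
        raise PermissionError("path traversal attempt blocked")
    with open(full_path, "rb") as f:
        return f.read()
```

Resolving the path with `realpath` before the containment check also catches symlink tricks, which naive string checks for "../" miss.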

vane@lemmy.world on 10 Jun 16:16

It’s also that previously we had a dialogue between people about the code, sometimes even with some historical background if the creator of a library or an RFC standard got involved. There’s a much broader aspect to the topic too, if you look at the Stack Overflow thread in question. Now you have a dialogue between an entry-level dev and an AI with the attention span of a six-year-old with ADHD. That doesn’t teach a human anything, because it’s a tool for solving a particular problem, nothing more. That’s not how learning works.