Developer survey shows trust in AI coding tools is falling as usage rises (arstechnica.com)
from cm0002@lemmy.world to programming@programming.dev on 01 Aug 17:17
https://lemmy.world/post/33818839

#programming

threaded - newest

PixelatedSaturn@lemmy.world on 01 Aug 17:33 next collapse

That makes sense. People are starting to use it more, but for less elaborate tasks where they still retain control.

monkeyman512@lemmy.world on 01 Aug 17:45 collapse

I feel like this is a normal cycle of new tech. People get really excited about all the possibilities and don’t have any experience to ground expectations. Eventually people use it enough to realize what is more realistically achievable. Then the mentality shifts from “magical solution to everything” to “a tool that is good at some things and bad at others”.

WoodScientist@sh.itjust.works on 01 Aug 21:17 next collapse

The problem for OpenAI and their ilk is that the actual legitimate uses of LLMs are so few and niche that they cannot hope to pay for the immense cost of developing and running these systems. Like, cool. Sure I can use copilot to generate derivative meme images, but what’s that actually worth to me monetarily? I’m not going to subscribe to a monthly service just to access a tool for shit posting.

monkeyman512@lemmy.world on 01 Aug 21:30 collapse

That sounds like a problem for the people dumping money into these companies and keeping them afloat.

speculate7383@lemmy.today on 03 Aug 02:11 collapse

Often known as the “Gartner Hype Cycle”

Arkouda@lemmy.ca on 01 Aug 17:47 next collapse

I have had nothing but issues attempting to use AI to help with code. The number of times it has given me something clearly incorrect is way too high, and I am not experienced enough to identify “wrong code” easily.

hperrin@lemmy.ca on 01 Aug 18:12 next collapse

That’s because it’s absolute shit.

ulterno@programming.dev on 04 Aug 14:41 collapse

There is one particular thing where I think it can be very useful:
looking up relevant documentation.

It has happened to me a few times that, despite enough documentation being available, I was unable to find the relevant thing. Even though the description in the documentation matched what I needed to do, I didn’t use the keywords the search required to surface the correct class, so I ended up falling back to an internet search.
If a language model can find relevant patterns and reduce the burden on documentation writers of adding tags to make search easier, that would be a win.

But of course, no generation of new text. All that is needed is to parse the documentation and examples provided by developers and to provide links to them.
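A minimal sketch of that idea in Python: rank documentation entries by semantic similarity to a query and return a link, with no text generation involved. A hand-written synonym table stands in for a real language-model embedding, and the two-entry doc corpus and all names here are hypothetical, purely for illustration.

```python
import math
from collections import Counter

# Toy stand-in for a language-model embedding: expand each word with
# hand-listed synonyms so "remove an item" can match "erase an element".
# A real system would embed query and doc summaries with an actual model.
SYNONYMS = {
    "remove": {"delete", "erase", "discard"},
    "erase": {"remove", "delete", "discard"},
    "item": {"element", "entry"},
    "element": {"item", "entry"},
}

def expand(text):
    """Bag-of-words vector of the text plus its synonym expansions."""
    words = text.lower().split()
    bag = Counter(words)
    for w in words:
        for syn in SYNONYMS.get(w, ()):
            bag[syn] += 1
    return bag

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs):
    """Return the URL of the doc entry most similar to the query."""
    q = expand(query)
    best = max(docs, key=lambda d: cosine(q, expand(d["summary"])))
    return best["url"]

# Hypothetical pre-parsed documentation index (summary text + link).
docs = [
    {"summary": "erase an element from the container",
     "url": "docs/vector_erase.html"},
    {"summary": "append an element to the container",
     "url": "docs/vector_push_back.html"},
]

print(search("remove an item", docs))  # → docs/vector_erase.html
```

The query shares no keywords with the matching entry, yet the synonym expansion bridges the gap, which is exactly what an embedding model would do without anyone having to maintain a tag list.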