The more people using chatgpt to generate low quality code they don't understand, the more job safety and greater salary I get.
Sylvartas@lemmy.world
on 04 Oct 2024 17:45
collapse
As far as I know, right now the main problem with flying cars is that they are nowhere near as idiot-proof as a normal car, and don’t really solve any transportation problem since most countries’ air regulations agencies would require them to exclusively take off and land in airports… Where you can usually find tons of planes that can go much further (and are much more cost effective, as you pointed out)
einkorn@feddit.org
on 03 Oct 2024 21:15
nextcollapse
For me, it is a glorified auto-complete function. Could definitely live without it.
CatsGoMOW@lemmy.world
on 03 Oct 2024 22:10
nextcollapse
Same for me, but that glorified auto complete helps a lot.
MeatsOfRage@lemmy.world
on 04 Oct 2024 02:15
collapse
Hell yea. Our unit test coverage went way up because you can blow through test creation in second. I had a large complicated migration from one data set to another with specific mutations based on weird rules and GPT got me 80% of the way there and with a little nudging basically got it perfect. Code that would’ve taken a few hours took about 6 prompts. If I’m curious about a new library I can get a working example right away to see how everything fits together. When these articles say there’s no benefit I feel people aren’t using these tools or don’t know how to use them effectively.
SkyeStarfall@lemmy.blahaj.zone
on 04 Oct 2024 11:04
collapse
Yeah, it’s useful, you just gotta keep it on a short leash, which is difficult when you don’t know what you’re doing
Basically, it’s a useful tool for experienced developers that know what to look out for
a_wild_mimic_appears@lemmy.dbzer0.com
on 04 Oct 2024 20:21
collapse
From the combined comments it looks like if you are a beginner or a pro then it’s great; if you only have just enough knowledge to be dangerous (in german that’s proverbial “gefährliches Halbwissen”) you should probably stay away from it :-)
We always have to ask what language is it auto-completing for? If it is a strictly typed language, then existing tooling is already doing everything possible and I see no need for additional improvement. If it is non-strictly typed language, then I can see how it can get a little more helpful, but without knowledge of actual context I am not sure if it can get a lot more accurate.
ShunkW@lemmy.world
on 03 Oct 2024 21:38
nextcollapse
And yet, higher ups continue to lay off more devs because AI “is the future”.
breckenedge@lemmy.world
on 04 Oct 2024 14:12
collapse
In my experience, most of the tech layoffs have been non-devs. PMs and Designers have been the hardest hit and often their roles are being eliminated.
Warl0k3@lemmy.world
on 03 Oct 2024 23:26
nextcollapse
Its basically a template generator, which is really helpful when you’re generating boilerplate. It doesn’t save me much if any time to refactor/fill in that template, but it does save some mental fatigue that I can then spend on much more interesting problems.
It’s a niche tool, but occasionally quite handy. Without leaps forward technically though, it’s never going to become more than that.
Just beware, sometimes the AI suggestions are scary good, some times they’re batshit crazy.
Just because AI suggests it, doesn’t mean it’s something you should use or learn from.
EleventhHour@lemmy.world
on 03 Oct 2024 21:51
nextcollapse
Devs that are punching above their class, however, probably get great benefit from it. I would think it’s also an OK learning tool, except for how inaccurate it can be sometimes.
eager_eagle@lemmy.world
on 03 Oct 2024 22:10
nextcollapse
I like to use suggestions to feel superior when trash talking the generated code
9point6@lemmy.world
on 03 Oct 2024 22:17
nextcollapse
My main use is skipping the blank page problem when writing a new suite of tests—which after about 10 mins of refactoring are often a good starting point
tdawg@lemmy.world
on 03 Oct 2024 22:29
nextcollapse
Generative AI is great for loads of programming tasks like helping create regular expressions or syntax conversions between languages. The main issue I’ve seen in codebases that rely heavily on generative AI is that the “solutions” often fix today’s bug while making future debugging more difficult. Generative AI makes it easy to go fast in the wrong direction. Used right it’s a useful tool.
TrickDacy@lemmy.world
on 04 Oct 2024 02:01
nextcollapse
I truly don’t understand the tendency of people to hate these kinds of tools. Honestly seems like an ego thing to me.
FlorianSimon@sh.itjust.works
on 04 Oct 2024 02:07
nextcollapse
Carbon footprint. Techbro arrogance. Not sure what’s hard to understand about it.
gaael@lemmy.world
on 04 Oct 2024 03:19
nextcollapse
Also, when a tool increases your productivity but your salary and paid time off don’t increase, it’s a tool that only benefits the overlords and as such deserves to be hated.
stephen01king@lemmy.zip
on 04 Oct 2024 05:15
collapse
Oh, so do you use a 13 year old PC because a newer one increases your productivity without increasing your salary and paid time off?
Cryophilia@lemmy.world
on 04 Oct 2024 11:29
nextcollapse
Personally… I do
I could request a new one, but why? This one works, it’s just slow as all hell.
TrickDacy@lemmy.world
on 04 Oct 2024 12:15
nextcollapse
I could request a new one, but why?
Gives excellent argument for requesting a new one:
slow as all hell.
Cryophilia@lemmy.world
on 04 Oct 2024 15:49
collapse
I’m paid by the hour, I don’t care
stephen01king@lemmy.zip
on 04 Oct 2024 12:24
collapse
I mean, you’re clearly using them because they still work, not because of a hatred for increasing productivity for the overlords. Your choice was based on reasonable logic, unlike the other guy.
leftzero@lemmynsfw.com
on 04 Oct 2024 15:19
collapse
I use a 13 year old PC because a newer one will be infected with Windows 11. (The company refuses to migrate to Linux because some of the software they use isn’t compatible.)
leftzero@lemmynsfw.com
on 04 Oct 2024 04:16
nextcollapse
Having to deal with pull requests defecated by “developers” who blindly copy code from chatgpt is a particularly annoying and depressing waste of time.
At least back when they blindly copied code from stack overflow they had to read through the answers and comments and try to figure out which one fit their use case better and why, and maybe learn something… now they just assume the LLM is right (despite the fact that they asked the wrong question and even if they had asked the right one it’d’ve given the wrong answer) and call it a day; no brain activity or learning whatsoever.
TrickDacy@lemmy.world
on 04 Oct 2024 04:40
collapse
That is not a problem with the ai software, that’s a problem with hiring morons who have zero experience.
leftzero@lemmynsfw.com
on 04 Oct 2024 05:05
collapse
No. LLMs are very good at scamming people into believing they’re giving correct answers. It’s practically the only thing they’re any good at.
Don’t blame the victims, blame the scammers selling LLMs as anything other than fancy but useless toys.
jungle@lemmy.world
on 04 Oct 2024 09:09
nextcollapse
Did you get scammed by the LLM? If not, what’s the difference between you and the dev you mentioned?
leftzero@lemmynsfw.com
on 04 Oct 2024 12:43
collapse
I was lucky enough to not have access to LLMs when I was learning to code.
Plus, over the years I’ve developed a good thick protective shell (or callus) of cynicism, spite, distrust, and absolute seething hatred towards anything involving computers, which younger developers yet lack.
Sorry, you misunderstood my comment, which was very badly worded.
I meant to imply that you, an experienced developer, didn’t get “scammed” by the LLM, and that the difference between you and the dev you mentioned is that you know how to program.
I was trying to make the point that the issue is not the LLM but the developer using it.
leftzero@lemmynsfw.com
on 04 Oct 2024 14:54
collapse
And I’m saying that I could have been that developer if I were twenty years younger.
They’re not bad developers, they just haven’t yet been hurt enough to develop protective mechanisms against scams like these.
They are not the problem. The scammers selling the LLM’s as something they’re not are.
GreenKnight23@lemmy.world
on 04 Oct 2024 08:18
nextcollapse
I sent a PR back to a Dev five times before I gave the work to someone else.
they used AI to generate everything.
surprise, there were so many problems it broke the whole stack.
this is a routine thing this one dev does too. every PR has to be tossed back at least once. not expecting perfection, but I do expect it to not break the whole app.
TrickDacy@lemmy.world
on 04 Oct 2024 11:19
collapse
Like I told another person ITT, hiring terrible devs isn’t something you can blame on software.
GreenKnight23@lemmy.world
on 04 Oct 2024 13:46
collapse
that depends on your definition of what a “terrible dev” is.
of the three devs that I know have used AI, all we’re moderately acceptable devs before they relied on AI. this formed my opinion that AI code and the devs that use it are terrible.
two of those three I no longer work with because they were let go for quality and productivity issues.
so you can clearly see why my opinion of AI code is so low.
TrickDacy@lemmy.world
on 04 Oct 2024 14:18
collapse
I would argue that it’s obvious if someone doesn’t know how to use a tool to do their job, they aren’t great at their job to begin with.
Your argument is to blame the tool and excuse the person who is awful with the tool.
GreenKnight23@lemmy.world
on 04 Oct 2024 15:19
collapse
my argument is that lazy devs use the tool because that’s what it was designed for.
just calling a hammer a hammer.
aesthelete@lemmy.world
on 04 Oct 2024 15:24
nextcollapse
Some tools deserve blame. In the case of this, you’re supposed to use it to automate away certain things but that automation isn’t really reliable. If it has to be babysat to the extent that I certainly would argue that it does, then it deserves some blame for being a crappy tool.
If, for instance, getter and setter generating or refactor tools in IDEs routinely screwed up in the same ways, people would say that the tools were broken and that people shouldn’t use them. I don’t get how this is different just because of “AI”.
TrickDacy@lemmy.world
on 04 Oct 2024 15:46
collapse
Okay, so if the tool seems counterproductive for you, it’s very assuming to generalize that and assume it’s the same for everyone else too. I definitely do not have that experience.
aesthelete@lemmy.world
on 04 Oct 2024 16:05
nextcollapse
It’s not about it being counterproductive. It’s about correctness. If a tool produces a million lines of pure compilable gibberish unrelated to what you’re trying to do, from a pure lines of code perspective, that’d be a productive tool. But software development is more complicated than writing the most lines.
Now, I’m not saying that AI tools produce pure compilable gibberish, but they don’t reliably produce correct code either. So, they fall somewhere in the middle, and similarly to “driver assistance” technologies that half automate things but require constant supervision, it’s quite possible that the middle is the worst area for a tool to fall into.
Everywhere around AI tools there are asterisks about it not always producing correct results. The developer using the tool is ultimately responsible for the output of their own commits, but the tool itself shares in the blame because of its unreliable nature.
TrickDacy@lemmy.world
on 04 Oct 2024 16:23
collapse
Copilot produces useful and correct code for me 5 days a week. I’m sorry you don’t see the same benefits.
FlorianSimon@sh.itjust.works
on 04 Oct 2024 16:54
collapse
You can bury your head under the sand all you want. Meanwhile, the arguments proving the tech “flimsy af” will keep piling up.
TrickDacy@lemmy.world
on 04 Oct 2024 16:56
collapse
cio.com (which I’ve totally heard of before) – the forefront of objective reality and definitely not rage-clickbait
TrickDacy@lemmy.world
on 04 Oct 2024 15:45
collapse
Using a tool to speed up your work is not lazy. Using a tool stupidly is stupid. Anyone who thinks these tools are meant to replace humans using logic is misunderstanding them entirely.
You remind me of some of my coworkers who would rather do the same mind numbing task for hours every day rather than write a script that handles it. I judge them for thinking working smarter is “lazy” and I think it’s a fair judgement. I see them as the lazy ones. They’d rather not think more deeply about the scripting aspect because it’s hard. They rather zone out and mindlessly click, copy/paste, etc. I’d rather analyze and break down the problem so I can solve it once and then move onto something more interesting to solve.
aesthelete@lemmy.world
on 04 Oct 2024 16:16
nextcollapse
They rather zone out and mindlessly click, copy/paste, etc. I’d rather analyze and break down the problem so I can solve it once and then move onto something more interesting to solve.
From what I’ve seen of AI code in my time using it, it often is an advanced form of copying and pasting. It frequently takes problems that could be better solved more efficiently with fewer lines of code or by generalizing the problem and does the (IMO evil) work of making the solution that used to require the most drudgery easy.
aesthelete@lemmy.world
on 04 Oct 2024 16:26
collapse
Why are you typing so much in the first place?
Software development for me is not a term paper. I once encountered a piece of software in industry that was maintaining what would be a database in any sane piece of software using a hashmap and thousands of lines of code.
AI makes software like this easier to write without your eyes glazing over, but it’s been my career mission to stop people from writing this type of software in the first place.
aesthelete@lemmy.world
on 04 Oct 2024 16:55
collapse
Lol, it couldn’t determine the right amount of letters in the word strawberry using its training before. I’m not criticizing the training data. I’m criticizing a tool and its output.
It’s amusing to me that at first it’s “don’t blame the tool when it’s misused” and now it’s “the tool is smarter than any individual dev”. So which is it? Is it impossible to misuse this tool because it’s standing atop the shoulders of giants? Or is it something that has to be used with care and discretion and whose bad outputs can be blamed upon the individual coders who use it poorly?
TrickDacy@lemmy.world
on 04 Oct 2024 16:58
collapse
Gonna go cry myself to sleep now. I feel so inferior
aesthelete@lemmy.world
on 04 Oct 2024 17:02
collapse
That’s it. Don’t respond to the points and the obvious contradictions in your bad arguments only explicable by your personal hard on for the tool, just keep shit posting through it instead.
GreenKnight23@lemmy.world
on 04 Oct 2024 16:31
collapse
sometimes working smarter is actually putting the work in so you don’t have to waste time and stress about if it’s going to work or not.
I get Dreamweaver vibes from AI generated code. Sure, the website works. looks exactly the way it should. works exactly how it should. that HTML source though… fucking aweful.
I can agree, AI is an augment to the tools you can use. however, it’s being marketed as a replacement and a large variety of devs are using it as such.
shitty devs are enabled by shitty tools.
TrickDacy@lemmy.world
on 04 Oct 2024 16:46
nextcollapse
shitty devs are enabled by shitty tools.
No, shitty devs are enabled by piss-poor hiring practices. I’m currently working with two devs that submit mind bogglingly bad PRs all of the time, and it’s 100% because we hired them in a hasty manner and overlooking issues they displayed during interviews.
Neither of these bad devs use AI to my knowledge. On the other hand I use copilot constantly and the only difference I see in my work is that it takes me less time to complete a given task. It shaves 1-2 minutes off of writing a block/function several times an hour, and that is a good thing.
GreenKnight23@lemmy.world
on 04 Oct 2024 17:07
collapse
so your argument is because shitty devs exist that AI can’t be a shitty tool.
Shitty tools exist. shitty devs exist. allowing AI code generation only serves as an excuse for shitty devs when they’re allowed to use it. “oh sorry, the AI did that.” “man that didn’t work? musta been that new algorithm github updated yesterday.”
shitty workers use shitty tools because they don’t care about the quality and consistency of the product they build.
ever seen a legitimate carpenter use one of these things to build a house?
yeah, you won’t because anything built with that will never pass inspection. shitty tools are used by shitty devs.
could AI code generation get better? absolutely! is it possible to use it today? sure. should you use it? absolutely not.
as software developers we have the power to build and do whatever we want. we have amazing powers that allow us to do that, but rarely do we ever stop to ask if we should do it.
aesthelete@lemmy.world
on 04 Oct 2024 18:26
collapse
I get Dreamweaver vibes from AI generated code.
Same. AI seems like yet another attempt at RAD just like MS Access, Visual Basic, Dreamweaver, and even to some extent Salesforce, or ServiceNow. There are so many technologies that champion this…RoR, Django, Spring Boot…the list is basically forever.
To an extent, it’s more general purpose than those because it can be used with multiple languages or toolkits, but I find it not at all surprising that the first usage of gen AI in my company was to push out “POCs” (the vast majority of which never amounted to anything).
The same gravity applies to this tool as everything else in software…which is that prototyping is easy…integration is hard (unless the organization is well structured, which, well, almost none of them are), and software executives tend to confuse a POC with production code and want to push it out immediately, only to find out that it’s a Potemkin village underneath as they sometimes (or even often) were told the entire time.
So much of the software industry is “JUST GET THIS DONE FASTER DAMMIT!” from middle managers who still seem (despite decades of screaming this) to have developed no widespread means of determining either what they want to get done, or what it would take to get it done faster.
What we have been dealing with the entire time is people that hate to be dependent upon coders or other “nerds”, but need them in order to create products to accomplish their business objectives.
Middle managers still think creating software is algorithmic nerd shit that could be automated…solving the same problems over and over again. It’s largely been my experience that despite even Computer Science programs giving it that image, that the reality is modern coding is more akin to being a millwright. Most of the repetitive, algorithmic nerd shit was settled long ago and can be imported via modules. Imported modules are analogous to parts, and your job is to build or maintain the actual machine that produces the outcomes that are desired, making connecting parts to get the various components to interoperate as needed, repairing failing components, or spotting the shoddy welding between them that is making the current machine fail.
tee9000@lemmy.world
on 04 Oct 2024 15:09
nextcollapse
Its really weird.
I want to believe people arent this dumb but i also dont want to be crazy for suggesting such nonsensical sentiment is manufactured. Such is life in the disinformation age.
Like what are we going to do, tell all Countries and fraudsters to stop using ai because it turns out its too much of a hassle?
FlorianSimon@sh.itjust.works
on 04 Oct 2024 16:53
collapse
We can’t do that, nobody’s saying we can. But this is an important reminder that the tech savior bros aren’t very different from the oil execs.
And constant activism might hopefully achieve the goal of pushing the tech out of the mainstream, with its friend crypto, along other things not to be taken seriously anymore like flying cars and the Hyperloop.
You are speaking for everyone so right away i dont see this as an actual conversion, but a decree of fact by someone i know nothing about.
What are you saying is an important reminder? This article?
By constant activism, do you mean anything that occurs outside of lemmy comments?
Why would we not take LLMs seriously?
FlorianSimon@sh.itjust.works
on 04 Oct 2024 17:33
collapse
I’m talking about people criticizing LLMs. I’m not a politician. But I’ve seen a few debates about LLMs on this platform, enough to know about the common complaints against ShitGPT. I’ve never seen anyone on this platform seriously arguing for a ban. We all know it’s stupid and that it will be ineffective, just like crackdowns on VPNs in authoritarian countries.
The reminder is the tech itself. It’s yet another tech pushed by techbros to save the world that fails to deliver and is costing the rest of the planet dearly in the form of ludicrous energy consumption.
And by activism, I mean stuff happening on Lemmy as well as outside (coworkers, friends, technical people at conferences/meetups). Like it or not, the consensus among techies in my big canadian city is that, while the tech sure is interesting, it’s regarded with a lot of mistrust.
You can take LLMs seriously if you’d like. But the proofs that the tech is unsound for software engineering keep piling up. I’m fine with your skepticism. But I think the future will look bleaker and bleaker as times goes by. Not a week goes by without its lot of AI fuckups being reported in the press. This article is one of many examples.
Theres no particular fuck up mentioned by this article.
The company that conducted the study which this article speculates on said these tools are getting rapidly better and that they arent suggesting to ban ai development assistants.
Also as quoted in the article, the use of these coding assistance is a process in and of itself. If you arent using ai carefully and iteratively then you wont get good results with current models. How we interact with models is as important as the model’s capability. The article quotes that if models are used well, a coder can be faster by 2x or 3x. Not sure about that personally… seems optimistic depending on whats being developed.
It seems like a good discussion with no obvious conclusion given the infancy of the tech. Yet the article headline and accompanying image suggest its wreaking havoc.
Reduction of complexity in this topic serves nobody. We should have the patience and impartiality to watch it develop and form opinions independently from commeter and headline sentiment. Groupthink has been paricularly dumb on this topic from what ive seen.
FlorianSimon@sh.itjust.works
on 04 Oct 2024 19:42
collapse
Nobody talked about banning them, once again. I don’t want to do that. I want it to leave the mainstream, for environmental reasons first and foremost.
The fuckup is, IDK, the false impression of productivity, and the 41% more bugs? That seems like a huge deal to me, even though I’d like to see this study being reproduced to draw real conclusions.
This, with strawberrries, Air Canada’s chatbots, the 3 Miles Island stuff, the delaying of Google’s carbon neutrality efforts, the cursed Google results telling you to add glue to your pizza, the distrust of the general public about anything with an AI label on it, to mention just a few examples… It’s starting to become a lot.
Even if you omit the ethical aspects of cooking the planet for a toy, the technology is wildly unsound. You seem to think it can get better, and I can respect that. But I’m very skeptical, and there’s a lot of people with the same opinion, even in tech.
YungOnions@sh.itjust.works
on 04 Oct 2024 15:26
collapse
Typical lack of nuance on the Internet, sadly. Everything has to be Bad or Good. Black or White. AI is either The best thing ever™ or The worst thing ever™. No room for anything in between. Considering negative news generates more clicks, you can see why the media tend to take the latter approach.
I also think much of the hate is just people jumping on the AI = bad band-wagon. Does it have issues? Absolutely. Is it perfect? Far from it. But the constant negativity has gotten tired. There’s a lot of fascinating discussion to be had around AI, especially in the art world, but God forbid you suggest it’s anything but responsible for the total collapse of civilisation as we know it…
TrickDacy@lemmy.world
on 04 Oct 2024 15:53
nextcollapse
I think you nailed it with everything you just said.
FlorianSimon@sh.itjust.works
on 04 Oct 2024 16:48
collapse
If it didn’t significantly contribute to the cooking of all lifeforms on planet Earth, most of us would not mind. We would still deride it because of its untrustworthiness. However, it’s not just useless: it’s also harmful. That’s the core of the beef I (and a lot of other folks) have against the tech.
YungOnions@sh.itjust.works
on 04 Oct 2024 16:55
nextcollapse
Oh for sure. How we regulate AI (including how we power it) is really important, definitely.
a_wild_mimic_appears@lemmy.dbzer0.com
on 04 Oct 2024 20:25
collapse
cooking of all lifeforms on planet Earth
the core of the beef
yum lifeform beef stew
Grandwolf319@sh.itjust.works
on 04 Oct 2024 02:21
nextcollapse
Yep, by definition generative AI gets worse the more specific you get. If you need common templates though, it’s almost as good as today’s google.
mint_tamas@lemmy.world
on 04 Oct 2024 05:41
collapse
… which is not a high bar.
eager_eagle@lemmy.world
on 04 Oct 2024 02:26
nextcollapse
lol Uplevel’s “”“full report”“” saying devs using Copilot create 41% more bugs has 2 pages and reads like a promotional material.
you can download it with a 10 minute email if you really want to see for yourself.
just some meaningless numbers.
VonReposti@feddit.dk
on 04 Oct 2024 05:03
nextcollapse
While I am not fond of AI, we do have access to it at work and I must admit that it saves some time in some cases. I’m not a developer with decades of experience in a single language, so something I am using AI to is asking “Is it possible to do a one-liner in language X where it does Y?” It works very well and the code is rarely unusable, but it is still up to my judgement whether the AI came up with a clever use of functions that I didn’t know about or whether it crammed stuff into a single unreadable line.
TheEighthDoctor@lemmy.world
on 04 Oct 2024 07:47
nextcollapse
I’m a penetration tester and it increases my productivity a lot
GreenKnight23@lemmy.world
on 04 Oct 2024 08:11
nextcollapse
as a dental assistant I can also confirm that AI has increased my productivity, checks notes, by a lot.
yikerman@lemmy.world
on 04 Oct 2024 09:07
nextcollapse
I mainly use AI for learning new things. It’s amazing at trivial tasks.
Gonzako@lemmy.world
on 04 Oct 2024 09:20
nextcollapse
so it’s a vector of attack?
Tattorack@lemmy.world
on 04 Oct 2024 14:40
collapse
Penetration tester, huh? Sounds like a fun and reproductive job.
TheEighthDoctor@lemmy.world
on 04 Oct 2024 15:23
collapse
But it can be very HARD sometimes
Landless2029@lemmy.world
on 04 Oct 2024 08:02
nextcollapse
Everyone keeps talking about autocomplete but I’ve used it successfully for comments and documentation.
You can use vs code extensions to generate and update readme and changelog files.
Then if you follow documentation as code you can update your Confluence/whatever by copy pasting.
I also use it a lot for unit tests. It helps a lot when you have to write multiple edge cases, and even find new one at times. Like putting a random int in an enum field (enumField = (myEnum)1000), I didn’t knew you could do that…
Landless2029@lemmy.world
on 04 Oct 2024 14:25
nextcollapse
Yeah. I’ve found new logic by asking GPT for improvements on my code or suggestions.
I cut the size of a function in half once using a suggested recursive loop and it blew my mind.
Feels like having a peer to do a code review on hand at all times.
I run code snippets by three or four LLMs and the consensus is never there. Claude has been the worst for me.
CaptSneeze@lemmy.world
on 04 Oct 2024 13:23
collapse
Which one has been best? I’m only a hobbyist, but I’ve found Claude to be my favorite, and the best UI by a mile.
histic@lemmy.dbzer0.com
on 04 Oct 2024 14:06
nextcollapse
Garbage in garbage out is how they all work if you give it a well defined prompt you can get exactly what you want out of it most of the time but if you just say fix this problem it’ll just fix the problem ignoring everything else
zbyte64@awful.systems
on 04 Oct 2024 15:42
collapse
You’re holding it wrong
alienanimals@lemmy.world
on 04 Oct 2024 15:44
collapse
Wait till this guy learns how a theremin work.
Choco1ateCh1p@lemmy.world
on 04 Oct 2024 14:35
nextcollapse
Every now and then, GitHub Copilot saves me a few seconds suggesting some very basic solution that I am usually in the midst of creating. Is it worth the investment? No, at least not yet. It hasn’t once “beaten” me or offered an improved solution. It (more frequently than not) requires the developer to understand and modify what it proposes for its suggestions to be useful. Is is a useful tool? Sure, just not worth the price yet, and obviously not perfect. But, where I’m working is testing it out, so I’ll keep utilizing it.
ggppjj@lemmy.world
on 04 Oct 2024 15:23
nextcollapse
It introduced me to the basics of C# in a way that traditional googling at my previous level of knowledge would’ve made difficult.
I knew what I wanted to do and I didn’t know what was possible or how to ask without my question being closed as a duplicate with a link to an unhelpful post.
In that regard, it’s very helpful. If I had already known the language well enough, I can see it being less helpful.
Semi_Hemi_Demigod@lemmy.world
on 04 Oct 2024 15:38
nextcollapse
This is what I’ve used it for and it’s helped me learn, especially because it makes mistakes and I have to get them to work. In my case it was with Terraform and Ansible.
Haha, yeah. It really loves to refactor my code to “fix” bracket list initialization (e.g. List<string> stringList = [];) because it keeps not remembering that the syntax has been valid for a while.
It’s newest favorite hangup is to incessantly suggest null checks without asking if it’s a nullable property that it’s checking first. I think I’m almost at the point where it’s becoming less useful to me.
UnderpantsWeevil@lemmy.world
on 04 Oct 2024 16:08
nextcollapse
Great for Coding 101 in a language I’m rusty with or otherwise unfamiliar.
Absolutely useless when it comes time to optimize a complex series of functions or upgrade to a new version of the .NET library. All the “AI” you need is typically baked into Intellisense or some equivalent anyway. We’ve had code-assist/advice features for over a decade and its always been mid. All that’s changed is the branding.
ByteOnBikes@slrpnk.net
on 04 Oct 2024 18:34
nextcollapse
I learned bash thanks to AI!
For years, all I did was copy and paste bash commands. And I didn’t understand arguments, how to chain things, or how it connects.
HighlyRegardedArtist@lemmy.world
on 04 Oct 2024 20:34
collapse
You do realize that a very thorough manual is but a man bash away? Perhaps it’s not the most accessible source available, but it makes up for that in completeness.
I believe accessibility is the part that makes LLMs helpful, when they are given an easy enough task to verify. Being able to ask a thing that resembles a human what you need instead of reading through possibly a textbook worth of documentation to figure out what is available and making it fit what you need is fairly powerful.
If it were actually capable of reasoning, I’d compare it to asking a linguist the origin of a word vs looking it up in a dictionary. I don’t think anyone disagrees that the dictionary would be more likely to be fully accurate, and also I personally would just prefer to ask the person who seemingly knows and, if I have reason to doubt, then go back and double-check.
Here’s the manpage for bash’s statistics from wordcounter.net:
<img alt="" src="https://lemmy.world/pictrs/image/3601b48a-54e3-4a03-8271-cf02be985e29.jpeg">
HighlyRegardedArtist@lemmy.world
on 04 Oct 2024 21:23
collapse
Perhaps LLMs can be used to gain some working vocabulary in a subject you aren’t familiar with. I’d say anything more than that is a gamble, since there’s no guarantee that hallucinations have not taken place. Remember, that to spot incorrect info, you need to already be well acquainted with the matter at hand, which is at the polar opposite of just starting to learn the basics.
I do try to keep the “unknown unknowns” problem in mind when I use it, and I’ve been using it far less as I latched on to how OOP actually works and built up the lexicon and my own preferences. I try to only ask it for high-level stuff that I can then use to search the wider (hopefully more human) internet more traditionally with. I fully appreciate that it’s nothing more than a very incredibly fancy auto-completion engine and the basic task of auto-complete just so happens to appear intelligent as it gets better and more complex but continues to lack any form of real logical thoughts.
turmacar@lemmy.world
on 04 Oct 2024 22:09
collapse
Even with amazing documentation, it can be hard to find the thing you’re looking for if you don’t know the right phrasing or terminology yet. It’s easily the most usable thing I’ve seen come out of “AI”, which makes sense. Using a Language Model to parse language is a very literal application.
The person I replied to was talking about learning the basics of a language… This isn’t about searching for something specific, this is about reading the very basic introduction to a language before trying to Google your way through it. Avoiding the basic documentation is always a bad idea. Replacing it with the LLMed version of the original documentation probably even more so.
alienanimals@lemmy.world
on 04 Oct 2024 15:42
nextcollapse
The writer has a clear bias and a lack of a technical background (writing for Techies.com doesn’t count) .
You don’t have to look hard to find devs saving time and learning something with AI coding assistants. There are plenty of them in this thread. This is just an opinion piece by someone who read a single study.
if you are already competent and you are aware that it doesn’t necessarily give you correct information, the it is really helpful. I know enough to sense when it is making shit up. Also it is, for some scenarios, faster and easier then looking at a documentation. I like it personally. But it will not replace competent developers anytime soon.
This opinion is a breath of fresh air compared to the rest of tech journalism screaming “AI software engineer” after each new model release.
technocrit@lemmy.dbzer0.com
on 04 Oct 2024 16:25
nextcollapse
I’m fine with searching stack exchange. It’s much more useful. More info, more options, more understanding.
asmodee59@lemmy.world
on 04 Oct 2024 17:19
nextcollapse
Who are those guys they keep asking this question over and over ?
And how are they not able to use such a simple tool to increase their productivity ?
filister@lemmy.world
on 04 Oct 2024 17:37
nextcollapse
To be honest ChatGPT pretty much killed the fun of programming.
ByteOnBikes@slrpnk.net
on 04 Oct 2024 18:35
collapse
What?
filister@lemmy.world
on 05 Oct 2024 03:45
collapse
Programming was like a challenge, you have a problem and you need to solve it. You look into the internet, stack overflow, test different chunks of codes, reading documentation, etc. nowadays is simply splitting one problem into pieces, and then copy pasting.
Zoots@lemmy.world
on 04 Oct 2024 17:43
nextcollapse
Judging this article by it’s title (refuse to click). Calling BS. ChatGPT has been a game changer for me personally
daniskarma@lemmy.dbzer0.com
on 04 Oct 2024 19:03
nextcollapse
It has some uses.
But I’m waiting for a good self hosted model and to have a more powerful gpu to properly run it.
BigBenis@lemmy.world
on 04 Oct 2024 20:33
nextcollapse
It’s great as essentially a StackOverflow that I can talk to in real time. But as with SO, I’ve still got to figure out what pieces are legit and where they go.
RagingRobot@lemmy.world
on 04 Oct 2024 21:48
collapse
AI search results made stack overflow answers harder to find now lol
turmacar@lemmy.world
on 04 Oct 2024 22:12
collapse
It’s definitely exploded but content farms were a problem even before 2022. There’s a reason google results starting with “reddit” / “stack overflow” were trending so hard.
sirico@feddit.uk
on 04 Oct 2024 22:17
nextcollapse
It’s just fancier spell check and boilerplate generator
rsuri@lemmy.world
on 05 Oct 2024 01:56
nextcollapse
I use it occasionally. Recently I used it to convert a written specification in a document to a java object. And it was like 95% correct - but having to manually double check everything and fix the errors eliminated much of the time savings.
However that’s a very ideal use case. Most often I forget it exists.
rolaulten@startrek.website
on 05 Oct 2024 05:04
collapse
I use it a fair bit.
Mind, it’s something like formating a giant json stdout into something I want to read…
I also do find it’s useful for sketching out an outline In pseudo code.
ulkesh@lemmy.world
on 05 Oct 2024 02:22
nextcollapse
No shit. Senior devs have been saying this the whole time. AI, in its current form, for developers, is like handing a spatula to a gourmet chef. Yes it is useful to an extremely small degree, but that’s it…for now.
finitebanjo@lemmy.world
on 05 Oct 2024 04:10
nextcollapse
A convoluted spatula that sometimes accidentally cuts what your cooking im half instead of flipping it and consumes as much power as the entirety of Japan.
interdimensionalmeme@lemmy.ml
on 05 Oct 2024 10:58
collapse
It’s when you only have a pot and your fingers that a spatula is awesome.
I could never bother finish learning C and its awkward syntax. Even though I know how to code in some other language, I just couldn’t write much C at all and it was painful and slow. And too much time passed between attempts that I forgot most of it in between. Now I can easily make simple C apps, I just explain the underlying logic, with example of how I would do it in my preferred language and piece by piece it quickly comes together and I don’t have to remember if the for loop needs brackets of parenthesis or brackets nor if the line terminator is colon or semi colon.
The problem is that you’re still not learning, then. Maybe that’s your goal, and if so, no worries, but AI is currently a hammer that everyone seems to be falling over themselves finding nails for.
All I can do is sigh and shake my head. This bubble will burst, and AI will still take decades to get to the point people think it is already at.
interdimensionalmeme@lemmy.ml
on 06 Oct 2024 03:00
collapse
Au contraire, not only you quickly learn the grab bag of strategy and tricks of the “average programmers” and their default solutions, you no longer get bogged down in the menial wrangling of compiler syntax.
That is IF you actually read, debug and implement this code as part of a larger system.
Of course if it “just works” and you don’t read how it works then you just get a working tool, but don’t really learn how it works inside. Kind of like those people who just drive cars but never did replace their crank bearings and transmission clutch packs
If you do interact with the code I think it will quickly elevate a newbie to a mediocre but capable programmer.
Progressing beyond that is like stepping out and walking after driving for days.
LordCrom@lemmy.world
on 05 Oct 2024 03:41
nextcollapse
I get more benefit from a good IDE that helps me track libraries, cars, functions, grammar checks my code, offers a pop-up with params and options…
I don’t needcode I would grade as a D- from an AI. Most of what I write comes from my code closet anyway. I have skeleton code for so much, and I trust my old code more than AIs new code
I partly disagree, complex algorithms are indeed a no, but for learning a new language it is awesome.
Currently learning Rust and although it cannot solve everything, it does guide you with suggestions and usable code fragments.
Highly recommended.
asdfasdfasdf@lemmy.world
on 05 Oct 2024 23:59
nextcollapse
Is there anything it provided you so far that was better than the guidance from the Rust compiler errors themselves? Every error ends with “run this command for a tutorial on why this error happened and how to fix it” type of info. A lot of times the error will directly tell you how to fix it too.
I agree, although some messages are still cryptic for a newbie like me, but thats maybe more the person on the chair than the compiler 😇.
I’d estimate copilot to be correct in only 10% of the time, solving a situation like that. Most of the time the solution suggested is also wrong, but just differently.
Having said that: sometimes (small chance, 1% maybe) the solution is spot on.
AI mainly helps with the initial syntax and on language constructs and for that it is awesome.
WhyJiffie@sh.itjust.works
on 07 Oct 2024 00:35
collapse
Currently learning Rust and although it cannot solve everything, it does guide you with suggestions and usable code fragments.
threaded - newest
I’m shocked. There must be an error in this analysis. /s
Maybe engage an AI coding assistant to massage the data analysis lol
Places GPT-based “AI” next to flying cars
Flying cars exist, they’re just not cost effective. AFAICT there’s no GPT that is proficient at coding yet.
It’s a lot easier to access ChatGPT than it is to access a flying car
The more people using chatgpt to generate low quality code they don't understand, the more job safety and greater salary I get.
As far as I know, right now the main problem with flying cars is that they are nowhere near as idiot-proof as a normal car, and don’t really solve any transportation problem since most countries’ air regulations agencies would require them to exclusively take off and land in airports… Where you can usually find tons of planes that can go much further (and are much more cost effective, as you pointed out)
For me, it is a glorified auto-complete function. Could definitely live without it.
Same for me, but that glorified auto complete helps a lot.
Hell yea. Our unit test coverage went way up because you can blow through test creation in second. I had a large complicated migration from one data set to another with specific mutations based on weird rules and GPT got me 80% of the way there and with a little nudging basically got it perfect. Code that would’ve taken a few hours took about 6 prompts. If I’m curious about a new library I can get a working example right away to see how everything fits together. When these articles say there’s no benefit I feel people aren’t using these tools or don’t know how to use them effectively.
Yeah, it’s useful, you just gotta keep it on a short leash, which is difficult when you don’t know what you’re doing
Basically, it’s a useful tool for experienced developers that know what to look out for
From the combined comments it looks like if you are a beginner or a pro then it’s great; if you only have just enough knowledge to be dangerous (in german that’s proverbial “gefährliches Halbwissen”) you should probably stay away from it :-)
We always have to ask what language is it auto-completing for? If it is a strictly typed language, then existing tooling is already doing everything possible and I see no need for additional improvement. If it is non-strictly typed language, then I can see how it can get a little more helpful, but without knowledge of actual context I am not sure if it can get a lot more accurate.
And yet, higher ups continue to lay off more devs because AI “is the future”.
In my experience, most of the tech layoffs have been non-devs. PMs and Designers have been the hardest hit and often their roles are being eliminated.
I mean, I’m a dev who got laid off almost a year ago and still can’t find anything. I know tons of others who are in similar positions. So…
Good devs gain little.
I gain a lot.
Feel the same way!
Its basically a template generator, which is really helpful when you’re generating boilerplate. It doesn’t save me much if any time to refactor/fill in that template, but it does save some mental fatigue that I can then spend on much more interesting problems.
It’s a niche tool, but occasionally quite handy. Without leaps forward technically though, it’s never going to become more than that.
Just beware, sometimes the AI suggestions are scary good, some times they’re batshit crazy.
Just because AI suggests it, doesn’t mean it’s something you should use or learn from.
Devs that are punching above their class, however, probably get great benefit from it. I would think it’s also an OK learning tool, except for how inaccurate it can be sometimes.
I like to use suggestions to feel superior when trash talking the generated code
My main use is skipping the blank page problem when writing a new suite of tests—which after about 10 mins of refactoring are often a good starting point
I honestly stopped using it after a week
Generative AI is great for loads of programming tasks like helping create regular expressions or syntax conversions between languages. The main issue I’ve seen in codebases that rely heavily on generative AI is that the “solutions” often fix today’s bug while making future debugging more difficult. Generative AI makes it easy to go fast in the wrong direction. Used right it’s a useful tool.
I truly don’t understand the tendency of people to hate these kinds of tools. Honestly seems like an ego thing to me.
Carbon footprint. Techbro arrogance. Not sure what’s hard to understand about it.
.
Of course you know me better than myself.
I guess you wanted an answer but decided upfront you weren’t gonna like it no matter what? Not much I can do about that.
.
.
.
Whether or not I’ve used Copilot is entirely irrelevant to the points I was making. Please remain on topic.
.
Also, when a tool increases your productivity but your salary and paid time off don’t increase, it’s a tool that only benefits the overlords and as such deserves to be hated.
.
.
.
.
.
.
.
Oh, so do you use a 13 year old PC because a newer one increases your productivity without increasing your salary and paid time off?
Personally… I do
I could request a new one, but why? This one works, it’s just slow as all hell.
Gives excellent argument for requesting a new one:
I’m paid by the hour, I don’t care
I mean, you’re clearly using them because they still work, not because of a hatred for increasing productivity for the overlords. Your choice was based on reasonable logic, unlike the other guy.
I use a 13 year old PC because a newer one will be infected with Windows 11. (The company refuses to migrate to Linux because some of the software they use isn’t compatible.)
Having to deal with pull requests defecated by “developers” who blindly copy code from chatgpt is a particularly annoying and depressing waste of time.
At least back when they blindly copied code from stack overflow they had to read through the answers and comments and try to figure out which one fit their use case better and why, and maybe learn something… now they just assume the LLM is right (despite the fact that they asked the wrong question and even if they had asked the right one it’d’ve given the wrong answer) and call it a day; no brain activity or learning whatsoever.
That is not a problem with the ai software, that’s a problem with hiring morons who have zero experience.
No. LLMs are very good at scamming people into believing they’re giving correct answers. It’s practically the only thing they’re any good at.
Don’t blame the victims, blame the scammers selling LLMs as anything other than fancy but useless toys.
Did you get scammed by the LLM? If not, what’s the difference between you and the dev you mentioned?
I was lucky enough to not have access to LLMs when I was learning to code.
Plus, over the years I’ve developed a good thick protective shell (or callus) of cynicism, spite, distrust, and absolute seething hatred towards anything involving computers, which younger developers yet lack.
Sorry, you misunderstood my comment, which was very badly worded.
I meant to imply that you, an experienced developer, didn’t get “scammed” by the LLM, and that the difference between you and the dev you mentioned is that you know how to program.
I was trying to make the point that the issue is not the LLM but the developer using it.
And I’m saying that I could have been that developer if I were twenty years younger.
They’re not bad developers, they just haven’t yet been hurt enough to develop protective mechanisms against scams like these.
They are not the problem. The scammers selling the LLM’s as something they’re not are.
Ah, gotcha, and I agree
.
I sent a PR back to a Dev five times before I gave the work to someone else.
they used AI to generate everything.
surprise, there were so many problems it broke the whole stack.
this is a routine thing this one dev does too. every PR has to be tossed back at least once. not expecting perfection, but I do expect it to not break the whole app.
Like I told another person ITT, hiring terrible devs isn’t something you can blame on software.
that depends on your definition of what a “terrible dev” is.
of the three devs that I know have used AI, all we’re moderately acceptable devs before they relied on AI. this formed my opinion that AI code and the devs that use it are terrible.
two of those three I no longer work with because they were let go for quality and productivity issues.
so you can clearly see why my opinion of AI code is so low.
I would argue that it’s obvious if someone doesn’t know how to use a tool to do their job, they aren’t great at their job to begin with.
Your argument is to blame the tool and excuse the person who is awful with the tool.
my argument is that lazy devs use the tool because that’s what it was designed for.
just calling a hammer a hammer.
Some tools deserve blame. In the case of this, you’re supposed to use it to automate away certain things but that automation isn’t really reliable. If it has to be babysat to the extent that I certainly would argue that it does, then it deserves some blame for being a crappy tool.
If, for instance, getter and setter generating or refactor tools in IDEs routinely screwed up in the same ways, people would say that the tools were broken and that people shouldn’t use them. I don’t get how this is different just because of “AI”.
Okay, so if the tool seems counterproductive for you, it’s very assuming to generalize that and assume it’s the same for everyone else too. I definitely do not have that experience.
It’s not about it being counterproductive. It’s about correctness. If a tool produces a million lines of pure compilable gibberish unrelated to what you’re trying to do, from a pure lines of code perspective, that’d be a productive tool. But software development is more complicated than writing the most lines.
Now, I’m not saying that AI tools produce pure compilable gibberish, but they don’t reliably produce correct code either. So, they fall somewhere in the middle, and similarly to “driver assistance” technologies that half automate things but require constant supervision, it’s quite possible that the middle is the worst area for a tool to fall into.
Everywhere around AI tools there are asterisks about it not always producing correct results. The developer using the tool is ultimately responsible for the output of their own commits, but the tool itself shares in the blame because of its unreliable nature.
Copilot produces useful and correct code for me 5 days a week. I’m sorry you don’t see the same benefits.
.
.
.
.
.
Have you read the article? It’s a shared experience multiple people report, and the article even provides statistics.
.
You can bury your head under the sand all you want. Meanwhile, the arguments proving the tech “flimsy af” will keep piling up.
cio.com (which I’ve totally heard of before) – the forefront of objective reality and definitely not rage-clickbait
Using a tool to speed up your work is not lazy. Using a tool stupidly is stupid. Anyone who thinks these tools are meant to replace humans using logic is misunderstanding them entirely.
You remind me of some of my coworkers who would rather do the same mind numbing task for hours every day rather than write a script that handles it. I judge them for thinking working smarter is “lazy” and I think it’s a fair judgement. I see them as the lazy ones. They’d rather not think more deeply about the scripting aspect because it’s hard. They rather zone out and mindlessly click, copy/paste, etc. I’d rather analyze and break down the problem so I can solve it once and then move onto something more interesting to solve.
From what I’ve seen of AI code in my time using it, it often is an advanced form of copying and pasting. It frequently takes problems that could be better solved more efficiently with fewer lines of code or by generalizing the problem and does the (IMO evil) work of making the solution that used to require the most drudgery easy.
.
Why are you typing so much in the first place?
Software development for me is not a term paper. I once encountered a piece of software in industry that was maintaining what would be a database in any sane piece of software using a hashmap and thousands of lines of code.
AI makes software like this easier to write without your eyes glazing over, but it’s been my career mission to stop people from writing this type of software in the first place.
.
.
.
Lol, it couldn’t determine the right amount of letters in the word strawberry using its training before. I’m not criticizing the training data. I’m criticizing a tool and its output.
It’s amusing to me that at first it’s “don’t blame the tool when it’s misused” and now it’s “the tool is smarter than any individual dev”. So which is it? Is it impossible to misuse this tool because it’s standing atop the shoulders of giants? Or is it something that has to be used with care and discretion and whose bad outputs can be blamed upon the individual coders who use it poorly?
Gonna go cry myself to sleep now. I feel so inferior
That’s it. Don’t respond to the points and the obvious contradictions in your bad arguments only explicable by your personal hard on for the tool, just keep shit posting through it instead.
sometimes working smarter is actually putting the work in so you don’t have to waste time and stress about if it’s going to work or not.
I get Dreamweaver vibes from AI generated code. Sure, the website works. looks exactly the way it should. works exactly how it should. that HTML source though… fucking aweful.
I can agree, AI is an augment to the tools you can use. however, it’s being marketed as a replacement and a large variety of devs are using it as such.
shitty devs are enabled by shitty tools.
No, shitty devs are enabled by piss-poor hiring practices. I’m currently working with two devs that submit mind bogglingly bad PRs all of the time, and it’s 100% because we hired them in a hasty manner and overlooking issues they displayed during interviews.
Neither of these bad devs use AI to my knowledge. On the other hand I use copilot constantly and the only difference I see in my work is that it takes me less time to complete a given task. It shaves 1-2 minutes off of writing a block/function several times an hour, and that is a good thing.
so your argument is because shitty devs exist that AI can’t be a shitty tool.
Shitty tools exist. shitty devs exist. allowing AI code generation only serves as an excuse for shitty devs when they’re allowed to use it. “oh sorry, the AI did that.” “man that didn’t work? musta been that new algorithm github updated yesterday.”
shitty workers use shitty tools because they don’t care about the quality and consistency of the product they build.
ever seen a legitimate carpenter use one of these things to build a house?
<img alt="Screenshot_20241004-120218_Firefox" src="https://lemmy.world/pictrs/image/c721b0dc-61a2-4155-9f47-962f8244a511.jpeg">
yeah, you won’t because anything built with that will never pass inspection. shitty tools are used by shitty devs.
could AI code generation get better? absolutely! is it possible to use it today? sure. should you use it? absolutely not.
as software developers we have the power to build and do whatever we want. we have amazing powers that allow us to do that, but rarely do we ever stop to ask if we should do it.
Same. AI seems like yet another attempt at RAD just like MS Access, Visual Basic, Dreamweaver, and even to some extent Salesforce, or ServiceNow. There are so many technologies that champion this…RoR, Django, Spring Boot…the list is basically forever.
To an extent, it’s more general purpose than those because it can be used with multiple languages or toolkits, but I find it not at all surprising that the first usage of gen AI in my company was to push out “POCs” (the vast majority of which never amounted to anything).
The same gravity applies to this tool as everything else in software…which is that prototyping is easy…integration is hard (unless the organization is well structured, which, well, almost none of them are), and software executives tend to confuse a POC with production code and want to push it out immediately, only to find out that it’s a Potemkin village underneath as they sometimes (or even often) were told the entire time.
So much of the software industry is “JUST GET THIS DONE FASTER DAMMIT!” from middle managers who still seem (despite decades of screaming this) to have developed no widespread means of determining either what they want to get done, or what it would take to get it done faster.
What we have been dealing with the entire time is people that hate to be dependent upon coders or other “nerds”, but need them in order to create products to accomplish their business objectives.
Middle managers still think creating software is algorithmic nerd shit that could be automated…solving the same problems over and over again. It’s largely been my experience that despite even Computer Science programs giving it that image, that the reality is modern coding is more akin to being a millwright. Most of the repetitive, algorithmic nerd shit was settled long ago and can be imported via modules. Imported modules are analogous to parts, and your job is to build or maintain the actual machine that produces the outcomes that are desired, making connecting parts to get the various components to interoperate as needed, repairing failing components, or spotting the shoddy welding between them that is making the current machine fail.
Its really weird.
I want to believe people arent this dumb but i also dont want to be crazy for suggesting such nonsensical sentiment is manufactured. Such is life in the disinformation age.
Like what are we going to do, tell all Countries and fraudsters to stop using ai because it turns out its too much of a hassle?
We can’t do that, nobody’s saying we can. But this is an important reminder that the tech savior bros aren’t very different from the oil execs.
And constant activism might hopefully achieve the goal of pushing the tech out of the mainstream, with its friend crypto, along other things not to be taken seriously anymore like flying cars and the Hyperloop.
You are speaking for everyone so right away i dont see this as an actual conversion, but a decree of fact by someone i know nothing about.
What are you saying is an important reminder? This article?
By constant activism, do you mean anything that occurs outside of lemmy comments?
Why would we not take LLMs seriously?
I’m talking about people criticizing LLMs. I’m not a politician. But I’ve seen a few debates about LLMs on this platform, enough to know about the common complaints against ShitGPT. I’ve never seen anyone on this platform seriously arguing for a ban. We all know it’s stupid and that it will be ineffective, just like crackdowns on VPNs in authoritarian countries.
The reminder is the tech itself. It’s yet another tech pushed by techbros to save the world that fails to deliver and is costing the rest of the planet dearly in the form of ludicrous energy consumption.
And by activism, I mean stuff happening on Lemmy as well as outside (coworkers, friends, technical people at conferences/meetups). Like it or not, the consensus among techies in my big canadian city is that, while the tech sure is interesting, it’s regarded with a lot of mistrust.
You can take LLMs seriously if you’d like. But the proofs that the tech is unsound for software engineering keep piling up. I’m fine with your skepticism. But I think the future will look bleaker and bleaker as times goes by. Not a week goes by without its lot of AI fuckups being reported in the press. This article is one of many examples.
Theres no particular fuck up mentioned by this article.
The company that conducted the study which this article speculates on said these tools are getting rapidly better and that they arent suggesting to ban ai development assistants.
Also as quoted in the article, the use of these coding assistance is a process in and of itself. If you arent using ai carefully and iteratively then you wont get good results with current models. How we interact with models is as important as the model’s capability. The article quotes that if models are used well, a coder can be faster by 2x or 3x. Not sure about that personally… seems optimistic depending on whats being developed.
It seems like a good discussion with no obvious conclusion given the infancy of the tech. Yet the article headline and accompanying image suggest its wreaking havoc.
Reduction of complexity in this topic serves nobody. We should have the patience and impartiality to watch it develop and form opinions independently from commeter and headline sentiment. Groupthink has been paricularly dumb on this topic from what ive seen.
Nobody talked about banning them, once again. I don’t want to do that. I want it to leave the mainstream, for environmental reasons first and foremost.
The fuckup is, IDK, the false impression of productivity, and the 41% more bugs? That seems like a huge deal to me, even though I’d like to see this study being reproduced to draw real conclusions.
This, with strawberrries, Air Canada’s chatbots, the 3 Miles Island stuff, the delaying of Google’s carbon neutrality efforts, the cursed Google results telling you to add glue to your pizza, the distrust of the general public about anything with an AI label on it, to mention just a few examples… It’s starting to become a lot.
Even if you omit the ethical aspects of cooking the planet for a toy, the technology is wildly unsound. You seem to think it can get better, and I can respect that. But I’m very skeptical, and there’s a lot of people with the same opinion, even in tech.
Typical lack of nuance on the Internet, sadly. Everything has to be Bad or Good. Black or White. AI is either The best thing ever™ or The worst thing ever™. No room for anything in between. Considering negative news generates more clicks, you can see why the media tend to take the latter approach.
I also think much of the hate is just people jumping on the AI = bad band-wagon. Does it have issues? Absolutely. Is it perfect? Far from it. But the constant negativity has gotten tired. There’s a lot of fascinating discussion to be had around AI, especially in the art world, but God forbid you suggest it’s anything but responsible for the total collapse of civilisation as we know it…
I think you nailed it with everything you just said.
If it didn’t significantly contribute to the cooking of all lifeforms on planet Earth, most of us would not mind. We would still deride it because of its untrustworthiness. However, it’s not just useless: it’s also harmful. That’s the core of the beef I (and a lot of other folks) have against the tech.
Oh for sure. How we regulate AI (including how we power it) is really important, definitely.
yum lifeform beef stew
Yep, by definition generative AI gets worse the more specific you get. If you need common templates though, it’s almost as good as today’s google.
… which is not a high bar.
lol Uplevel’s “”“full report”“” saying devs using Copilot create 41% more bugs has 2 pages and reads like a promotional material.
you can download it with a 10 minute email if you really want to see for yourself.
just some meaningless numbers.
While I am not fond of AI, we do have access to it at work and I must admit that it saves some time in some cases. I’m not a developer with decades of experience in a single language, so something I am using AI to is asking “Is it possible to do a one-liner in language X where it does Y?” It works very well and the code is rarely unusable, but it is still up to my judgement whether the AI came up with a clever use of functions that I didn’t know about or whether it crammed stuff into a single unreadable line.
I’m a penetration tester and it increases my productivity a lot
as a dental assistant I can also confirm that AI has increased my productivity, checks notes, by a lot.
I mainly use AI for learning new things. It’s amazing at trivial tasks.
so it’s a vector of attack?
Penetration tester, huh? Sounds like a fun and reproductive job.
But it can be very HARD sometimes
Everyone keeps talking about autocomplete but I’ve used it successfully for comments and documentation.
You can use vs code extensions to generate and update readme and changelog files.
Then if you follow documentation as code you can update your Confluence/whatever by copy pasting.
I also use it a lot for unit tests. It helps a lot when you have to write multiple edge cases, and even find new one at times. Like putting a random int in an enum field (enumField = (myEnum)1000), I didn’t knew you could do that…
Yeah. I’ve found new logic by asking GPT for improvements on my code or suggestions.
I cut the size of a function in half once using a suggested recursive loop and it blew my mind.
Feels like having a peer to do a code review on hand at all times.
Yeah, I also find it super helpful with unit tests, saves a lot of time.
.
Claude is my coding mentor. Wouldn’t want to work without it.
I run code snippets by three or four LLMs and the consensus is never there. Claude has been the worst for me.
Which one has been best? I’m only a hobbyist, but I’ve found Claude to be my favorite, and the best UI by a mile.
Garbage in garbage out is how they all work if you give it a well defined prompt you can get exactly what you want out of it most of the time but if you just say fix this problem it’ll just fix the problem ignoring everything else
Wait till this guy learns how a theremin work.
Every now and then, GitHub Copilot saves me a few seconds suggesting some very basic solution that I am usually in the midst of creating. Is it worth the investment? No, at least not yet. It hasn’t once “beaten” me or offered an improved solution. It (more frequently than not) requires the developer to understand and modify what it proposes for its suggestions to be useful. Is is a useful tool? Sure, just not worth the price yet, and obviously not perfect. But, where I’m working is testing it out, so I’ll keep utilizing it.
It introduced me to the basics of C# in a way that traditional googling at my previous level of knowledge would’ve made difficult.
I knew what I wanted to do and I didn’t know what was possible or how to ask without my question being closed as a duplicate with a link to an unhelpful post.
In that regard, it’s very helpful. If I had already known the language well enough, I can see it being less helpful.
This is what I’ve used it for and it’s helped me learn, especially because it makes mistakes and I have to get them to work. In my case it was with Terraform and Ansible.
Haha, yeah. It really loves to refactor my code to “fix” bracket list initialization (e.g.
List<string> stringList = [];
) because it keeps not remembering that the syntax has been valid for a while.It’s newest favorite hangup is to incessantly suggest null checks without asking if it’s a nullable property that it’s checking first. I think I’m almost at the point where it’s becoming less useful to me.
Great for Coding 101 in a language I’m rusty with or otherwise unfamiliar.
Absolutely useless when it comes time to optimize a complex series of functions or upgrade to a new version of the .NET library. All the “AI” you need is typically baked into Intellisense or some equivalent anyway. We’ve had code-assist/advice features for over a decade and its always been mid. All that’s changed is the branding.
I learned bash thanks to AI!
For years, all I did was copy and paste bash commands. And I didn’t understand arguments, how to chain things, or how it connects.
You do realize that a very thorough manual is but a
man bash
away? Perhaps it’s not the most accessible source available, but it makes up for that in completeness.I believe accessibility is the part that makes LLMs helpful, when they are given an easy enough task to verify. Being able to ask a thing that resembles a human what you need instead of reading through possibly a textbook worth of documentation to figure out what is available and making it fit what you need is fairly powerful.
If it were actually capable of reasoning, I’d compare it to asking a linguist the origin of a word vs looking it up in a dictionary. I don’t think anyone disagrees that the dictionary would be more likely to be fully accurate, and also I personally would just prefer to ask the person who seemingly knows and, if I have reason to doubt, then go back and double-check.
Here’s the manpage for bash’s statistics from wordcounter.net: <img alt="" src="https://lemmy.world/pictrs/image/3601b48a-54e3-4a03-8271-cf02be985e29.jpeg">
Perhaps LLMs can be used to gain some working vocabulary in a subject you aren’t familiar with. I’d say anything more than that is a gamble, since there’s no guarantee that hallucinations have not taken place. Remember, that to spot incorrect info, you need to already be well acquainted with the matter at hand, which is at the polar opposite of just starting to learn the basics.
I do try to keep the “unknown unknowns” problem in mind when I use it, and I’ve been using it far less as I latched on to how OOP actually works and built up the lexicon and my own preferences. I try to only ask it for high-level stuff that I can then use to search the wider (hopefully more human) internet more traditionally with. I fully appreciate that it’s nothing more than a very incredibly fancy auto-completion engine and the basic task of auto-complete just so happens to appear intelligent as it gets better and more complex but continues to lack any form of real logical thoughts.
What about just reading the documentation?
Even with amazing documentation, it can be hard to find the thing you’re looking for if you don’t know the right phrasing or terminology yet. It’s easily the most usable thing I’ve seen come out of “AI”, which makes sense. Using a Language Model to parse language is a very literal application.
The person I replied to was talking about learning the basics of a language… This isn’t about searching for something specific, this is about reading the very basic introduction to a language before trying to Google your way through it. Avoiding the basic documentation is always a bad idea. Replacing it with the LLMed version of the original documentation probably even more so.
The writer has a clear bias and a lack of a technical background (writing for Techies.com doesn’t count) .
You don’t have to look hard to find devs saving time and learning something with AI coding assistants. There are plenty of them in this thread. This is just an opinion piece by someone who read a single study.
if you are already competent and you are aware that it doesn’t necessarily give you correct information, the it is really helpful. I know enough to sense when it is making shit up. Also it is, for some scenarios, faster and easier then looking at a documentation. I like it personally. But it will not replace competent developers anytime soon.
This opinion is a breath of fresh air compared to the rest of tech journalism screaming “AI software engineer” after each new model release.
I’m fine with searching stack exchange. It’s much more useful. More info, more options, more understanding.
Who are those guys they keep asking this question over and over ? And how are they not able to use such a simple tool to increase their productivity ?
To be honest ChatGPT pretty much killed the fun of programming.
What?
Programming was like a challenge, you have a problem and you need to solve it. You look into the internet, stack overflow, test different chunks of codes, reading documentation, etc. nowadays is simply splitting one problem into pieces, and then copy pasting.
Judging this article by it’s title (refuse to click). Calling BS. ChatGPT has been a game changer for me personally
It has some uses.
But I’m waiting for a good self hosted model and to have a more powerful gpu to properly run it.
It’s great as essentially a StackOverflow that I can talk to in real time. But as with SO, I’ve still got to figure out what pieces are legit and where they go.
AI search results made stack overflow answers harder to find now lol
It’s definitely exploded but content farms were a problem even before 2022. There’s a reason google results starting with “reddit” / “stack overflow” were trending so hard.
It’s just fancier spell check and boilerplate generator
I use it occasionally. Recently I used it to convert a written specification in a document to a java object. And it was like 95% correct - but having to manually double check everything and fix the errors eliminated much of the time savings.
However that’s a very ideal use case. Most often I forget it exists.
I use it a fair bit. Mind, it’s something like formating a giant json stdout into something I want to read…
I also do find it’s useful for sketching out an outline In pseudo code.
No shit. Senior devs have been saying this the whole time. AI, in its current form, for developers, is like handing a spatula to a gourmet chef. Yes it is useful to an extremely small degree, but that’s it…for now.
A convoluted spatula that sometimes accidentally cuts what your cooking im half instead of flipping it and consumes as much power as the entirety of Japan.
It’s when you only have a pot and your fingers that a spatula is awesome. I could never bother finish learning C and its awkward syntax. Even though I know how to code in some other language, I just couldn’t write much C at all and it was painful and slow. And too much time passed between attempts that I forgot most of it in between. Now I can easily make simple C apps, I just explain the underlying logic, with example of how I would do it in my preferred language and piece by piece it quickly comes together and I don’t have to remember if the for loop needs brackets of parenthesis or brackets nor if the line terminator is colon or semi colon.
The problem is that you’re still not learning, then. Maybe that’s your goal, and if so, no worries, but AI is currently a hammer that everyone seems to be falling over themselves finding nails for.
All I can do is sigh and shake my head. This bubble will burst, and AI will still take decades to get to the point people think it is already at.
Au contraire, not only you quickly learn the grab bag of strategy and tricks of the “average programmers” and their default solutions, you no longer get bogged down in the menial wrangling of compiler syntax.
That is IF you actually read, debug and implement this code as part of a larger system.
Of course if it “just works” and you don’t read how it works then you just get a working tool, but don’t really learn how it works inside. Kind of like those people who just drive cars but never did replace their crank bearings and transmission clutch packs
If you do interact with the code I think it will quickly elevate a newbie to a mediocre but capable programmer. Progressing beyond that is like stepping out and walking after driving for days.
I get more benefit from a good IDE that helps me track libraries, cars, functions, grammar checks my code, offers a pop-up with params and options…
I don’t needcode I would grade as a D- from an AI. Most of what I write comes from my code closet anyway. I have skeleton code for so much, and I trust my old code more than AIs new code
I partly disagree, complex algorithms are indeed a no, but for learning a new language it is awesome.
Currently learning Rust and although it cannot solve everything, it does guide you with suggestions and usable code fragments.
Highly recommended.
Is there anything it provided you so far that was better than the guidance from the Rust compiler errors themselves? Every error ends with “run this command for a tutorial on why this error happened and how to fix it” type of info. A lot of times the error will directly tell you how to fix it too.
I agree, although some messages are still cryptic for a newbie like me, but thats maybe more the person on the chair than the compiler 😇.
I’d estimate copilot to be correct in only 10% of the time, solving a situation like that. Most of the time the solution suggested is also wrong, but just differently.
Having said that: sometimes (small chance, 1% maybe) the solution is spot on.
AI mainly helps with the initial syntax and on language constructs and for that it is awesome.
as does the compiler and the rust book
I use it as second last resort, and in those times, it did worked out. I had to test, verify, and make changes. Even so, I avoid using them.