If AI is so good at coding - where are the open source contributions? (pivot-to-ai.com)
from HaraldvonBlauzahn@feddit.org to programming@programming.dev on 22 May 18:17
https://feddit.org/post/12897907

#programming


thingsiplay@beehaw.org on 22 May 18:25 next collapse

Mostly closed source, because open source rarely accepts them, as they are often just slop. Just assuming stuff here; I have no data.

joyjoy@lemm.ee on 22 May 19:18 next collapse

And when they contribute to existing projects, their code quality is so bad, they get banned from creating more PRs.

hemko@lemmy.dbzer0.com on 22 May 20:04 next collapse

To be fair, if a competent dev used an AI “auto complete” tool to write their code, I’m not sure it’d be possible to detect those parts as AI code.

I generally dislike those corporate AI tools, but I gave Copilot a try when writing some Terraform and it actually had good suggestions as much as bad ones. However, if I didn’t know the language and the resources I was deploying that well, it’d probably have led me into a deep hole trying to fix the mess after blindly accepting every suggestion.

thingsiplay@beehaw.org on 22 May 20:19 next collapse

They do more than just autocomplete, even in autocomplete mode. These AI tools suggest entire code blocks and logic and fill in multiple lines, compared to a standard autocomplete. And to use it as a standard autocomplete tool, no AI is needed. Using it like that wouldn’t be bad anyway, so I have nothing against it.

The problems arise when the AI takes away the thinking and brain work of the actual programmer. Plus you as a user get used to it and become basically “addicted”. Independent thinking and programming without AI will become harder and harder if you use it for everything.

pinball_wizard@lemmy.zip on 24 May 11:42 collapse

They do more than just autocomplete, even in autocomplete mode. These AI tools suggest entire code blocks and logic and fill in multiple lines,

We know. “Improved autocomplete” is still an accurate (the most accurate) description for what current generation AI can do.

When compared to current autocomplete, AI is a delight. (Though it has a long way to go to improve at not adding stupid bullshit. But I’m confident that will get better.)

When measured against a true intelligence, I know I’m interacting with a newbie or a con man, because there’s no honest reason an informed person would even consider making that comparison.

HaraldvonBlauzahn@feddit.org on 22 May 20:57 collapse

People seem to think that the development speed of any larger and more complex software depends on the speed the wizards can type in code.

Spoiler: This is not the case. Even if a project is a mere 50,000 lines long, one is the solo developer, and one has pretty good or even expert domain knowledge, one spends the major part of the time thinking, perhaps looking up documentation, or talking with people, and the most-used key on the keyboard doesn’t need a Dvorak layout, because it is the “delete” key. In fact, you don’t need to know touch-typing to be a good programmer; what you need is to think clearly and logically and be able to weigh many different options against a variety of complex goals.

Which LLMs can’t.

hemko@lemmy.dbzer0.com on 22 May 21:25 collapse

I don’t think it makes writing code faster; it just may reduce the number of key presses required.

magic_lobster_party@fedia.io on 22 May 20:24 collapse

Creator of curl just made a rant about users submitting AI slop vulnerability reports. It has gotten so bad they will reject any report they deem AI slop.

So there’s some data.

atzanteol@sh.itjust.works on 22 May 18:29 next collapse

Have you used AI to code? You don’t say “hey, write this file” and then commit it as “AI Bot 123 aibot@company.com”.

You start writing a method and get auto-completes that are sometimes helpful. Or you ask the bot to write out an algorithm. Or to copy something and modify it 30 times.

You’re not exactly keeping track of everything the bots did.

eager_eagle@lemmy.world on 22 May 18:47 next collapse

yeah, that’s… one of the points in the article

atzanteol@sh.itjust.works on 22 May 19:09 collapse

I’ll admit I skimmed most of that train wreak of an article - I think it’s pretty generous saying that it had a point. It’s mostly recounts of people complaining about AI. But if they hid something in there about it being remarkably useful in cases but not writing entire applications or features then I guess I’m on board?

HaraldvonBlauzahn@feddit.org on 22 May 20:45 next collapse

Well, sometimes I think the web is flooded with advertising and spam praising AI. For these companies, it makes perfect sense, because billions of dollars have been spent on them and they are trying to cash in before the tides might turn.

But do you know what is puzzling (and you do have a point here)? Many posts that defend AI do not engage in logical argumentation; they argue beside the point, appeal to emotions, use short-circuited reasoning that “new” always equals “better”, or claim that AI is useful for coding as long as the code is not complex (compare that to the objection that mathematics is simple as long as it is not complex, which is a red herring and a laughable argument). So, many thanks for pointing out the above and giving, in few words, a bunch of examples which underline that one has to think carefully about this topic!

atzanteol@sh.itjust.works on 22 May 23:26 collapse

The problem is that you really only see two sorts of articles.

AI is going to replace developers in 5 years!

AI sucks because it makes mistakes!

I actually see a lot more of the latter response on social media to the point where I’m developing a visceral response to the phrase “AI slop”.

Both stances are patently ridiculous though. AI cannot replace developers and it doesn’t need to be perfect to be useful. It turns out that it is a remarkably useful tool if you understand its limitations and use it in a reasonable way.

vrighter@discuss.tchncs.de on 23 May 04:30 next collapse

it’s a car that only explodes once in a blue moon!

XM34@feddit.org on 23 May 11:17 collapse

No, it’s a car that breaks down once you go faster than 60km/h. It’s extremely useful if you know what you’re doing and use it only for tasks that it’s good at.

vrighter@discuss.tchncs.de on 23 May 12:39 collapse

if that’s the analogy you want, make it 20 km/h

XM34@feddit.org on 23 May 14:42 collapse

Yeah, that’s what I thought. Another useless AI hater. You people are even worse than the AI fanboy techbros! AI is a wonderful tool for those who know how to use it. It has increased my productivity by at least 30% and it can do all the mundane and boring coding while I focus on the interesting aspects!

vrighter@discuss.tchncs.de on 24 May 05:26 collapse

did you know 95% of statistics are pulled out of someone’s ass?

XM34@feddit.org on 24 May 21:28 collapse

Just because there’s a percent sign doesn’t mean it’s statistics, smartass. If I finish 4 tickets in the time I usually take to finish 3 tickets, then that’s a roughly 30% efficiency increase. That’s not statistics, it’s just plain old elementary school algebra!

But don’t bother replying. I realize now that this post is occupied by human dregs that will be out of a job within the next 5 years because they refuse to interact with AI at all.

vrighter@discuss.tchncs.de on 25 May 04:41 collapse

i will still have a job unfucking all the ai slop, once the hype dies down

Beldarofremulak@discuss.online on 23 May 23:45 collapse

Don’t forget all these artists and developers are staring unemployment in the face so it’s no wonder they phone it in when they “try” to use AI.

“Make me a program that does this complex thing across many systems… It didn’t work on the first try AI SLOP REEEEEEE!”

Forks suck at eating soup yet are still useful.

sekxpistol@feddit.uk on 24 May 20:02 collapse

Great analogy! Even in this thread there are heaping amounts of copium, with people saying, “Meh, AI will never be able to do my job.”

I fucking promise that in 5 years, AI will be doing the job they have right now. lol

shnizmuffin@lemmy.inbutts.lol on 23 May 00:27 collapse

Hey @dgerard@awful.systems, care to weigh in on this “train wreak [sic] of an article?”

dgerard@awful.systems on 23 May 00:31 collapse

I asked Github Copilot and it added import wreak to .NET, so we’ll get back to you.

zqwzzle@lemmy.ca on 22 May 21:38 next collapse

We could see how the copilot PRs went:

Corngood@lemmy.ml on 22 May 21:59 next collapse

Or to copy something and modify it 30 times.

This seems like a very bad idea. I think we just need more lisp and less AI.
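The Lisp instinct here, abstracting the repetition instead of stamping out thirty edited copies, works in any language. A hypothetical sketch (table names invented) of replacing copy-and-modify with one table-driven loop:

```python
# Hypothetical: rather than asking an AI to copy a snippet and modify it
# 30 times, parameterize the thing that varies and generate it in a loop.
TABLES = ["users", "orders", "invoices"]  # imagine 30 of these

def make_count_query(table: str) -> str:
    """The near-identical snippet, written once."""
    return f"SELECT COUNT(*) FROM {table};"

# One comprehension produces every variant the copy-paste would have.
queries = {t: make_count_query(t) for t in TABLES}
```

Change the snippet once and all thirty variants change with it, which is the whole point of abstracting instead of duplicating.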

atzanteol@sh.itjust.works on 22 May 22:17 next collapse

“Hey AI - Create a struct that matches this JSON document that I get from a REST service”

Bam, it’s done.

Or

“Hey AI - add a schema prefix to all of the tables and insert statements in the SQL script.”
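For the curious, the first prompt’s output is exactly this kind of boilerplate; a hypothetical sketch (invented JSON payload and field names) of what the generated struct amounts to:

```python
import json
from dataclasses import dataclass

# Hypothetical JSON document of the kind a REST service might return.
payload = '{"id": 42, "name": "widget", "price": 9.99, "tags": ["a", "b"]}'

@dataclass
class Product:
    """Struct mirroring the JSON document above, field for field."""
    id: int
    name: str
    price: float
    tags: list[str]

def product_from_json(doc: str) -> Product:
    # json.loads yields a dict; ** unpacks it into the dataclass fields.
    return Product(**json.loads(doc))
```

Tedious to type, trivial to verify, which is why it’s a good fit for generation.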

Zos_Kia@lemmynsfw.com on 22 May 22:21 next collapse

Yeah, integrating APIs has really become trivial with copilots. You just copy-paste the documentation and all the boring stuff is done in the blink of an eye! I love it.

atzanteol@sh.itjust.works on 22 May 22:39 collapse

It’s exactly the sort of “tedious yet not difficult” task that I love it for. Sometimes you need to clean things up a bit but it does the majority of the work very nicely.

brb@sh.itjust.works on 23 May 19:17 collapse

People have such a hate boner for AI here that they are downvoting actual good use of it…

pinball_wizard@lemmy.zip on 24 May 11:34 collapse

Good point.

This is the point that the “AI will do it all” crowd is missing. Current AI doesn’t innovate. Full stop. It copies.

The need for new code written by folks who understand what they’re writing isn’t gone, and won’t go away.

Whether those folks can be AI is an open question.

Whether we can ever create an AI that can actually innovate is an interesting open question, with little meaningful evidence in either direction, today.

Reptorian@programming.dev on 24 May 04:44 collapse

I used it only as a last resort. I verify it before using it. I only used it for like .11% of my project. I would not recommend AI.

atzanteol@sh.itjust.works on 24 May 09:21 collapse

My dude, I verify code other humans write. Do you think I’m not verifying code written by AI?

I highly recommend using AI. It’s much better than a Google search for most things.

teije9@lemmy.blahaj.zone on 22 May 18:38 next collapse

Who makes a contribution as aibot514? No one. People use AI for open source contributions, but more in a ‘fix this bug’ way, not as a fully automated contribution under the name ai123.

lemmyng@lemmy.ca on 22 May 18:59 collapse

Counter-argument: If AI code was good, the owners would create official accounts to create contributions to open source, because they would be openly demonstrating how well it does. Instead all we have is Microsoft employees being forced to use and fight with Copilot on GitHub, publicly demonstrating how terrible AI is at writing code unsupervised.

Lucien@mander.xyz on 22 May 19:03 next collapse

Bingo

freagle@lemmygrad.ml on 22 May 19:04 collapse

Bing. O.

LeFantome@programming.dev on 23 May 05:07 collapse

Big O

XM34@feddit.org on 23 May 11:25 collapse

Yes, that’s exactly the point. AI is terrible at writing code unsupervised, but it’s amazing as a supportive tool for real devs!

Blue_Morpho@lemmy.world on 22 May 19:13 next collapse

If humans are so good at coding, how come there are 8,100,000,000 people and only 1500 are able to contribute to the Linux kernel?

I hypothesize that AI has average human coding skills.

GiorgioPerlasca@lemmy.ml on 22 May 20:52 next collapse

Average drunk human coding skills

deaddigger@lemm.ee on 23 May 03:44 next collapse

Well, according to Microsoft, mildly drunk coders work better.

HaraldvonBlauzahn@feddit.org on 24 May 05:46 collapse

A million drunk monkeys on typewriters can write a work of Shakespeare once in a while!

But who wants to pay $50 for a front-row theater ticket to see a play written by monkeys?

HaraldvonBlauzahn@feddit.org on 23 May 08:23 collapse

The average coder is a junior, due to the explosive growth of the field (similar to how, in some fast-growing nations, the average age is very young). Thus what is average is far below what good code is.

On top of that, good code cannot be automatically identified by algorithms. Some very good codebases might look bad at a superficial level. For example, the code base of LMDB is very different from what common style guidelines suggest, but it is actually a masterpiece which is widely used. And vice versa, it is not difficult to make crappy code look pretty.

XM34@feddit.org on 23 May 11:23 collapse

“Good code” is not well defined, and your example shows this perfectly. LMDB’s codebase is absolutely horrendous if your quality criteria for good code are readability and maintainability. But it’s a perfect masterpiece if your quality criteria are performance and efficiency.

Most modern software should be written with the first two in mind, but for a DBMS, the latter are way more important.

HobbitFoot@thelemmy.club on 22 May 20:25 next collapse

As a dumb question from someone who doesn’t code, what if closed source organizations have different needs than open source projects?

Open source projects seem to hinge a lot more on incremental improvements and change only for the benefit of users. In contrast, closed source organizations seem to use code more to quickly develop a new product or change that justifies money. Maybe closed source organizations are more willing to accept slop code that is bad but can barely work versus open source which won’t?

Phen@lemmy.eco.br on 22 May 20:30 next collapse

There are commercial open source stuff too

schnurrito@discuss.tchncs.de on 22 May 20:52 next collapse

Most software isn’t public-facing at all (neither open source nor closed source); it’s business-internal software which runs a specific business and implements its business logic. So most of the people who are talking about coding with AI are also talking mainly about this kind of business-internal software.

HobbitFoot@thelemmy.club on 22 May 21:40 collapse

Does business internal software need to be optimized?

bignose@programming.dev on 22 May 22:35 collapse

Does business internal software need to be optimized?

Need to be optimised for what? (To optimise is always making trade-offs, reducing some property of the software in pursuit of some optimised ideal; what ideal are you referring to?)

And I’m not clear on how that question is related to the use of LLMs to generate code. Is there a connection you’re drawing between those?

HobbitFoot@thelemmy.club on 23 May 00:42 collapse

So I was trying to make a statement that the developers of AI for coding may not have the high bar for quality and optimization that closed source developers would have, then was told that the major market was internal business code.

So, I asked, do companies need code that runs quickly on the systems that they are installed on to perform their function. For instance, can an unqualified programmer use AI code to build an internal corporate system rather than have to pay for a more qualified programmer’s time either as an internal hire or producing.

bignose@programming.dev on 23 May 06:00 collapse

do companies need code that runs quickly on the systems that they are installed on to perform their function.

(Thank you, this indirectly answers one question: the specific optimisation you’re asking about, it seems, is optimised speed of execution when deployed in production. By stating that as the ideal to be optimised, necessarily other properties are secondary and can be worse than optimal.)

Some do pursue that ideal, yes. For example: many businesses seek to deploy their internal applications on hosted environments where they pay not for a machine instance, but for seconds of execution time. By doing this they pay only when the application happens to be running (on a third-party’s managed environment, who will charge them for the service). If they can optimise the run-time of their application for any particular task, they are paying less in hosting costs under such an agreement.

can an unqualified programmer use AI code to build an internal corporate system rather than have to pay for a more qualified programmer’s time either as an internal hire or producing.

This is a question now about paying for the time spent by people to develop and maintain the application, I think? Which is thoroughly different from the time the application spends running a task. Again, I don’t see clearly how “optimise the application for execution speed” is related to this question.

HobbitFoot@thelemmy.club on 23 May 12:32 collapse

I’m asking if it is worth spending more money on human developers to write code that isn’t slop.

Everyone here has been mentioning costs, but they haven’t been comparing them together to see if the cost of using human developers located in a high cost of living American city is worth the benefits.

HaraldvonBlauzahn@feddit.org on 22 May 21:03 next collapse

When did you last time decide to buy a car that barely drives?

And another thing, there are some tech companies that operate very short-term, like typical social media start-ups of which about 95% go bust within two years. But a lot of computing is very long term with code bases that are developed over many years.

The world only needs so many shopping list apps - and there exist enough of them that writing one is not profitable.

pinball_wizard@lemmy.zip on 24 May 11:46 collapse

And another thing, there are some tech companies that operate very short-term, like typical social media start-ups of which about 95% go bust within two years.

This is a very generous sentence you have made, haha. My observation is that the vast majority of tech companies seem to operate unprofitably (the programming division is pure cost, no measurable financial benefit) and churn out bug-riddled code that never really works correctly.

Netflix was briefly hugely newsworthy in the technology circles because they… Regularly did disaster recovery tests.

Edit: Netflix made news headlines because someone decided that Kevin in IT having a bad day shouldn’t stop every customer from streaming. This made the news.

Our technology “leadership” are, on average, so incredibly bad at computer stuff.

MajorasMaskForever@lemmy.world on 22 May 22:38 next collapse

I’d argue the two aren’t as different as you make them out to be. Both types of projects want a functional codebase, both have limited developer resources (communities need volunteers, businesses have a budget limit), and both can benefit greatly from the development process being sped up. Many development practices that are industry standard today started in the open source world (style guides and version control strategy, to name two heavy hitters), and there’s been some bleed-through from the other direction as well (tool juggernauts like Atlassian having new open source alternatives made directly in response).

No project is immune to bad code, there’s even a lot of bad code out there that was believed to be good at the time, it mostly worked, in retrospect we learn how bad it is, but no one wanted to fix it.

The end goals and purposes are for sure different between community passion projects and corporate financially driven projects. But the way you get there is more or less the same, and that’s the crux of the article’s argument: Historically open source and closed source have done the same thing, so why is this one tool usage so wildly different?

HobbitFoot@thelemmy.club on 23 May 00:46 collapse

Historically open source and closed source have done the same thing, so why is this one tool usage so wildly different?

Because, as noted by another replier, open source wants working code and closed source just want code that runs.

bignose@programming.dev on 22 May 22:43 next collapse

Maybe closed source organizations are more willing to accept slop code that is bad but can barely work versus open source which won’t?

Because most software is internal to the organisation (therefore closed by definition) and never gets compared or used outside that organisation: Yes, I think that when that software barely works, it is taken as good enough and there’s no incentive to put more effort to improve it.

My past year (and more) of programming business-internal applications have been characterised by upper management imperatives to “use Generative AI, and we expect that to make you nerd faster” without any effort spent to figure out whether there is any net improvement in the result.

Certainly there’s no effort spent to determine whether it’s a net drain on our time and on the quality of the result. Which everyone on our teams can see is the case. But we are pressured to continue using it anyway.

dgerard@awful.systems on 23 May 00:34 collapse

Baldur Bjarnason (who hates AI slop) has posited precisely this:

My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.

HobbitFoot@thelemmy.club on 23 May 00:44 collapse

That’s basically my question. If the standards of code are different, AI slop may be acceptable in one scenario but unacceptable in another.

Suoko@feddit.it on 22 May 20:30 next collapse

I created this entirely using mistral/codestral

github.com/suoko/gotosocial-webui

Not real software, but it was done by instructing the AI about the basics of the mother app and the fediverse protocol.

luciole@beehaw.org on 23 May 01:18 next collapse

I think it’s established genAI can spit straightforward toy examples of a few hundred lines. Bungalows aren’t simply big birdhouses though.

Suoko@feddit.it on 23 May 11:07 collapse

Still, they’re just birdhouses with some more infrastructure, and you can read instructions about how to build them.

6nk06@sh.itjust.works on 24 May 12:04 collapse

Empty readme and no comments in the code. It’s useless to anyone who would want to change or fix it. It’s junior’s code and unacceptable in a professional environment.

Suoko@feddit.it on 24 May 21:04 collapse

Done in less than 4 hours while doing other things. It was just a sample, with no clue about Golang.

oakey66@lemmy.world on 22 May 21:24 next collapse

It’s not good because it has no context on what is correct or not. It’s constantly making up functions that don’t exist or attributing functions to packages that don’t exist. It’s often sloppy in its responses because the source code it parrots is some amalgamation of good coding and terrible coding. If you are using this for your production projects, you will likely not be knowledgeable when it breaks, it’ll likely have security flaws, and will likely have errors in it.

tisktisk@piefed.social on 23 May 19:28 next collapse

So you're saying I've got a shot?

Excrubulent@slrpnk.net on 24 May 03:50 collapse

And I’ll keep saying this: you can’t teach a neural network to understand context without creating a generalised context engine, another word for which is AGI.

Fidelity is impossible to automate.

30p87@feddit.org on 22 May 21:41 next collapse

Ask Daniel Stenberg.

andybytes@programming.dev on 22 May 21:42 next collapse

AI is just the lack of privacy, an authoritarian dragnet, remote control over other people’s computers, web scraping, the complete destruction of America’s art scene, the stupidification of America, and copyright infringement, with a sprinkling of baby death.

6nk06@sh.itjust.works on 24 May 11:58 collapse

Don’t forget subscriptions. We were freed by Linux, GCC, and all the open source tools that replaced $1000 proprietary crap. They now have that money again through AI monthly plans.

A lot of people on HackerNews have a $200 monthly subscription to have the privilege to work. It’s crazy.

andybytes@programming.dev on 22 May 21:46 next collapse

My theory is not a lot of people like this AI crap. They just lean into it for the fear of being left behind. Now you all think it’s just gonna fail and it’s gonna go bankrupt. But a lot of ideas in America are subsidized. And they don’t work well, but they still go forward. It’ll be you, the taxpayer, that will be funding these stupid ideas that don’t work, that are hostile to our very well-being.

Prime@lemmy.sdf.org on 23 May 01:45 next collapse

Microsoft is doing this today. I can’t link it because I’m on mobile. It is in dotnet. It is not going well :)

ThirdConsul@lemmy.ml on 23 May 04:29 collapse

Yeah, can’t find anything on dotnet getting poisoned by AI slop, so until you link it, I’ll assume you’re lying.

Walop@sopuli.xyz on 23 May 04:41 collapse

I guess they were referring to this.

old.reddit.com/…/my_new_hobby_watching_ai_slowly_…

HaraldvonBlauzahn@feddit.org on 23 May 08:06 collapse

OMG, this is gold! My neighbor must have wondered why I am laughing so hard…

The “reverse centaur” comment citing Cory Doctorow is so true it hurts: they want people to serve machines and not the other way around. That’s exactly how Amazon’s warehouses work, with workers being paced by factory floor robots.

notannpc@lemmy.world on 23 May 02:57 next collapse

AI is at its most useful in the early stages of a project. Imagine coming to the fucking ssh project with AI slop thinking it has anything of value to add 😂

HaraldvonBlauzahn@feddit.org on 23 May 09:23 collapse

The early stages of a project are exactly where you should think hard and long about what exactly you want to achieve, what qualities you want the software to have, what the detailed requirements are, how you test them, and what the UI should look like. And from that, you derive the architecture.

AI is fucking useless at all of that.

In all complex planned activities, laying the right groundwork and foundations is essential for success. Software engineering is no different. You won’t order a bricklayer apprentice to draw the plan for a new house.

And if your difficulty is a lack of detailed knowledge of a programming language, it might be (depending on the case!) best to write a first prototype in a language you know well, so that your head is free to think about the concerns listed in the first paragraph.

MonkderVierte@lemmy.ml on 23 May 10:02 next collapse

the best approach to write a first prototype in a language you know well

Ok, writing a web browser in POSIX shell using yad now.

HaraldvonBlauzahn@feddit.org on 23 May 11:35 next collapse

writing a web browser in POSIX shell

Not HTML, but the much simpler Gemini protocol: you could have a look at Bollux, a Gemini client written in shell, or at ereandel:

github.com/kr1sp1n/awesome-gemini?tab=readme-ov-f…

ChickenLadyLovesLife@lemmy.world on 23 May 15:14 collapse

I’m going back to TurboBASIC.

ulterno@programming.dev on 23 May 14:18 collapse

AI is only good for the stage when…

AI is only good in case you want to…

Can’t think of anything. Edit: yes, I really tried.
Playing the Devil’s advocate was easier than being AI’s advocate.


I might have said it would be good in case you are pitching a project and want to show some UI stuff, maybe, without having to code anything.
But you know, there are actually specialised tools for that, which UI/UX designers used to show me what I needed to implement.
And when I am pitching UI, I just use pencil and paper, and it is so much more efficient than anything AI, because I don’t need to talk to something to make a mockup just to talk to someone else. I can just draw it in front of the other guy with zero preparation, right as it comes into my mind, and I don’t need to pay for any data center usage. And if I need to go paperless, there are whiteboards/blackboards/greenboards and Inkscape.

After having banged my head trying to explain code to a new developer, so that they can hopefully start making meaningful contributions, I don’t want to be banging my head on something worse than a new developer, hoping that it will output something that is logically sound.

Irelephant@lemm.ee on 23 May 14:36 next collapse

It’s good as a glorified autocomplete.

ulterno@programming.dev on 23 May 14:42 collapse

Except that an autocomplete with simple, lightweight, and appropriate heuristics can actually make your work much easier, and will not make you have to read its output again and again before you can be confident about it.
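For a sense of scale, such a heuristic can fit in a dozen lines; a hypothetical buffer-local completer that ranks identifiers already in the file by frequency:

```python
import re
from collections import Counter

def complete(buffer: str, prefix: str) -> list[str]:
    """Suggest identifiers from the buffer that start with the prefix,
    most frequent first: a classic editor heuristic, no model needed."""
    words = re.findall(r"[A-Za-z_]\w*", buffer)
    counts = Counter(w for w in words if w.startswith(prefix) and w != prefix)
    return [w for w, _ in counts.most_common(3)]

suggestions = complete("total_price = price * qty\nprint(total_price)\n", "tot")
# suggestions == ["total_price"]
```

Deterministic, instant, and it can only suggest names that actually exist in your file.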

Irelephant@lemm.ee on 23 May 15:33 collapse

True, and it doesn’t boil the oceans and poison people’s air.

ChickenLadyLovesLife@lemmy.world on 23 May 15:13 collapse

AI is good for the early stages of a project … when it’s important to create the illusion of rapid progress so that management doesn’t cancel the project while there’s still time to do so.

ulterno@programming.dev on 23 May 15:58 collapse

Ahh, so an outsourced conman computer.

LeFantome@programming.dev on 23 May 05:06 next collapse

Can Open Source defend against copyright claims for AI contributions?

If I submit code to ReactOS that was trained on leaked Microsoft Windows code, what are the legal implications?

proton_lynx@lemmy.world on 23 May 07:35 next collapse

what are the legal implications?

It would be so fucking nice if we could use AI to bypass copyright claims.

piccolo@sh.itjust.works on 24 May 12:52 collapse

“No officer, I did not write this code. I trained AI on copyrighted material and it wrote the code. So I’m innocent.”

General_Effort@lemmy.world on 24 May 13:32 collapse

If I submit code to ReactOS that was trained on leaked Microsoft Windows code, what are the legal implications?

None. There is a good chance that leaked MS code found its way into training data, anyway.

LeFantome@programming.dev on 31 May 19:46 collapse

I am not sure how you arrived at “none” from your second sentence. The second sentence is exactly my point.

Alternatively then, can I just use the Microsoft source code and claim that I got it from AI? That seems to be your point here.

sekxpistol@feddit.uk on 23 May 22:32 next collapse

To be honest, so many of the comments in this thread are just cope.

It’s true that AI isn’t a replacement for good coders… YET.

But it will be. You all can be as mad as you want, publish as many articles about how much ai sucks as you want. but it won’t stop anything from happening.

I say this as someone who has just started to learn to code myself.

The reason you all are mad is because you suddenly feel unsafe and unappreciated. And you’re right.

Ai is still gonna happen though. It will take away a lot of your jobs (especially starting with jr coders just getting into the market). It will lower your pay. You can yell about it, or you can adapt. Sucks, but it is what it is.

Think of it this way: what do you think the market is gonna be like in 5 years? Then 10? Brah, start preparing now. Right fucking now. Cuz it ain’t gonna get easier for you. I promise.

It happened with blue-collar factory workers in the midwest regions of the US because of automation and offshoring. People bitched and tried to stop it. Lots of snooty white-collar workers yelled, “learn to code!” But none of that saved their jobs.

And you guys won’t stop it happening with your jobs either. I don’t like the idea of AI taking over everything either. But it will. Adapt or die.

I’ve just started to learn to code. I am enjoying it. But in no way, shape, or form am I thinking it’s going to lead to a job for me.

EDIT: To copy what someone else said, much better than me:

The idea that AI will some day be good at coding isn’t the issue. The issue is that some people in management think it’s already well on the way to being a good substitute, and they’re trying to do more with fewer coders to everyone’s detriment.

Abnorc@lemm.ee on 23 May 22:47 next collapse

The idea that AI will some day be good at coding isn’t the issue. The issue is that some people in management think it’s already well on the way to being a good substitute, and they’re trying to do more with fewer coders to everyone’s detriment.

sekxpistol@feddit.uk on 24 May 03:30 collapse

100 percent. You said in two sentences what I have been trying to say to others. I think you are 100 percent correct. Management will count on AI long before they actually should. That shortsightedness has always been around and always will be.

bpev@lemmy.world on 24 May 00:10 next collapse

I think the biggest difference between this and blue-collar workers losing their jobs, though, is that the same people losing their jobs are also very well placed to benefit from the technology. Blue-collar workers losing manufacturing jobs couldn’t, because they were priced out of obtaining that manufacturing hardware themselves, but programmers can use AI on an individual basis to augment their production. Not sure what the industry will look like in 10 years, but I feel like there will be plenty of opportunities for people who build digital things.

That being said, people who were looking to be junior developers exactly right now… uhhh… that’s some extremely unlucky timing. I wish you luck.

sekxpistol@feddit.uk on 24 May 03:29 next collapse

Well I’m old, so not looking for a job; I am just learning programming because I want to. But to your point, I am seeing LOTS of developers who have been laid off and are finding another job more challenging than ever before. It’s rough out there and I feel for them.

To copy what someone else in this thread said:

The idea that AI will some day be good at coding isn’t the issue. The issue is that some people in management think it’s already well on the way to being a good substitute, and they’re trying to do more with fewer coders to everyone’s detriment.

bpev@lemmy.world on 24 May 09:44 collapse

Oh layoffs are definitely happening. I’m just not sure if it’s caused by AI productivity gains, or if it’s just the latest excuse (the pandemic, then soft layoffs of “back to office” enforcement, and now AI). Esp since the companies most talking about AI productivity gains are the same companies that benefit from AI adoption…

What I wanted to explain is just that the skills to program actually translate pretty well. At my old company, we used to say “you know someone’s a staff engineer, because they only make PowerPoint presentations and diagrams, and don’t actually write any code”. And those skills directly translate to directing an AI to build the thing you need. The abstracted architect role will probably increase in value, as the typing value decreases.

My biggest concern is probably that AI is currently eating junior dev jobs, since what it excels at is typically the kind of work you’d give to a junior engineer. And I think that more gruntwork kinda tasks are the way that someone develops the higher level skills that are important later; you start to see the kinds of edge cases first hand, so it makes them memorable. But I feel like that might just be a transition thing; many developers these days don’t know bare code down to the 1s and 0s. The abstraction might just move up another level, and people will build more things. At least, this is the optimistic view. 🤷 But I’m an optimistic guy.

sekxpistol@feddit.uk on 24 May 19:55 collapse

My biggest concern is probably that AI is currently eating junior dev jobs, since what it excels at is typically the kind of work you’d give to a junior engineer.

Yeah, def gonna be rough for people graduating from college right now.

kossa@feddit.org on 24 May 05:55 collapse

They could now, because big “AI” companies sell their product at a loss.

The individual programmer is already outpriced when it comes to training those kinds of models themselves. Once the companies want to turn a profit, the just-laid-off worker is outpriced as well. If an LLM can really do as well as a human programmer, who costs 70-100k, nothing stops the LLM provider from charging 35-50k easily. Try to augment your productivity at that price point, especially without a job.
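The pricing logic above can be sketched in a few lines. All the figures here are illustrative assumptions pulled from the ranges mentioned, not real market data:

```python
# Back-of-the-envelope version of the argument: a provider that matches a
# human programmer can undercut the salary and still capture the margin.
dev_salary = 85_000        # assumed midpoint of the 70-100k range above
undercut_factor = 0.5      # assumed: provider undercuts the human by ~half

llm_price = dev_salary * undercut_factor
employer_savings = dev_salary - llm_price

print(f"Provider could charge ~${llm_price:,.0f}/yr")
print(f"Employer still saves ~${employer_savings:,.0f}/yr")
```

The point being: the laid-off worker trying to buy the same capability individually faces a price set against a salary, not against hobbyist budgets.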

I mean, society came through the transformation of the primary and secondary sectors, and we could reap the new productivity gains for the benefit of all, but, alas, here we are at the beginning of a new crisis 😅

bpev@lemmy.world on 24 May 09:30 collapse

mmm so I’ve only used the online models for agent coding, since I’m on a laptop and lack the hardware, but my understanding is that local models like devstral and llama are relatively competitive and can be used on like… a gaming rig? I don’t think they’d be able to push the price that much.

But I don’t disagree that big companies will try their darnedest to.

chaos@beehaw.org on 24 May 01:06 next collapse

Do you think there’s any reason to believe that these tools are going to continue their breakneck progress? It seems like we’ve reached a point where throwing more GPUs and text at these things is not yielding more results, and they still don’t have the problem solving skills to work out tasks outside of their training set. It’s closer to a StackOverflow that magically has the answers to most questions you ask than a replacement for proper software engineering. I know you never know if a breakthrough is around the corner, but it feels like we’ve hit a plateau for the foreseeable future.

sekxpistol@feddit.uk on 24 May 03:28 next collapse

Do you think there’s any reason to believe that these tools are going to continue their breakneck progress?

I do.

And as I mentioned in another comment, it’s not so much that I think AI will do a better job, it’s that I think MANAGEMENT will think AI does a cheaper job. Already many tech people who have been laid off are saying it’s the worst job market they’ve ever seen.

AI sucks. But management is about dollars NOW. They are shortsighted, fall into fads, and they will see the cost savings now as outweighing the long-term problems. I don’t agree with them; I am saying they will do that tho. Even if we don’t agree.

To copy what someone else in this thread said:

The idea that AI will some day be good at coding isn’t the issue. The issue is that some people in management think it’s already well on the way to being a good substitute, and they’re trying to do more with fewer coders to everyone’s detriment.

auraithx@lemmy.dbzer0.com on 24 May 13:50 collapse

Not sure what you mean, we are seeing results at an increasing pace if anything. A lot more complexity going into it than ‘increasing text/GPUs’ though.

arcprize.org/leaderboard

AlphaEvolve recently achieved what you are after.

We also applied AlphaEvolve to over 50 open problems in analysis, geometry, combinatorics and number theory, including the kissing number problem.

In 75% of cases, it rediscovered the best solution known so far.

In 20% of cases, it improved upon the previously best known solutions, thus yielding new discoveries.

AlphaEvolve discovered a new scheduling heuristic for Google’s Borg cluster management system, recovering an average of 0.7% of global compute resources that were previously stranded due to resource fragmentation.

With Google’s annual capital expenditures in the tens of billions, this efficiency translates to hundreds of millions of dollars saved annually.

Mniot@programming.dev on 24 May 01:58 collapse

To be honest, you sound like you’re only just starting to learn to code.

Will coding forever belong to humans? No. Is the current generative-AI technology going to replace coders? Also no.

The reaction you see is frustration because it’s obvious to anyone with decent skill that AI isn’t up to the challenge, but it’s not obvious to people who don’t have that skill and so we now spend a lot of time telling bosses “no, that’s not actually correct”.

Someone else referenced Microsoft’s public work with Copilot. Here’s Copilot making 13 PRs over 5 days, and only 4 ever get merged. You might think “30% success is pretty good!” But compare that with human-generated PRs and you can see that 30% fucking sucks. And that’s not even looking inside the PR, where the bot wastes everyone’s time making tons of mistakes. It’s just a terrible coworker, and instead of getting fired they’re getting an award for top performer.

sekxpistol@feddit.uk on 24 May 03:26 next collapse

To be honest, you sound like you’re only just starting to learn to code.

I definitely am. But I have no doubt that AI is going to take a lot of entry-level-type jobs soon, and eventually higher-end jobs.

We’ll always need good, smart coders. Just not as many as we have now.

but it’s not obvious to people who don’t have that skill and so we now spend a lot of time telling bosses “no, that’s not actually correct”.

I get it. But those clueless people are gonna be the people in charge of hiring, and they’ll decide to hire less and expect current staff to do more. I’ve seen it hundreds of times across industries, and it’s already happening now in yours.

For context, I’m old. So I’ve seen your arguments in many different industries.

And to your point, they’ll have AI replacing good people long before AI is good enough to do so. But you’re approaching the issue with logic. Corporate lacks a lot of logic.

I’m already seeing it in your industry. Plenty of Reddit/Lemmy posts talking about how coders have been laid off and are having a much, much more difficult time getting another job than at any point in their careers.

Again, I’m not saying AI is a good solution. I’m saying management will think that. Just like they did when they offshored jobs to much less skilled, yet way more inexpensive workers.

To copy what someone else in this thread said:

The idea that AI will some day be good at coding isn’t the issue. The issue is that some people in management think it’s already well on the way to being a good substitute, and they’re trying to do more with fewer coders to everyone’s detriment.

Mniot@programming.dev on 24 May 04:42 next collapse

I don’t understand how you think this works.

If I say, “now we have robots that can build a car from scratch!” the automakers will be salivating. But if my robot actually cannot build a car, then I don’t think it’s going to cause mass layoffs.

Many of the big software companies are doing mass layoffs. It’s not because AI has taken over the jobs. They always hired extra people as a form of anti-competitiveness. Now they’re doing layoffs to drive salaries down. That sucks and tech workers would be smart to unionize (we won’t). But I don’t see any radical shift in the industry.

HaraldvonBlauzahn@feddit.org on 24 May 09:05 next collapse

A big part of the changed software job market in the US is caused by the rise of interest rates, and in consequence a large part of high-risk venture capital money drying up. This was financing a lot of start-ups without any solid product or business model. And this began very clearly before the AI hype.

The trope that AI is actually replacing jobs is a lie that AI companies want you to believe.

piccolo@sh.itjust.works on 24 May 12:56 collapse

Companies are also using AI to mask layoffs. “Yeah we dont need as many employees because AI can do their jobs better. Please shareholders, buy more stock and ignore our numbers!”

sekxpistol@feddit.uk on 24 May 19:57 collapse

I don’t understand how you think this works.

Do you think I am the only one that thinks like this? You don’t think middle and upper management thinks like I do?

But I don’t see any radical shift in the industry.

Oh, I’m saving this comment. Dude, go into any CSjobs forum and you tell me that there’s not a shift in the industry. lol

I’ll say this. I hope you’re right. (but you’re not)

HaraldvonBlauzahn@feddit.org on 24 May 05:22 collapse

If you walk around in my city and open your eyes, you will see that half of the bars and restaurants are closed because there is a shortage of even unskilled staff and restaurants didn’t pay enough to people. They now work in other sectors.

And yes, software developers are leaving jobs with unreasonable demands and shitty work conditions. Not least because preserving mental health is more important. Go, for example, to the news.ycombinator.com forum and just search for the keyword “burnout”. That’s becoming a massive problem for companies, because rising complexity is not matched by adequate organizational practices.

And AI is not going to help with that - it is already massively increasing technical debt.

HaraldvonBlauzahn@feddit.org on 24 May 05:06 next collapse

It’s the Dunning-Kruger effect.

And it’s fostered by a massive amount of spam and astroturfing coming from “AI” companies, lying that LLMs are good at this or that. Sure, algorithms like neural networks can recognize patterns. Algorithms like backtracking can play chess or solve and transform algebraic equations. But these are not LLMs, and LLMs will not and cannot replace software engineering.

Sure, companies want to pay less for programming. But they don’t pay for software developers to generate some gibberish in source code syntax; they need working code. And this is why software engineers and good programmers will not only remain scarce but will become even scarcer.

And companies that don’t pay six-figure salaries to developers will find that experienced developers will flat out refuse to work on AI-generated codebases, because they are unmaintainable and lead to burnout and brain rot.

auraithx@lemmy.dbzer0.com on 24 May 13:40 collapse

Been a few months since I used Copilot, but they use a model that’s worse than GPT-4/4o, which is a big step down from the reasoning models.

Try out Cline, aider, or one of the tools devs actually use with the latest models from Anthropic/Google/OpenAI.

aider.chat/docs/leaderboards/

Didn’t look through all the issues, but there were things like:

The agent was blocked by configuration issues from accessing the necessary dependencies to successfully build and test. Those are being fixed and we’ll continue experimenting.

Been out less than a week, let’s see how it’s doing in a year.

glitchdx@lemmy.world on 24 May 00:11 next collapse

If AI was good at coding, my game would be done by now.

Reptorian@programming.dev on 24 May 04:43 next collapse

I’ll admit I have used AI for code before, but here’s the thing: I have already coded for years, and I usually try everything else before last resorts. And I find that approach works well. I rarely needed to go the AI route. I used it for maybe 0.11% of my coding work, and I verified it through stress testing.

TempermentalAnomaly@lemmy.world on 24 May 05:36 next collapse

I am not a programmer and I think it’s silly to think that AI will replace developers.

But I was working through a math problem in Moscow Puzzles with my kiddo.

<img alt="" src="https://lemmy.world/pictrs/image/51d5925a-abc8-43e4-a86d-3b5af4ca6f1c.jpeg">

We had solved it, but I wasn’t sure he got it at a deep level. So I figured I’d do something in Excel or maybe just do cut-outs. But I figured I’d try to find a web app that would do this better. Nothing really came up that was a good match. But then I thought, let’s see how bad AI programming can be. I’d fought with it over some Excel functions, and it’s been mainly useful in pointing me in the right direction, but only occasionally getting me over the finish line.

After about 6 to 8 hours of work, a little debugging, having it teach and quiz me occasionally, and some real frustration pointing out that a feature it had previously changed had re-emerged, I eventually had something that worked.

The Shooting Range Simulator is a web-based application designed to help users solve a logic puzzle involving scoring points by placing blocks on vertical number lines.

A buddy developer friend of mine said: “I took a quick scroll through the code. Looks pretty clean, but I didn’t dive in enough to really understand it. Definitely all that css BS would take me ages to do without AI.”

I don’t take credit for this and don’t pretend that this was my work, but I know my kiddo is excited to try the tool. I hope he learns from it and we bond over a math problem.

I know that everyone is worried about this tool, but moments like those are not nothing. Personally, I’m a Luddite and think the new tools should be deployed by the people whose livelihoods they will affect, not the business owners.

bignose@programming.dev on 24 May 05:52 next collapse

Personally, I’m a Luddite and think the new tools should be deployed by the people whose livelihoods they will affect, not the business owners.

Thank you for correctly describing what a Luddite wants and does not want.

auraithx@lemmy.dbzer0.com on 24 May 13:08 collapse

Yes, despite the irrational phobia amongst the Lemmings, AI is massively useful across a wide range of examples like you’ve just given as it reduces barriers to building something.

As a CS grad, the problem isn’t it replacing all programmers, at least not immediately. It’s that a senior software engineer can manage a bunch of AI agents, meaning there’s less demand for developers overall.

Same way tools like Wix, Facebook, etc came in and killed the need for a bunch of web developers that operated in the range for small businesses.

sekxpistol@feddit.uk on 24 May 20:00 collapse

As a CS grad, the problem isn’t it replacing all programmers, at least not immediately. It’s that a senior software engineer can manage a bunch of AI agents, meaning there’s less demand for developers overall.

Yes! You get it. That right there proves that you’ll make it through just fine. So many in this thread are denying that AI is gonna take jobs. But you gave a great scenario.

francois@sh.itjust.works on 24 May 10:46 next collapse

Microsoft has set up Copilot to make contributions for the dotnet runtime: github.com/dotnet/runtime/pull/115762. I’m sure maintainers spend more time reviewing and interacting with Copilot than it would have taken them to write it themselves.

conditional_soup@lemm.ee on 24 May 13:08 collapse

FTA: The user considered it was the unpaid volunteer coders’ “job” to take his AI submissions seriously. He even filed a code of conduct complaint with the project against the developers. This was not upheld. So he proclaimed the project corrupt. [GitHub; Seylaw, archive]

This is an actual comment that this user left on another project: [GitLab]

As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).