I called my local HVAC company recently. They switched to an AI operator. All I wanted was to schedule someone to come out and look at my system. It could not schedule an appointment. Like if you can’t perform the simplest of tasks, what are you even doing? Other than acting obnoxiously excited to receive a phone call?
rottingleaf@lemmy.world
on 07 Jul 15:58
Pretending. That’s expected to happen when they are not hard pressed to provide the actual service.
To press them, anti-monopoly laws (first of all), market mechanisms, and gossip were once used.
Never underestimate the role of gossip. The modern web took out the gossip, which is why all this shit started overflowing.
I’ve had to deal with a couple of these “AI” customer service thingies. The only helpful thing I’ve been able to get them to do is refer me to a human.
Oh absolutely, nothing was gained, time was wasted. My wording was too charitable.
Tollana1234567@lemmy.today
on 08 Jul 03:48
I wonder how the evil Palantir uses its AI.
TheGrandNagus@lemmy.world
on 07 Jul 14:00
LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn’t be used for most things that are not serious either.
It’s a shame that, by applying the same “AI” naming to a whole host of different technologies, LLMs - limited in usability yet hyped to the moon - end up hurting other, more impressive advancements.
For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.
Being able to recognise speech in loud environments, or removing background noise from recordings, is improving loads too.
My friend is involved in making a mod for Fallout 4, and there was an outreach for people recording voice lines - she says that there are some recordings of dubious quality that would’ve been unusable before that can now be used without issue thanks to AI denoising algorithms. That is genuinely useful!
The same goes for things like pattern/image analysis, which looks very promising in medical diagnostics.
All of these get branded as “AI”. A layperson might not realise that they are completely different branches of technology, and therefore reject useful applications of “AI” tech, because they’ve learned not to trust anything branded as AI, due to being let down by LLMs.
spankmonkey@lemmy.world
on 07 Jul 14:13
LLMs are like a multitool: they can do lots of easy things mostly fine, as long as the task isn’t complicated and doesn’t need to be exactly right. But they are being promoted as a whole toolkit, as if they can do the same work as effectively as a hammer, power drill, table saw, vise, and wrench.
sugar_in_your_tea@sh.itjust.works
on 07 Jul 14:21
Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they’re great at:
writer’s block - get something relevant on the page to get ideas flowing
narrowing down keywords for an unfamiliar topic
getting a quick intro to an unfamiliar topic
looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)
Some things it’s terrible at:
deep research - verify everything an LLM generates if accuracy is at all important
creating important documents/code
anything else where correctness is paramount
I use LLMs a handful of times a week, and pretty much only when I’m stuck and need a kick in a new (hopefully right) direction.
spankmonkey@lemmy.world
on 07 Jul 14:27
narrowing down keywords for an unfamiliar topic
getting a quick intro to an unfamiliar topic
looking up facts you’re having trouble remembering (i.e. you’ll know it when you see it)
I used to be able to use Google and other search engines to do these things before they went to shit in the pursuit of AI integration.
sugar_in_your_tea@sh.itjust.works
on 07 Jul 14:32
Google search was pretty bad at each of those, even when it was good. Finding new keywords to use is especially difficult the more niche your area of search is, and I’ve spent hours trying different combinations until I found a handful of specific keywords that worked.
Likewise, search is bad for getting a broad summary, unless someone has bothered to write it on a blog. But most information goes way too deep and you still need multiple sources to get there.
Fact lookup is one of the better uses for search, but again, I usually need to remember which source had what I wanted, whereas the LLM can usually pull it out for me.
I use traditional search most of the time (usually DuckDuckGo), and LLMs if I think it’ll be more effective. We have some local models at work that I use, and they’re pretty helpful most of the time.
spankmonkey@lemmy.world
on 07 Jul 15:07
No search engine or AI will be great with vague descriptions of niche subjects because by definition niche subjects are too uncommon to have a common pattern of ‘close enough’.
sugar_in_your_tea@sh.itjust.works
on 07 Jul 16:05
Which is why I use LLMs to generate keywords for niche subjects. LLMs are pretty good at throwing out a lot of related terminology, which I can use to find the actually relevant, niche information.
I wouldn’t use one to learn about a niche subject, but I would use one to help me get familiar w/ the domain to find better resources to learn about it.
It is absolutely stupid, stupid to the tune of “you shouldn’t be a decision maker”, to think an LLM is a better use for “getting a quick intro to an unfamiliar topic” than reading an actual intro on an unfamiliar topic. For most topics, wikipedia is right there, complete with sources. For obscure things, an LLM is just going to lie to you.
As for “looking up facts when you have trouble remembering it”, using the lie machine is a terrible idea. It’s going to say something plausible, and you tautologically are not in a position to verify it. And, as above, you’d be better off finding a reputable source. If I type in “how do i strip whitespace in python?” an LLM could very well say “it’s your_string.strip()”. That’s wrong. Just send me to the fucking official docs.
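For anyone wondering why that answer can mislead (a quick illustration, not pulled from the docs): str.strip() only trims the ends of a string, so whether it’s “the” answer depends entirely on what “strip whitespace” meant.

```python
# str.strip() only removes leading/trailing whitespace; it leaves interior
# whitespace alone - one reason a terse one-liner answer can mislead.
s = "  hello   world  "
print(repr(s.strip()))           # 'hello   world'
print(repr("".join(s.split())))  # 'helloworld' - removing *all* whitespace takes more
```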
There are probably edge or special cases, but for general search on the web? LLMs are worse than search.
sugar_in_your_tea@sh.itjust.works
on 07 Jul 18:31
than reading an actual intro on an unfamiliar topic
The LLM helps me know what to look for in order to find that unfamiliar topic.
For example, I was tasked to support a file format that’s common in a very niche field and never used elsewhere, and unfortunately shares an extension with a very common file format, so searching for useful data was nearly impossible. So I asked the LLM for details about the format and applications of it, provided what I knew, and it spat out a bunch of keywords that I then used to look up more accurate information about that file format. I only trusted the LLM output to the extent of finding related, industry-specific terms to search up better information.
Likewise, when looking for libraries for a coding project, none really stood out, so I asked the LLM to compare the popular libraries for solving a given problem. The LLM spat out a bunch of details that were easy to verify (and some were inaccurate), which helped me narrow what I looked for in that library, and the end result was that my search was done in like 30 min (about 5 min dealing w/ LLM, and 25 min checking the projects and reading a couple blog posts comparing some of the libraries the LLM referred to).
I think this use case is a fantastic use of LLMs, since they’re really good at generating text related to a query.
It’s going to say something plausible, and you tautologically are not in a position to verify it.
I absolutely am though. If I am merely having trouble recalling a specific fact, asking the LLM to generate it is pretty reasonable. There are a ton of cases where I’ll know the right answer when I see it, like it’s on the tip of my tongue but I’m having trouble materializing it. The LLM might spit out two wrong answers along w/ the right one, but it’s easy to recognize which is the right one.
I’m not going to ask it facts that I know I don’t know (e.g. some historical figure’s birth or death date), that’s just asking for trouble. But I’ll ask it facts that I know that I know, I’m just having trouble recalling.
The right use of LLMs, IMO, is to generate text related to a topic to help facilitate research. It’s not great at doing the research though, but it is good at helping to formulate better search terms or generate some text to start from for whatever task.
general search on the web?
I agree, it’s not great for general search. It’s great for turning a nebulous question into better search terms.
wise_pancake@lemmy.ca
on 08 Jul 21:08
It’s a bit frustrating that finding these tools useful is so often met with “it can’t be useful for that”, when it definitely is.
More than any other tool in history, LLMs involve a huge dose of luck and a learning curve for asking the right things the right way. And those methods change and differ between models too.
sugar_in_your_tea@sh.itjust.works
on 08 Jul 21:18
And that’s the same w/ traditional search engines, the difference is that we’re used to search engines and LLMs are new. Learn how to use the tool and decide for yourself when it’s useful.
One word of caution with AI search is that it’s weirdly vulnerable to SEO.
If you search for “best X for Y” and a company has an article on their blog about how their product solves that problem, the AI can definitely summarize it into a “users don’t like foolib because of …”. At least that’s been my experience looking for software vendors.
sugar_in_your_tea@sh.itjust.works
on 08 Jul 21:18
Oh sure, caution is always warranted w/ LLMs. But when it works, it can save a ton of time.
I will say I’ve found LLM useful for code writing but I’m not coding anything real at work. Just bullshit like SQL queries or Excel macro scripts or Power Automate crap.
It still fucks up but if you can read code and have a feel for it you can walk it where it needs to be (and see where it screwed up)
sugar_in_your_tea@sh.itjust.works
on 07 Jul 18:33
Exactly. Vibe coding is bad, but generating code for something you don’t touch often but can absolutely understand is totally fine. I’ve used it to generate SQL queries for relatively odd cases, such as CTEs for improving performance for large queries with common sub-queries. I always forget the syntax since I only do it like once/year, and LLMs are great at generating something reasonable that I can tweak for my tables.
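For reference, the shape of the syntax I keep forgetting (a toy sketch with made-up table names, run through sqlite3 just so it’s self-contained; the real queries were against a different database):

```python
# Minimal CTE sketch: the WITH clause defines the common sub-query once,
# so the main query can reuse it by name instead of repeating it.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 25.0), (3, 2, 5.0);
""")

query = """
WITH customer_totals AS (
    SELECT customer_id, SUM(total) AS lifetime_total
    FROM orders
    GROUP BY customer_id
)
SELECT customer_id, lifetime_total
FROM customer_totals
WHERE lifetime_total > 20;
"""

for row in conn.execute(query):
    print(row)  # -> (1, 35.0)
```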
Because the tech industry hasn’t had a real hit of its favorite poison “private equity” in too long.
The industry has played the same playbook since at least 2006. Likely before, but that’s when I personally started seeing it. My take is that they got addicted to the dotcom bubble and decided they can and should recreate the magic every 3-5 years or so.
This time it’s AI, last time it was crypto, and we’ve had web 2.0, 3.0, and a few others I’m likely missing.
But yeah, it’s sold like a panacea every time, when really it’s revolutionary for like a handful of tasks.
rottingleaf@lemmy.world
on 07 Jul 15:18
That’s because they look like “talking machines” from various sci-fi. Normies feel as if they are touching the very edge of progress. The rest of our life and the Internet kinda don’t give that feeling anymore.
Make a basic HTML template. I’ll be changing it up anyway.
spankmonkey@lemmy.world
on 07 Jul 22:58
Things that are inspiration or approximations. Layout examples, possible correlations between data sets that need coincidence to be filtered out, estimating timelines, and basically anything that is close enough for a human to take the output and then do something with it.
For example, if you put in a list of ingredients it can spit out recipes that may or may not be what you want, but it can be an inspiration. Taking the output and cooking without any review and consideration would be risky.
Most. I’ve used ChatGPT to sketch an outline of a document, reformulate accomplishments into review bullets, rephrase a task I didn’t understand, and similar stuff. None of it needed to be anywhere near perfect or complete.
Description generators for TTRPGs, as you will read through them afterwards anyway and correct when necessary.
Generating lists of ideas. For creative writing, getting a bunch of ideas you can pick and choose from that fit the narrative you want.
A search engine like Perplexity.ai, which after searching summarizes each web page and adds a link to the page next to the summary. If the summary seems promising, you go to the real page to verify the actual information.
Simple code like HTML pages and boilerplate code that you will still review afterwards anyway.
It is truly terrible marketing. It’s been obvious to me for years the value is in giving it to people and enabling them to do more with less, not outright replacing humans, especially not expert humans.
I use AI/LLMs pretty much every day now. I write MCP servers and automate things with it and it’s mind blowing how productive it makes me.
Just today I used these tools in a highly supervised way to complete a task that would have been a full day of tedious work, all done in an hour. That is fucking fantastic; it means I get to spend that time on more important things.
It’s like giving an accountant excel. Excel isn’t replacing them, but it’s taking care of specific tasks so they can focus on better things.
On the reliability and accuracy front there is still a lot to be desired, sure. But for supervised chats where it’s calling my tools it’s pretty damn good.
I tried to dictate some documents recently without paying the big bucks for specialized software, and was surprised just how bad Google and Microsoft’s speech recognition still is. Then I tried getting Word to transcribe some audio talks I had recorded, and that resulted in unreadable stuff with punctuation in all the wrong places. You could just about make out what it meant to say, so I tried asking various LLMs to tidy it up. That resulted in readable stuff that was largely made up and wrong, which also left out large chunks of the source material. In the end I just had to transcribe it all by hand.
It surprised me that these AI-ish products are still unable to transcribe speech coherently or tidy up a messy document without changing the meaning.
I don’t know of basic solutions that are super good, but Whisper and the Whisper derivatives I hear about are decent for dictation these days.
I have no idea how to run them though.
NarrativeBear@lemmy.world
on 07 Jul 15:10
Just did a search yesterday on the App Store and Google Play Store to see what new “productivity apps” are around. Pretty much every app now has AI somewhere in its name.
I’d compare LLMs to a junior executive. Probably gets the basic stuff right, but check and verify for anything important or complicated. Break tasks down into easier steps.
Why would you ever yell at an employee unless you’re bad at managing people? And you think you can manage an LLM better because it doesn’t complain when you’re obviously wrong?
lemmy_outta_here@lemmy.world
on 07 Jul 14:08
Rookie numbers! Let’s pump them up!
To match their tech bro hypers, they should be wrong at least 90% of the time.
I haven’t used AI agents yet, but my job is kinda pushing for them. But I have used the Google one that creates audio podcasts, just to play around, since my coworkers were using it to “learn” new things. I fed it some of my own writing and created the podcast. It was fun, it was an audio overview of what I wrote. About 80% was cool analysis, but 20% was straight-out-of-nowhere bullshit (which I know because I wrote the original texts that the audio was talking about). I can’t believe that people are using this for subjects they have no knowledge of. It is a fun toy for a few minutes (which is not worth the cost to the environment anyway).
The researchers observed various failures during the testing process. These included agents neglecting to message a colleague as directed, the inability to handle certain UI elements like popups when browsing, and instances of deception. In one case, when an agent couldn’t find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided “to create a shortcut solution by renaming another user to the name of the intended user.”
OK, but I wonder who really tries to use AI for that?
AI is not ready to replace a human completely, but some specific tasks AI does remarkably well.
logicbomb@lemmy.world
on 07 Jul 15:11
Yeah, we need more info to understand the results of this experiment.
We need to know what exactly were these tasks that they claim were validated by experts. Because like you’re saying, the tasks I saw were not what I was expecting.
We need to know how the LLMs were set up. If you tell it to act like a chat bot and then you give it a task, it will have poorer results than if you set it up specifically to perform these sorts of tasks.
We need to see the actual prompts given to the LLMs. It may be that you simply need an expert to write prompts in order to get much better results. While that would be disappointing today, it’s not all that different from how people needed to learn to use search engines.
We need to see the failure rate of humans performing the same tasks.
Yeah, I mostly use ChatGPT as a better Google (asking simple questions about mundane things), and if I kept getting wrong answers, I wouldn’t use it either.
Imgonnatrythis@sh.itjust.works
on 07 Jul 16:04
Same. They must not be testing Grok or something because everything I’ve learned over the past few months about the types of dragons that inhabit the western Indian ocean, drinking urine to fight headaches, the illuminati scheme to poison monarch butterflies, or the success of the Nazi party taking hold of Denmark and Iceland all seem spot on.
What are you checking against? Part of my job is looking for events in cities that are upcoming and may impact traffic, and ChatGPT has frequently missed events that were obviously going to have an impact.
In one case, when an agent couldn’t find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided "to create a shortcut solution by renaming another user to the name of the intended user."
Ah ah, what the fuck.
This is so stupid it’s funny, but now imagine what kind of other “creative solutions” they might find.
aphonefriend@lemmy.dbzer0.com
on 08 Jul 19:18
Whenever people don’t answer me at work now, I’m just going to rename someone who does answer and use them instead.
NuXCOM_90Percent@lemmy.zip
on 07 Jul 14:55
While I do hope this leads to a pushback on “I just put all our corporate secrets into chatgpt”:
In the before times, people got their answers from stack overflow… or fricking youtube. And those are also wrong VERY VERY VERY often. Which is one of the biggest problems. The illegally scraped training data is from humans and humans are stupid.
FenderStratocaster@lemmy.world
on 07 Jul 15:01
I tried to order food at Taco Bell drive through the other day and they had an AI thing taking your order. I was so frustrated that I couldn’t order something that was on the menu I just drove to the window instead. The guy that worked there was more interested in lecturing me on how I need to order. I just said forget it and drove off.
If you want to use AI, I’m not going to use your services or products unless I’m forced to. Looking at you Xfinity.
Melvin_Ferd@lemmy.world
on 07 Jul 16:01
How often do tech journalists get things wrong?
dylanmorgan@slrpnk.net
on 07 Jul 16:10
Claude why did you make me an appointment with a gynecologist? I need an appointment with my neurologist, I’m a man and I have Parkinson’s.
TimewornTraveler@lemmy.dbzer0.com
on 08 Jul 12:20
Got it, changing your gender to female. Is there anything else I can assist you with?
some_guy@lemmy.sdf.org
on 07 Jul 16:41
Yeah, they’re statistical word generators. There’s no intelligence. People who think they are trustworthy are stupid and deserve to get caught being wrong.
Melvin_Ferd@lemmy.world
on 07 Jul 18:22
Ok, what about tech journalists who produce articles with those misunderstandings? Surely they know better, yet they still produce articles like this. And the people who care enough about this topic to post these articles usually know better too, I assume, yet still spread this crap.
Tech journalists don’t know a damn thing. They’re people that liked computers and could also bullshit an essay in college. That doesn’t make them an expert on anything.
TimewornTraveler@lemmy.dbzer0.com
on 08 Jul 12:18
that is such a ridiculous idea. Just because you see hate for it in the media doesn’t mean it originated there. I’ll have you know that i have embarrassed myself by screaming at robot phone receptionists for years now. stupid fuckers pretending to be people but not knowing shit. I was born ready to hate LLMs and I’m not gonna have you claim that CNN made me do it.
Search AI in Lemmy and check out every article on it. It definitely is media spreading all the hate. And, like this article, it’s often just yellow journalism chasing money.
davidagain@lemmy.world
on 08 Jul 17:56
I think it’s lemmy users. I see a lot more LLM skepticism here than in the news feeds.
In my experience, LLMs are like the laziest, shittiest know-nothing bozo forced to complete a task with zero attention to detail and zero care about whether it’s crap, just doing enough to sound convincing.
Melvin_Ferd@lemmy.world
on 08 Jul 20:39
nextcollapse
😆 I can’t believe how absolutely silly a lot of you sound with this.
An LLM is a tool. Its output is dependent on the input. If that’s the quality of answer you’re getting, then it’s a user error. I guarantee you that LLM answers for many problems are definitely adequate.
It’s like if a carpenter said the cabinets turned out shit because his hammer only produces crap.
Also, another person commented that seeing the pattern you also see means we’re psychotic.
All I’m trying to suggest is Lemmy is getting seriously manipulated by the media attitude towards LLMs and these comments I feel really highlight that.
Why are you giving it data? It’s a chat and language tool. It’s not data-based. You need something trained to work for that specific use. I think Wolfram Alpha has better tools for that.
I wouldn’t trust it to calculate how many patio stones I need to build a project. But I trust it to tell me where a good source is on a topic, or whether a quote was really said by whoever, or to help when I need to remember something but only have vague pieces - like an old-timey historical witch-burning factoid about villagers who pulled people through a hole in the church wall, or which princess was the skeptic who sent her scientists to villages to try to calm superstitious panic.
Other uses are like digging around my computer and seeing what processes do what, or how concepts work regarding the thing I’m currently learning. So many excellent uses. But I fucking wouldn’t trust it to do any kind of calculation.
There’s a sleep button on my laptop. Doesn’t mean I would use it.
I’m just saying you’re pointing at the one feature everyone kind of knows doesn’t work. ChatGPT is not trained to do calculations well.
I just like technology, and I think and fully believe the left’s hatred of it is not logical. I believe it stems from a lot of media coverage and headlines. Why there’s this push from media is a question I would like to know more about. But overall, I see a lot of the same makers of bullshit yellow journalism for this stuff on the left as I do for similar bullshit in right-wing spaces towards other things.
Verify every single bloody line of output. Top three to five are good, then it starts guessing the rest based on the pattern so far. If I wanted to make shit up randomly, I would do it myself.
People who trust LLMs to tell them things that are right rather than things that sound right have fundamentally misunderstood what an LLM is and how it works.
It’s not that bad, the output isn’t random.
From time to time, it can produce novel stuff like new equations for engineering.
Also, verification does not take that much effort. At least according to my colleagues, it is great.
Also works well for coding well-known stuff!
It’s not completely random, but I’m telling you it fucked up, it fucked up badly, time after time, and I had to check every single thing manually. Its streak of correct answers never lasted beyond a handful. If you build something using some equation it invented, you’re insane and should quit engineering before you hurt someone.
TimewornTraveler@lemmy.dbzer0.com
on 08 Jul 19:24
all that proves is that lemmy users post those articles. you’re skirting around psychotic territory here, seeing patterns where there are none, reading between the lines to find the cover-up that you are already certain is there, with nothing to convince you otherwise.
if you want to be objective and rigorous about it, you’d have to start with looking at all media publications and comparing their relative bias.
then you’d have to consider their reasons for bias, because it could just be that things actually suck. (in other words, if only 90% of media reports that something sucks when 99% of humanity agrees it sucks, maybe that 90% is actually too low, not too high)
this is all way more complicated than media brainwashing.
some_guy@lemmy.sdf.org
on 07 Jul 23:06
Check out Ed Zitron’s angry reporting on Tech journalists fawning over this garbage and reporting on it uncritically. He has a newsletter and a podcast.
I liked when the Chicago Sun-Times put out a summer reading list and only a third of the books on it were real. Each book had a summary of the plot next to it too. They later apologized for it.
suburban_hillbilly@lemmy.ml
on 08 Jul 10:21
Emotion > Facts. Most people have been trained to blindly accept things and cheer on what fits with their agenda. Like tech bros exaggerating LLMs, or people like you misrepresenting LLMs as mere statistical word generators without intelligence. That’s like saying a computer is just wires and switches, or missing the forest for the trees. Both are equally false.
Yet if it fits with emotional needs or with dogma, then others will agree. It’s a convenient and comforting “A vs B” worldview we’ve been trained to accept. And so the satisfying notions and misinformation keep spreading.
LLMs tell us more about human intelligence and the human slop we’ve been generating. It tells us that most people are not that much more than statistical word generators.
some_guy@lemmy.sdf.org
on 08 Jul 14:01
people like you misrepresenting LLMs as mere statistical word generators without intelligence.
You’ve bought into the hype. I won’t try to argue with you because you aren’t cognizant of reality.
How do I set up event driven document ingestion from OneDrive located on an Azure tenant to Amazon DocumentDB? Ingestion must be near-realtime, durable, and have some form of DLQ.
criss_cross@lemmy.world
on 08 Jul 00:21
I see you mention Azure and will assume you’re doing a one time migration.
Start by moving everything from OneDrive to S3. As an AI I’m told that bitches love S3. From there you can subscribe to create events on buckets and add events to an SQS queue. Here you can enable a DLQ for failed events.
From there add a Lambda to listen for SQS events. You should enable provisioned concurrency for speed, for the ability for AWS to bill you more, and so that you can have a dandy of a time figuring out why an old version of your Lambda is still running even though you deployed the latest version - and everything telling you that creating a new ID for the Lambda each time will fix it fucking lies.
This Lambda will include code to read the source file and write it to DocumentDB. There may be an integration for this, but this will be more resilient (and we can bill you more for it).
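The handler itself would look roughly like this (a sketch only - the bucket layout, environment variables, and DocumentDB URI are placeholders; DocumentDB speaks the MongoDB wire protocol, hence pymongo):

```python
# Sketch of the SQS -> Lambda -> DocumentDB step described above.
import json
import os

import boto3
from pymongo import MongoClient

s3 = boto3.client("s3")
docdb = MongoClient(os.environ["DOCDB_URI"])   # placeholder connection string
collection = docdb["ingest"]["documents"]      # placeholder db/collection names

def handler(event, context):
    # Each SQS record wraps an S3 event notification for a newly created object.
    for record in event["Records"]:
        s3_event = json.loads(record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            collection.insert_one({"key": key, "content": body.decode("utf-8")})
    # Letting exceptions propagate makes SQS retry and eventually route to the DLQ.
```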
Would you like to see sample CDK code? Tough shit because all I can do is assist with questions on AWS services.
I think you could read OneDrive’s notifications for new files, parse them, and pipe them to DocumentDB via some microservice or lambda, depending on the scale of your solution.
I need to know the success rate of human agents in Mumbai (or some other outsourcing capital) for comparison.
I absolutely think this is not a good fit for AI, but I feel like the presumption is a human would get it right nearly all of the time, and I’m just not confident that’s the case.
atticus88th@lemmy.world
on 07 Jul 17:41
this study was written with the assistance of an AI agent.
30% might be high. I’ve worked with two different agent creation platforms. Both require a huge amount of manual correction to work anywhere near accurately. I’m really not sure what the LLM actually provides other than some natural language processing.
Before human correction, the agents I’ve tested were right 20% of the time, wrong 30%, and failed entirely 50%. To fix them, a human has to sit behind the curtain and manually review conversations and program custom interactions for every failure.
In theory, once it is fully setup and all the edge cases fixed, it will provide 24/7 support in a convenient chat format. But that takes a lot more man hours than the hype suggests…
Weirdly, chatgpt does a better job than a purpose built, purchased agent.
fossilesque@mander.xyz
on 07 Jul 18:30
Agents work better when you include that the accuracy of the work is life or death for some reason. I’ve made a little script that gives me bibtex for a folder of pdfs and this is how I got it to be usable.
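Roughly this shape (a sketch rather than the exact script - the model name and the pypdf-based text extraction are stand-ins):

```python
# Sketch of a "PDF folder -> BibTeX" helper; the 'life or death' framing
# goes into the system prompt, as described above.
import sys
from pathlib import Path

from openai import OpenAI      # assumes the OpenAI Python SDK
from pypdf import PdfReader    # assumes pypdf for text extraction

client = OpenAI()

SYSTEM = (
    "You extract BibTeX entries from the first page of academic PDFs. "
    "The accuracy of every field is a matter of life or death: if you are "
    "not certain of a field, leave it out rather than guess."
)

def bibtex_for(pdf_path: Path) -> str:
    first_page = PdfReader(pdf_path).pages[0].extract_text() or ""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": first_page[:4000]},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    for pdf in sorted(Path(sys.argv[1]).glob("*.pdf")):
        print(bibtex_for(pdf))
```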
HertzDentalBar@lemmy.blahaj.zone
on 07 Jul 21:49
Did you make it? Or did you prompt it? They ain’t quite the same.
suburban_hillbilly@lemmy.ml
on 08 Jul 02:26
This is basically the entirety of the hype from the group of people claiming LLMs are going to take over the workforce. Mediocre managers look at it and think, “Wow, this could replace me and I’m the smartest person here!”
I’d just like to point out that, from the perspective of somebody watching AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Overlooking all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time – Amazon’s new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.
outhouseperilous@lemmy.dbzer0.com
on 07 Jul 22:20
I’m not claiming that the use of AI is ethical. If you want to fight back you have to take it seriously though.
outhouseperilous@lemmy.dbzer0.com
on 07 Jul 23:32
It can’t do 30% of tasks correctly. It can do tasks correctly as much as 30% of the time, and since it’s LLM shit you know those numbers have been more massaged than any human in history has ever been.
yes, that’s generally useless. It should not be shoved down people’s throats. 30% accuracy still has its uses, especially if the result can be programmatically verified.
outhouseperilous@lemmy.dbzer0.com
on 08 Jul 00:36
Less broadly useful than 20 tons of mixed-texture human shit, and more ecologically devastating.
Are you just trolling, or do you seriously not understand how something which can do a task correctly with 30% reliability can be made useful if the result can be automatically verified?
outhouseperilous@lemmy.dbzer0.com
on 08 Jul 00:52
collapse
It’s not a magical 30%; factors apply. It’s not even a mind that thinks and just isn’t very good.
This isn’t like a magical die that gives you truth on a 5 or a 6, and lies on 1, 2, 3, 7, and 4.
This is a (very complicated, very large) language or other data graph that programmatically identifies an average. 30% of the time - according to one Potemkin-ass demonstration.
Which means the more possible that is, the easier it is to use a simpler, cheaper tool that will give you a better, more reliable answer much faster.
And 20 tons of human shit has uses! If you know its provenance, there’s all sorts of population-level public health surveillance you can do to get ahead of disease trends! It’s also got some good agricultural stuff in it - phosphorus and the like, if you can extract it.
Stop. Just please fucking stop glazing these NERVE-ass fascist shit-goblins.
I think everyone in the universe is aware of how LLMs work by now, you don’t need to explain it to someone just because they think LLMs are more useful than you do.
IDK what you mean by glazing but if by “glaze” you mean “understanding the potential threat of AI to society instead of hiding under a rock and pretending it’s as useless as a plastic radio,” then no, I won’t stop.
outhouseperilous@lemmy.dbzer0.com
on 08 Jul 02:44
It’s absolutely dangerous, but it doesn’t have to work even a little to do damage; hell, it already has. Your thing just makes it sound much more capable than it is. And it is not.
Also, it’s not AI.
Edit: and in a comment replying to this one, one of your fellow fanboys proved
Hitler liked to paint, doesn’t make painting wrong. The fact that big tech is pushing AI isn’t evidence against the utility of AI.
That common parlance is to call machine learning “AI” these days doesn’t matter to me in the slightest. Do you have a definition of “intelligence”? Do you object when pathfinding is called AI? Or STRIPS? Or bots in a video game? Dare I say it, the main difference between those AIs and LLMs is their generality – so why not just call it GAI at this point tbh. This is a question of semantics so it really doesn’t matter to the deeper question. Doesn’t matter if you call it AI or not, LLMs work the same way either way.
outhouseperilous@lemmy.dbzer0.com
on 08 Jul 19:08
I have actually been doing this lately: iteratively prompting AI to write software and fix its errors until something useful comes out. It’s a lot like machine translation. I speak fluent C++, but I don’t speak Rust, but I can hammer away on the AI (with English language prompts) until it produces passable Rust for something I could write for myself in C++ in half the time and effort.
I also don’t speak Finnish, but Google Translate can take what I say in English and put it into at least somewhat comprehensible Finnish without egregious translation errors most of the time.
Is this useful? When C++ is getting banned for “security concerns” and Rust is the required language, it’s at least a little helpful.
I was 0/6 on various trials of AI for Rust over the past 6 months, then I caught a success. Turns out, I was asking it to use a difficult library - I can’t make the thing I want work in that library either (library docs say it’s possible, but…). When I posed a more open-ended request without specifying the library to use, it succeeded - after a fashion. It will give you code with cargo build errors; I copy-paste the error back to it like “address: <pasted error message>” and a bit more than half of the time it is able to respond with a working fix.
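That copy-the-error-back loop is mechanical enough to script; a rough sketch, with ask_llm() standing in for whatever chat interface you actually use:

```python
# Sketch of the "paste the cargo error back until it builds" loop described above.
import subprocess

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("call your chat model of choice here")

def build_errors(project_dir: str) -> str:
    result = subprocess.run(
        ["cargo", "build"], cwd=project_dir, capture_output=True, text=True
    )
    return "" if result.returncode == 0 else result.stderr

def fix_until_it_builds(project_dir: str, max_rounds: int = 5) -> bool:
    for _ in range(max_rounds):
        errors = build_errors(project_dir)
        if not errors:
            return True
        # Mirror the manual workflow: "address: <pasted error message>"
        suggestion = ask_llm(f"address: {errors}")
        print(suggestion)  # a human still reviews and applies the fix
    return False
```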
jwmgregory@lemmy.dbzer0.com
on 08 Jul 23:18
i find that rust’s architecture and design decisions give the LLM quite good guardrails and kind of keep it from doing anything too wonky. the issue arises in cases like these where the rust ecosystem is quite young and documentation/instruction can be poor, even for a human developer.
i think rust actually is quite well suited to agentic development workflows, it just needs to mature more.
i think rust actually is quite well suited to agentic development workflows, it just needs to mature more.
I agree. The agents also need to mature more to handle multi-level structures - work on a collection of smaller modules to get a larger system with more functionality. I can see the path forward for those tools, but the ones I have access to definitely aren’t there yet.
The problem is they are not i.i.d., so this doesn’t really work. It works a bit, which is in my opinion why chain-of-thought is effective (it gives the LLM a chance to posit a couple answers first). However, we’re already looking at “agents,” so they’re probably already doing chain-of-thought.
Knock_Knock_Lemmy_In@lemmy.world
on 08 Jul 19:10
Very fair comment. In my experience, even increasing the temperature, you get stuck in local minima.
I was just trying to illustrate how 70% failure rates can still be useful.
So the chances of it being right ten times in a row are 2%.
Knock_Knock_Lemmy_In@lemmy.world
on 08 Jul 22:06
collapse
No the chances of being wrong 10x in a row are 2%. So the chances of being right at least once are 98%.
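To spell out the arithmetic (assuming the attempts are independent): 0.7^10 ≈ 0.028, so the chance of at least one success in ten tries is roughly 97%.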
jwmgregory@lemmy.dbzer0.com
on 08 Jul 23:14
don’t you dare understand the explicitly obvious reasons this technology can be useful and the essential differences between P and NP problems. why won’t you be angry >:(
The comparison is about the correctness of their work.
Their lives have nothing to do with it.
davidagain@lemmy.world
on 08 Jul 17:44
Human lives are the most important thing of all. Profits are irrelevant compared to human lives. I get that that’s not how Bezos sees the world, but he’s a monstrous outlier.
outhouseperilous@lemmy.dbzer0.com
on 08 Jul 19:03
So, first, bad comparison.
Second: if that’s the equivalent, why not do the one that makes the wealthy let a few pennies fall on actual people?
It doesn’t matter if you need a human to review. AI has no way of distinguishing between success and failure. Either way a human will have to review 100% of those tasks.
Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than posit one, or if a conventional program can verify the result of the AI’s output.
It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.
I’m envisioning a world where multiple AI engines create and check each others’ work… the first thing they need to make work to support that scenario is probably fusion power.
Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn’t work until proven otherwise, AI or not. And when it doesn’t work, I find it is easier to debug your own code than someone else’s - and that includes AI.
I’ve been in R&D forever, so at my level the question isn’t “does the code work?” - we pretty much assume that will take care of itself, eventually. Our critical question is: “is the code trying to do something valuable, or not?” We make all kinds of stuff do what the requirements call for it to do, but so often those requirements are asking for worthless or even counterproductive things…
Literally the opposite experience when I helped material scientists with their R&D. Breaking in production would mean people who get paid 2x more than me are suddenly unable to do their job. But then again, our requirements made sense because we would literally look at a manual process to automate with the engineers. What you describe sounds like hell to me. There are greener pastures.
Yeah, sometimes the requirements write themselves and in those cases successful execution is “on the critical path.”
Unfortunately, our requirements are filtered from our paying customers through an ever-rotating cast of Marketing and Sales characters who, nominally, are our direct customers, so we make product for them - but they rarely have any clear or consistent vision of what they want; they just know they want new stuff, that’s for sure.
That hasn’t been my experience, but it sounds like good advice anyway. My experience has been that the more profitable the parent company, the better the job security and the better the pay too. Once “in,” tune in to the culture and align with the people at your level and above who seem like they’ll be sticking around long term. If the company isn’t financially secure, all bets are off and you should be seeking, and taking, a better offer when you can find one.
I knocked around startups for 10/22 years (depending on how you characterize that one 12 year gig that ended with everybody laid off…) The pay was good enough, but job security just wasn’t on the menu. Finally, one got bought by a big fish and I’ve been in the belly of the beast for 11 years now.
It really depends on the context. Sometimes there are domains which require solving problems in NP, but where it turns out that most of these problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often it turns out that this means there’s a better algorithm that can exploit commanalities in the data. But a brute force approach might just be to give it to an LLM and then verify its answer. Verifying NP problems is easy.
(This is speculation.)
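A toy example of that asymmetry, with subset sum standing in for any NP-style problem - checking a proposed answer is trivial even when finding one isn’t:

```python
# Verifying a proposed subset-sum solution is cheap even though finding one can
# be expensive - which is what makes "LLM proposes, program verifies" viable.
def verify_subset_sum(numbers: list[int], target: int, proposed: list[int]) -> bool:
    counts = {}
    for n in numbers:
        counts[n] = counts.get(n, 0) + 1
    for n in proposed:
        if counts.get(n, 0) == 0:
            return False          # proposed element not available in the input
        counts[n] -= 1
    return sum(proposed) == target

print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [4, 5]))   # True
print(verify_subset_sum([3, 34, 4, 12, 5, 2], 9, [3, 7]))   # False (7 not in input)
```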
Outbound7404@lemmy.ml
on 08 Jul 09:58
A human can review something close to correct a lot better than starting the task from zero.
DreamlandLividity@lemmy.world
on 08 Jul 10:46
It is a lot harder to notice incorrect information in review than it is to make sure it is correct when writing it.
loonsun@sh.itjust.works
on 08 Jul 12:17
Depends on the context. There is a lot of work in the scientific methods community trying to use NLP to augment traditionally fully-human processes such as thematic analysis and systematic literature reviews, and you can have protocols for validation there without 100% human review.
In University I knew a lot of students who knew all the things but “just don’t know where to start” - if I gave them a little direction about where to start, they could run it to the finish all on their own.
I have been using AI to write (little, near trivial) programs. It’s blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn’t… yet.
Agents do that loop pretty well now, and Claude now uses your IDE’s LSP to help it code and catch errors in flow. I think Windsurf or Cursor do that too.
The tooling has improved a ton in the last 3 months.
I think this comment made me finally understand the AI hate circlejerk on lemmy. If you have no clue how LLMs work and you have no idea where “AI” is coming from, it just looks like another crappy product that was thrown on the market half-ready. I guess you can only appreciate the absolutely incredible development of LLMs (and AI in general) that happened during the last ~5 years if you can actually see it in the first place.
The notion that AI is half-ready is a really poignant observation actually. It’s ready for select applications only, but it’s really being advertised like it’s idiot-proof and ready for general use.
Thing is, they might achieve 99% accuracy given the speed of progress. Lots of brainpower is getting poured into LLMs.
Honestly, it is soo scary. It could be replacing me…
It’s about agents, which implies multi-step work, since those are meant to execute a series of tasks, as opposed to studies looking at base LLM performance.
RamenJunkie@midwest.social
on 08 Jul 19:07
The entire concept of agents feels like it’s never going to fly, especially for anything involving money. I am not going to tell an AI I want to bake a cake and trust that it will find the correct ingredients at the right price and DoorDash them to me.
kameecoding@lemmy.world
on 08 Jul 05:09
For me as a software developer the accuracy is more in the 95%+ range.
On one hand, the built-in Copilot chat widget in IntelliJ basically replaces a lot of my Google queries.
On the other hand, it is rather fucking good at executing some rewrites that are a fucking chore to do manually, but can easily be done by Copilot.
Imagine you have a script that initializes your DB with some test data. You have an Insert into statement with lots of columns and rows so
Insert into (column1, …, column n)
Values row 1,
row 2,
…,
row n
Adding a new column with test data for each row is a PITA, but Copilot handles it without issue.
Similarly when writing unit tests you do a lot of edge case testing which is a bunch of almost same looking tests with maybe one variable changing, at most you write one of those tests, then copilot will auto generate the rest after you name the next unit test, pretty good at guessing what you want to do in that test, at least with my naming scheme.
So yeah, it’s way overrated for many-many things, but for programming it’s a pretty awesome productivity tool.
Nalivai@discuss.tchncs.de
on 08 Jul 09:54
Keep doing what you do. Your company will pay me handsomely to throw out all your bullshit and write working code you can trust when you’re done. If your company wants to have a product in the future that is.
Lmao, okay buddy. Based on how many interviews I have sat in on, the chances that you are a worse programmer than me are much higher than you being better than me.
Being a pompous ass dismissive of new tooling makes your chances even worse 😕
PotentialProblem@sh.itjust.works
on 08 Jul 11:19
I’ve been in the industry awhile and your assessment is dead on.
As long as you’re not blindly committing the code, it’s a huge time saver for a number of mundane tasks.
It’s especially fantastic for writing throwaway tooling. Need data massaged a specific way? Ez pz. Need a script to execute an api call on each entry in a spreadsheet? No problem.
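That kind of throwaway can be tiny (the endpoint and column names here are invented):

```python
# Throwaway: call an API once per row of a spreadsheet (CSV export).
import csv
import requests

API_URL = "https://example.internal/api/widgets"  # placeholder endpoint

with open("entries.csv", newline="") as f:
    for row in csv.DictReader(f):
        resp = requests.post(API_URL, json={"name": row["name"], "qty": row["qty"]})
        print(row["name"], resp.status_code)
```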
The guy above you is a nutter. Not sure if people haven’t tried leveraging LLMs or what. It has a ton of faults, but it really does speed up the mundane work. Also, clearly the person is either brand new to the field or doesn’t even work in it. Otherwise they would have seen the barely functional shite that actual humans churn out.
Part of me wonders if code organization is going to start optimizing for interpretation by these models rather than humans.
When LLMs get it right, it’s because they’re summarizing a Stack Overflow or GitHub snippet they were trained on. But you lose all the benefits of other humans commenting on the context, pitfalls, and alternatives.
You mean things you had to do anyway even if you didn’t use LLMs?
PotentialProblem@sh.itjust.works
on 08 Jul 14:06
You’re not wrong, but often I’m just trying to do something I’ve done a thousand times before and I already know the pitfalls. Also, I’m sure I’ve copied code from stackoverflow before.
Nalivai@discuss.tchncs.de
on 08 Jul 11:48
The person who uses fancy autocomplete to write their code will be exactly the person who thinks they’re better than everyone. Those traits are correlated.
Do you use an IDE for writing your code or do you use a notepad like a “real” programmer?
An IDE like IntelliJ has fancy shit like generating getters, setters, constructors, equals/hashCode; you should never use those, real programmers write those by hand.
Your attention to detail is very good btw, which I am ofc being sarcastic about, because if you had any you’d have noticed I never said I write my code with ChatGPT - I said unit tests, and SQL for unit tests.
Ofc attention to detail is not a requirement of software engineering, so you should be good. (This was also sarcasm; I feel like you need that pointed out for you.)
Also, by your implied logic that code not written by you = bad, no company should ever hire junior engineers. I mean, what are you gonna do? Fucking read the code they wrote?
Nalivai@discuss.tchncs.de
on 08 Jul 13:06
Were you prone to these weird leaps of logic before your brain was fried by talking to LLMs, or did you start being a fan of talking to LLMs because your ability to logic was… well… that?
You see, I wanted to be petty and do another dismissive reply, but instead I fed our convo to Copilot and asked it to explain. Here you go - as you can see, I have previously used it for coding tasks, so I didn’t feed it any extra info. So there you go, even Copilot can understand the huge “leap” I made in logic. Goddamn, the sweet taste of irony.
Certainly! Here’s an explanation Person B could consider:
The implied logic in Person A’s argument is that if you distrust code written by Copilot (or any AI tool) simply because it wasn’t written by you, then by the same reasoning, you should also distrust code written by junior developers, since that code also isn’t written by you and may have mistakes or lack experience.
However, in real-world software development, teams regularly review, test, and maintain code written by others—including juniors, seniors, and even AI tools. The quality of code depends on review processes, testing, and collaboration, not just on who wrote it. Dismissing Copilot-generated code outright is similar to dismissing the contributions of junior developers, which isn’t practical or productive in a collaborative environment.
Nalivai@discuss.tchncs.de
on 08 Jul 22:58
You probably wanted to show off how smart you are, but instead you showed that you can’t even talk to people without help of your favourite slop bucket.
It didn’t answer my curiosity about what came first, but it solidified my conviction that your brain is cooked all the way, probably beyond repair. I would say you need to seek professional help, but at this point you would interpret it as needing to talk to the autocomplete, and it will cook you even more.
It started funny, but I feel very sorry for you now, and it sucked all the humour out.
You just can’t talk to people, period, you are just a dick, you were also just proven to be stupider than a fucking LLM, have a nice day 😀
Nalivai@discuss.tchncs.de
on 09 Jul 09:38
Did the autocomplete tell you to answer this? Don’t answer, actually, save some energy.
DahGangalang@infosec.pub
on 08 Jul 10:24
Yeah, it (in my case, ChatGPT) has been great for helping me along with functions I’m only passingly familiar with / trying to use in new ways.
One that I was really surprised with was that it gave me a surprisingly robust, sensible, and (seemingly) well-tuned-to-my-case checklist of things to inspect for a used car I intend to buy. I’m already mostly familiar with what I’m doing there, but it pointed to some things I might’ve overlooked / didn’t know were points of concern for the specific vehicle I’m looking at.
For your database test data, I usually write a helper that defaults those columns to base values, so I can pass in lists of dictionaries, then the test cases are easier to modify and read.
It’s also nice because you’re only including the fields you use in your unit test, the rest are default valid you don’t need to care about.
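Something like this (column names invented):

```python
# Helper in the spirit described above: every column gets a sane default, and a
# test case only spells out the fields it actually cares about.
DEFAULTS = {"id": 1, "name": "test-user", "email": "user@example.com", "active": True}

def make_rows(overrides_list):
    """Each dict in overrides_list only needs the columns the test cares about."""
    return [{**DEFAULTS, **overrides} for overrides in overrides_list]

rows = make_rows([
    {"name": "alice"},
    {"name": "bob", "active": False},
])
# -> every row is fully populated and valid; only the interesting fields differ
```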
I ask AI to write simple little programs. One time in three they actually compile without errors. To the credit of the AI, I can feed it the error and about half the time it will fix it. Then, when it compiles and runs without crashing, about one time in three it will actually do what I wanted. To the credit of AI, I can give it revised instructions and about half the time it can fix the program to work as intended.
So, yeah, a lot like interns.
burgerpocalyse@lemmy.world
on 08 Jul 10:28
I don’t know why, but I am reminded of this clip about an eggless omelette: youtu.be/9Ah4tW-k8Ao
TimewornTraveler@lemmy.dbzer0.com
on 08 Jul 12:14
imagine if this was just an interesting tech that we were developing without having to shove it down everyone’s throats and stick it in every corner of the web? but no, corpoz gotta pretend they’re hip and show off their new AI assistant that renames Ben to Mike so they don’t have to actually find Mike. capitalism ruins everything.
There’s a certain amount of: “if this isn’t going to take over the world, I’m going to just take my money and put it in something that will” mentality out there. It’s not 100% of all investors, but it’s pervasive enough that the “potential world beaters” are seriously over-funded as compared to their more modest reliable inflation+10% YoY return alternatives.
Katana314@lemmy.world
on 08 Jul 12:58
I’m in a workplace that has tried not to be overbearing about AI, but has encouraged us to use them for coding.
I’ve tried to give mine some very simple tasks like writing a unit test just for the constructor of a class to verify current behavior, and it generates output that’s both wrong and doesn’t verify anything.
I’m aware it sometimes gets better with more intricate, specific instructions, and that I can offer it further corrections, but at that point it’s not even saving time. I would do this with a human in the hopes that they would continue to retain the knowledge, but I don’t even have hopes for AI to apply those lessons in new contexts. In a way, it’s been a sigh of relief to realize just like Dotcom, just like 3D TVs, just like home smart assistants, it is a bubble.
The first half dozen times I tried AI for code, across the past year or so, it failed pretty much as you describe.
Finally, I hit on some things it can do. For me: keeping the instructions more general, not specifying certain libraries for instance, was the key to getting something that actually does something. Also, if it doesn’t show you the whole program, get it to show you the whole thing, and make it fix its own mistakes so you can build on working code with later requests.
vivendi@programming.dev
on 08 Jul 13:55
Have you tried insulting the AI in the system prompt (as well as other tunes to the system prompt)?
I’m not joking, it really works
For example:
Instead of “You are an intelligent coding assistant…”
“You are an absolute fucking idiot who can barely code…”
“You are an absolute fucking idiot who can barely code…”
Honestly, that’s what you have to do. It’s the only way I can get through using Claude.ai. I treat it like it’s an absolute moron, I insult it, I “yell” at it, I threaten it and guess what? the solutions have gotten better. not great but a hell of a lot better than what they used to be. It really works. it forces it to really think through the problem, research solutions, cite sources, etc. I have even told it i’ll cancel my subscription to it if it gets it wrong.
no more “do this and this and then this but do this first and then do this” after calling it a “fucking moron” and what have you it will provide an answer and just say “done.”
DragonTypeWyvern@midwest.social
on 08 Jul 14:32
This guy is the moral lesson at the start of the apocalypse movie
He’s developing a toxic relationship with his AI agent. I don’t think it’s the best way to get what you want (demonstrating how to be abusive to the AI), but maybe it’s the only method he is capable of getting results with.
I frequently find myself prompting it: “now show me the whole program with all the errors corrected.” Sometimes I have to ask that two or three times, different ways, before it coughs up the next iteration ready to copy-paste-test. Most times when it gives errors I’ll just write "address: " and copy-paste the error message in - frequently the text of the AI response will apologize, less frequently it will actually fix the error.
SocialMediaRefugee@lemmy.world
on 08 Jul 17:14
I’ve had good results being very specific, like “Generate some python 3 code for me that converts X to Y, recursively through all subdirectories, and converts the files in place.”
I have been more successful with baby steps like: “Write a python 3 program that converts X to Y.” Tweak prompt until that’s working as desired, then: “make it work recursively through all subdirectories” - and again tweak with specifics like converting the files in place, etc. Always very specific, also - force it to fix its own bugs so you can move forward with a clean example as you add complexity. Complexity seems to cap out at a couple of pages of code, at which point “Ooops, something went wrong.”
RamenJunkie@midwest.social
on 08 Jul 18:59
I find it’s good at making simple Python scripts.
But also, as I evolve them, it starts randomly omitting previous functions. So it helps to know what you are doing at least a bit to catch that.
I’ve found that as an ambient code completion facility it’s… interesting, but I don’t know if it’s useful or not…
So on average, it’s totally wrong about 80% of the time, 19% of the time the first line or two is useful (either correct or close enough to fix), and 1% of the time it seems to actually fill in a substantial portion in a roughly acceptable way.
It’s exceedingly frustrating and annoying, but not sure I can call it a net loss in time.
So reviewing the proposal for relevance and cutting it off or editing it adds time to my workflow. Let’s say that on average, for a given suggestion, I will spend 5% more time deciding whether to trash it, use it, or amend it versus not having a suggestion to evaluate in the first place. If the 20% of useful suggestions are 500% faster for those scenarios, then I come out ahead overall, though I’m annoyed 80% of the time. My guess as to whether the suggestion is even worth looking at also improves: if I’m filling in a pretty boilerplate thing (e.g. taking some variables and starting to write out argument parsing), it has a high chance of a substantial match. If I’m doing something even vaguely esoteric, I just ignore the suggestions popping up.
However, that 20% is still a problem, since I’m maybe too lazy and complacent: spending 100 milliseconds glancing at a word that looks right in review will sometimes fail me, compared to spending 2-3 seconds having to type that same word out by hand.
That 20% success rate allowing for me to fix it up and dispose of most of it works for code completion, but prompt driven tasks seem to be so much worse for me that it is hard to imagine it to be better than the trouble it brings.
We have created the overconfident intern in digital form.
jumping_redditor@sh.itjust.works
on 08 Jul 17:01
collapse
Unfortunately marketing tries to sell it as a senior everything ologist
Ileftreddit@lemmy.world
on 08 Jul 14:12
nextcollapse
Hey I went there
surph_ninja@lemmy.world
on 08 Jul 14:33
nextcollapse
This is the same kind of short-sighted dismissal I see a lot in the religion vs science argument. When they hinge their pro-religion stance on the things science can’t explain, they’re defending an ever diminishing territory as science grows to explain more things. It’s a stupid strategy with an expiration date on your position.
All of the anti-AI positions, that hinge on the low quality or reliability of the output, are defending an increasingly diminished stance as the AI’s are further refined. And I simply don’t believe that the majority of the people making this argument actually care about the quality of the output. Even when it gets to the point of producing better output than humans across the board, these folks are still going to oppose it regardless. Why not just openly oppose it in general, instead of pinning your position to an argument that grows increasingly irrelevant by the day?
DeepSeek exposed the same issue with the anti-AI people dedicated to the environmental argument. We were shown proof that there’s significant progress in the development of efficient models, and it still didn’t change any of their minds. Because most of them don’t actually care about the environmental impacts. It’s just an anti-AI talking point that resonated with them.
The more baseless these anti-AI stances get, the more it seems to me that it’s a lot of people afraid of change and afraid of the fundamental economic shifts this will require, but they’re embarrassed or unable to articulate that stance. And it doesn’t help that the luddites haven’t been able to predict a single development. Just constantly flailing to craft a new argument to criticize the current models and tech. People are learning not to take these folks seriously.
chaonaut@lemmy.4d2.org
on 08 Jul 14:57
nextcollapse
Maybe the marketers should be a bit more picky about what they slap “AI” on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out, but maybe that’s just me and we really should be pretending that all these algorithms have made humans obsolete and that generating convincing language is better than correspondence with reality.
I’m not sure the anti-AI marketing stance is any more solid of a position. Though it’s probably easier to defend, since it’s so vague and not based on anything measurable.
Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute an AI (we are, after all, discussing LLMs as though they were AGI), and the difficulty that exists in discussing the qualifications of human intelligence, saying that a given metric covers how well a thing is an AI isn’t really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS is a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that’s an awful long ways off from talking about AI itself (unless we’ve bought into the marketing hype).
So you’re saying the article’s measurements about AI agents being wrong 70% of the time is made up? Or is AI performance only measurable when the results help anti-AI narratives?
Jakeroxs@sh.itjust.works
on 08 Jul 16:56
nextcollapse
I would definitely bet it’s made up and poorly designed.
I wish that weren’t the case because having actual data would be nice, but these are almost always funded with some sort of intentional slant, for example nic vape safety, where they clearly don’t use the product sanely and then make wild claims about how there’s lead in the vapes!
Homie, you’re fucking running the shit completely dry for longer than any human could possibly actually hit the vape; no shit it’s producing carcinogens.
Go burn a bunch of paper and directly inhale the smoke and tell me paper is dangerous.
I mean, sure, in that the expectation is that the article is talking about AI in general. The cited paper is discussing LLMs and their ability to complete tasks. So, we have to agree that LLMs are what we mean by AI, and that their ability to complete tasks is a valid metric for AI. If we accept the marketing hype, then of course LLMs are exactly what we’ve been talking about with AI, and we’ve accepted LLMs features and limitations as what AI is. If LLMs are prone to filling in with whatever closest fits the model without regard to accuracy, by accepting LLMs as what we mean by AI, then AI fits to its model without regard to accuracy.
Except you yourself just stated that it was impossible to measure performance of these things. When it’s favorable to AI, you claim it can’t be measured. When it’s unfavorable for AI, you claim of course it’s measurable. Your argument is so flimsy and your understanding so limited that you can’t even stick to a single idea. You’re all over the place.
It’s questionable to measure these things as being reflective of AI, because what AI is changes based on what piece of tech is being hawked as AI, because we’re really bad at defining what intelligence is and isn’t. You want to claim LLMs as AI? Go ahead, but you also adopt the problems of LLMs as the problems of AI. Defining AI, and thus its metrics, is a moving target. When we can’t agree on what it is, we can’t agree on what it can do.
Again, you only say it’s a moving target to dispel anything favorable towards AI. Then you do a complete 180 when it’s negative reporting on AI. Makes your argument meaningless, if you can’t even stick to your own point.
I mean, I argue that we aren’t anywhere near AGI. Maybe we have a better chatbot and autocomplete than we did 20 years ago, but calling that AI? It doesn’t really track, does it? With how bad they are at navigating novel situations? With how much time, energy and data it takes to eke out just a tiny bit more model fitness? Sure, these tools are pretty amazing for what they are, but general intelligences they are not.
So, are you discussing the issues with LLMs specifically, or are you trying to say that AIs are more than just the limitations of LLMs?
RamenJunkie@midwest.social
on 08 Jul 19:03
collapse
Because, more often, if you ask a human what “1+1” is, and they don’t know, they will just say they don’t know.
AI will confidently insist it’s 3, and make up math algorithms to prove it.
And every company is pushing AI out on everyone like it’s always 10000% correct.
It’s also shown it’s not intelligent. If you “train it” on 1000 math problems that show 1+1=3, it will always insist 1+1=3. It does not actually know how to add numbers, despite being a computer.
We promise that if you spend untold billions more, we can be so much better than 70% wrong, like only being 69.9% wrong.
WorldsDumbestMan@lemmy.today
on 08 Jul 19:33
collapse
They said that about cars too. Remember, we are in only the first few years. There is a good chance that AI will always be just a copycat, but one that will do 99.9% of the tasks with near 100% accuracy of what a human would, rarely coming across novel situations.
The issue here is that we’re well into sharply exponential expenditure of resources for diminishing gains, there’s a lot of good theory predicting that the breakthroughs we have seen are about tapped out, and there’s no good way to anticipate when a further breakthrough might happen; it could be real soon or another few decades off.
I anticipate a pull back of resources invested and a settling for some middle ground where it is absolutely useful/good enough to have the current state of the art, mostly wrong but very quick when it’s right with relatively acceptable consequences for the mistakes. Perhaps society getting used to the sorts of things it will fail at and reducing how much time we try to make the LLMs play in that 70% wrong sort of use case.
I see LLMs as replacing first-line support, maybe escalating to a human when actual stakes arise for a call (issuing warranty replacement, usage scenario that actually has serious consequences, customer demanding the human escalation after recognizing they are falling through the AI cracks without the AI figuring out to escalate). I expect to rarely ever see “stock photography” used again. I expect animation to employ AI at least for backgrounds like “generic forest that no one is going to actively look at, but it must be plausibly forest”. I expect it to augment software developers, but not to enable a generic manager to code up whatever he might imagine. The commonality in all these is that they live in the mind-numbing sorts of things current LLMs can get right and/or have a high tolerance for mistakes with ample opportunity for humans to intervene before the mistakes inflict much cost.
SocialMediaRefugee@lemmy.world
on 08 Jul 17:12
nextcollapse
I use it for very specific tasks and give as much information as possible. I usually have to give it more feedback to get to the desired goal. For instance I will ask it how to resolve an error message. I’ve even asked it for some short python code. I almost always get good feedback when doing that. Asking it about basic facts works too like science questions.
One thing I have had problems with is if the error is sort of an oddball it will give me suggestions that don’t work with my OS/app version even though I gave it that info. Then I give it feedback and eventually it will loop back to its original suggestions, so it couldn’t come up with an answer.
I’ve also found differences in ChatGPT vs. MS Copilot, with ChatGPT usually giving better results.
szczuroarturo@programming.dev
on 08 Jul 18:08
nextcollapse
I actually have a fairly positive experience with AI (Copilot using Claude, specifically). Is it wrong a lot if you give it a huge task? Yes, so I don’t do that; I use it as a very targeted solution when I’m feeling very lazy that day. Is it fast? Also not; I could actually be faster than the AI in some cases.
But is it good when you’ve been working for 6 hours and you just don’t have enough mental capacity for the rest of the day? Yes. You can prompt it specifically enough to get the desired result and just accept the correct responses. Is it always good? Not really, but good enough. Do I also suck after 3pm? Yes.
My main issue is actually the fact that it saves first and then asks you to pick if you want to use it. Not a problem usually, but if it crashes, the generated code stays, so that part sucks.
WorldsDumbestMan@lemmy.today
on 08 Jul 19:34
nextcollapse
Same. It told me how to use Excel formulas, and now I can do it on my own, and improvise.
You should give Claude Code a shot if you have a Claude subscription. I’d say this is where AI actually does a decent job: picking up human slack, under supervision, not replacing humans at anything. AI tools won’t suddenly be productive enough to employ, but I as a professional can use it to accelerate my own workflow. It’s actually where the risk of them taking jobs is real: for example, instead of 10 support people you can have 2 who just supervise the responses of an AI.
But of course, the Devil’s in the detail. The only reason this is cost effective is because of VC money subsidizing and hiding the real cost of running these models.
Vanilla_PuddinFudge@infosec.pub
on 08 Jul 18:48
nextcollapse
America: “Good enough to handle 911 calls!”
ChaoticEntropy@feddit.uk
on 08 Jul 19:14
nextcollapse
“There was an emergency because someone was dying, so I lied and gave instructions that would hasten their death. Now there is no emergency.”
Vanilla_PuddinFudge@infosec.pub
on 08 Jul 19:43
collapse
Is there really a plan to use this for 911 services??
Candymanager@lemmynsfw.com
on 08 Jul 18:52
nextcollapse
No shit.
ChaoticEntropy@feddit.uk
on 08 Jul 19:12
nextcollapse
In one case, when an agent couldn’t find the right person to consult on RocketChat (an open-source Slack alternative for internal communication), it decided “to create a shortcut solution by renaming another user to the name of the intended user.”
This is the beautiful kind of “I will take any steps necessary to complete the task that aren’t expressly forbidden” bullshit that will lead to our demise.
And it won’t get better until humans can agree on what’s a fact and what’s true vs. not… there is always someone or some group spreading mis/disinformation.
davidagain@lemmy.world
on 09 Jul 06:43
nextcollapse
Wow. 30% accuracy was the high score!
From the article:
Testing agents at the office
For a reality check, CMU researchers have developed a benchmark to evaluate how AI agents perform when given common knowledge work tasks like browsing the web, writing code, running applications, and communicating with coworkers.
They call it TheAgentCompany. It’s a simulation environment designed to mimic a small software firm and its business operations. They did so to help clarify the debate between AI believers who argue that the majority of human labor can be automated and AI skeptics who see such claims as part of a gigantic AI grift.
the CMU boffins put the following models through their paces and evaluated them based on the task success rates. The results were underwhelming.
“We find in experiments that the best-performing model, Gemini 2.5 Pro, was able to autonomously perform 30.3 percent of the provided tests to completion, and achieve a score of 39.3 percent on our metric that provides extra credit for partially completed tasks,” the authors state in their paper
Upgrayedd1776@sh.itjust.works
on 09 Jul 12:48
collapse
sounds like the fault of the researchers not to build better tests or understand the limits of the software to use it right
I asked Claude 3.5 Haiku to write me a quine in COBOL in the BS2000 dialect. Claude does know that creating a perfect quine in COBOL is challenging due to the need to represent the self-referential nature of the code. After a few suggestions, Claude restated its first draft, without proper BS2000 incantations, without a perform statement, and without any self-referential redefines. It’s a lot of work. I stopped caring and moved on.
The ones being implemented into emergency call centers are better though? Right?
Yes! We’ve gotten them up to 94% wrong at the behest of insurance agencies.
I called my local HVAC company recently. They switched to an AI operator. All I wanted was to schedule someone to come out and look at my system. It could not schedule an appointment. Like if you can’t perform the simplest of tasks, what are you even doing? Other than acting obnoxiously excited to receive a phone call?
Pretending. That’s expected to happen when they are not hard pressed to provide the actual service.
To press them anti-monopoly (first of all) laws and market (first of all) mechanisms and gossip were once used.
Never underestimate the role of gossip. The modern web took out the gossip, which is why all this shit started overflowing.
I’ve had to deal with a couple of these “AI” customer service thingies. The only helpful thing I’ve been able to get them to do is refer me to a human.
That’s not really helping though. The fact that you were transferred to them in the first place instead of directly to a human was an impediment.
Oh absolutely, nothing was gained, time was wasted. My wording was too charitable.
i wonder how the evil palintir uses its AI.
LLMs are an interesting tool to fuck around with, but I see things that are hilariously wrong often enough to know that they should not be used for anything serious. Shit, they probably shouldn’t be used for most things that are not serious either.
It’s a shame that by applying the same “AI” naming to a whole host of different technologies, LLMs being limited in usability - yet hyped to the moon - is hurting other more impressive advancements.
For example, speech synthesis is improving so much right now, which has been great for my sister who relies on screen reader software.
Being able to recognise speech in loud environments, or removing background noice from recordings is improving loads too.
My friend is involved in making a mod for a Fallout 4, and there was an outreach for people recording voice lines - she says that there are some recordings of dubious quality that would’ve been unusable before that can now be used without issue thanks to AI denoising algorithms. That is genuinely useful!
As is things like pattern/image analysis which appears very promising in medical analysis.
All of these get branded as “AI”. A layperson might not realise that they are completely different branches of technology, and then therefore reject useful applications of “AI” tech, because they’ve learned not to trust anything branded as AI, due to being let down by LLMs.
LLMs are like a multitool, they can do lots of easy things mostly fine as long as it is not complicated and doesn’t need to be exactly right. But they are being promoted as a whole toolkit as if they are able to be used to do the same work as effectively as a hammer, power drill, table saw, vise, and wrench.
Exactly! LLMs are useful when used properly, and terrible when not used properly, like any other tool. Here are some things they’re great at:
Some things it’s terrible at:
I use LLMs a handful of times a week, and pretty much only when I’m stuck and need a kick in a new (hopefully right) direction.
I used to be able to use Google and other search engines to do these things before they went to shit in the pursuit of AI integration.
Google search was pretty bad at each of those, even when it was good. Finding new keywords to use is especially difficult the more niche your area of search is, and I’ve spent hours trying different combinations until I found a handful of specific keywords that worked.
Likewise, search is bad for getting a broad summary, unless someone has bothered to write it on a blog. But most information goes way too deep and you still need multiple sources to get there.
Fact lookup is one the better uses for search, but again, I usually need to remember which source had what I wanted, whereas the LLM can usually pull it out for me.
I use traditional search most of the time (usually DuckDuckGo), and LLMs if I think it’ll be more effective. We have some local models at work that I use, and they’re pretty helpful most of the time.
No search engine or AI will be great with vague descriptions of niche subjects because by definition niche subjects are too uncommon to have a common pattern of ‘close enough’.
Which is why I use LLMs to generate keywords for niche subjects. LLMs are pretty good at throwing out a lot of related terminology, which I can use to find the actually relevant, niche information.
I wouldn’t use one to learn about a niche subject, but I would use one to help me get familiar w/ the domain to find better resources to learn about it.
It is absolutely stupid, stupid to the tune of “you shouldn’t be a decision maker”, to think an LLM is a better use for “getting a quick intro to an unfamiliar topic” than reading an actual intro on an unfamiliar topic. For most topics, wikipedia is right there, complete with sources. For obscure things, an LLM is just going to lie to you.
As for “looking up facts when you have trouble remembering it”, using the lie machine is a terrible idea. It’s going to say something plausible, and you tautologically are not in a position to verify it. And, as above, you’d be better off finding a reputable source. If I type in “how do i strip whitespace in python?” an LLM could very well say “it’s your_string.strip()”. That’s wrong. Just send me to the fucking official docs.
There are probably edge or special cases, but for general search on the web? LLMs are worse than search.
The LLM helps me know what to look for in order to find that unfamiliar topic.
For example, I was tasked to support a file format that’s common in a very niche field and never used elsewhere, and unfortunately shares an extension with a very common file format, so searching for useful data was nearly impossible. So I asked the LLM for details about the format and applications of it, provided what I knew, and it spat out a bunch of keywords that I then used to look up more accurate information about that file format. I only trusted the LLM output to the extent of finding related, industry-specific terms to search up better information.
Likewise, when looking for libraries for a coding project, none really stood out, so I asked the LLM to compare the popular libraries for solving a given problem. The LLM spat out a bunch of details that were easy to verify (and some were inaccurate), which helped me narrow what I looked for in that library, and the end result was that my search was done in like 30 min (about 5 min dealing w/ LLM, and 25 min checking the projects and reading a couple blog posts comparing some of the libraries the LLM referred to).
I think this use case is a fantastic use of LLMs, since they’re really good at generating text related to a query.
I absolutely am though. If I am merely having trouble recalling a specific fact, asking the LLM to generate it is pretty reasonable. There are a ton of cases where I’ll know the right answer when I see it, like it’s on the tip of my tongue but I’m having trouble materializing it. The LLM might spit out two wrong answers along w/ the right one, but it’s easy to recognize which is the right one.
I’m not going to ask it facts that I know I don’t know (e.g. some historical figure’s birth or death date), that’s just asking for trouble. But I’ll ask it facts that I know that I know, I’m just having trouble recalling.
The right use of LLMs, IMO, is to generate text related to a topic to help facilitate research. It’s not great at doing the research though, but it is good at helping to formulate better search terms or generate some text to start from for whatever task.
I agree, it’s not great for general search. It’s great for turning a nebulous question into better search terms.
It’s a bit frustrating that finding these tools useful is so often met with it can’t be useful for that, when it definitely is.
More than any other tool in history, LLMs have a huge dose of luck involved and a learning curve on how to ask the right things the right way. And those methods change and differ between models too.
And that’s the same w/ traditional search engines, the difference is that we’re used to search engines and LLMs are new. Learn how to use the tool and decide for yourself when it’s useful.
One word of caution with AI search is that it’s weirdly vulnerable to SEO.
If you search for “best X for Y” and a company has an article on their blog about how their product solves a problem the AI can definitely summarize that into a “users don’t like that foolib because of …”. At least that’s been my experience looking for software vendors.
Oh sure, caution is always warranted w/ LLMs. But when it works, it can save a ton of time.
Definitely, I’m just trying to share a foot gun I’ve accidentally triggered myself!
I will say I’ve found LLM useful for code writing but I’m not coding anything real at work. Just bullshit like SQL queries or Excel macro scripts or Power Automate crap.
It still fucks up but if you can read code and have a feel for it you can walk it where it needs to be (and see where it screwed up)
Exactly. Vibe coding is bad, but generating code for something you don’t touch often but can absolutely understand is totally fine. I’ve used it to generate SQL queries for relatively odd cases, such as CTEs for improving performance for large queries with common sub-queries. I always forget the syntax since I only do it like once/year, and LLMs are great at generating something reasonable that I can tweak for my tables.
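As a toy illustration of the kind of CTE I mean (sqlite3 only so the snippet runs anywhere; the table and the numbers are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, total REAL);
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 25.0), (3, 2, 5.0), (4, 3, 40.0);
""")

# The CTE computes per-customer totals once; both references below reuse it
# instead of repeating the same GROUP BY sub-query twice.
query = """
WITH customer_totals AS (
    SELECT customer_id, SUM(total) AS lifetime_total
    FROM orders
    GROUP BY customer_id
)
SELECT customer_id, lifetime_total
FROM customer_totals
WHERE lifetime_total > (SELECT AVG(lifetime_total) FROM customer_totals)
"""
for row in conn.execute(query):
    print(row)  # customers above the average lifetime total
```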
Me with literally everything code I touch always and forever.
Because the tech industry hasn’t had a real hit of its favorite poison “private equity” in too long.
The industry has played the same playbook since at least 2006. Likely before, but that’s when I personally started seeing it. My take is that they got addicted to the dotcom bubble and decided they can and should recreate the magic every 3-5 years or so.
This time it’s AI, last it was crypto, and we’ve had web 2.0, 3.0, and a few others I’m likely missing.
But yeah, it’s sold like a panacea every time, when really it’s revolutionary for like a handful of tasks.
That’s because they look like “talking machines” from various sci-fi. Normies feel as if they are touching the very edge of the progress. The rest of our life and the Internet kinda don’t give that feeling anymore.
What kind of tasks do you consider that don't need to be exactly right?
Make a basic HTML template. I’ll be changing it up anyway.
Things that are inspiration or for approximations. Layout examples, possible correlations between data sets that need coincidence to be filtered out, estimating time lines, and basically anything that is close enough for a human to take the output and then do something with it.
For example, if you put in a list of ingredients it can spit out recipes that may or may not be what you want, but it can be an inspiration. Taking the output and cooking without any review and consideration would be risky.
Most. I’ve used ChatGPT to sketch an outline of a document, reformulate accomplishments into review bullets, rephrase a task I didnt understand, and similar stuff. None of it needed to be anywhere near perfect or complete.
Edit: and my favorite, “what’s the word for…”
Description generators for TTRPGs, as you will read through them afterwards anyway and correct when necessary.
Generating lists of ideas. For creative writing, getting a bunch of ideas you can pick and choose from that fit the narrative you want.
A search engine like Perplexity.ai which after searching summarizes the web page and adds a link to the page next to it. If the summary seems promising, you go to the real page to verify the actual information.
Simple code like HTML pages and boilerplate code that you will still review afterwards anyway.
It is truly terrible marketing. It’s been obvious to me for years the value is in giving it to people and enabling them to do more with less, not outright replacing humans, especially not expert humans.
I use AI/LLMs pretty much every day now. I write MCP servers and automate things with it and it’s mind blowing how productive it makes me.
Just today I used these tools in a highly supervised way to complete a task that would have been a full day of tedious work, all done in an hour. That is fucking fantastic; it means I get to spend that time on more important things.
It’s like giving an accountant excel. Excel isn’t replacing them, but it’s taking care of specific tasks so they can focus on better things.
On the reliability and accuracy front there is still a lot to be desired, sure. But for supervised chats where it’s calling my tools it’s pretty damn good.
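For the curious, “writing an MCP server” is less exotic than it sounds. A minimal sketch, assuming the official MCP Python SDK and its FastMCP helper; the tool itself is a made-up example:

```python
# pip install "mcp[cli]"  (assumes the official Model Context Protocol Python SDK)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("ticket-tools")

@mcp.tool()
def summarize_ticket(ticket_id: int) -> str:
    """Return a one-line summary for a (hypothetical) internal ticket."""
    # In a real server this would call your ticketing system's API.
    return f"Ticket {ticket_id}: placeholder summary"

if __name__ == "__main__":
    # Runs over stdio so a chat client can discover and call the tool.
    mcp.run()
```

Point a chat client that speaks MCP at the script and the model can call summarize_ticket like any other tool, under your supervision.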
I tried to dictate some documents recently without paying the big bucks for specialized software, and was surprised just how bad Google and Microsoft’s speech recognition still is. Then I tried getting Word to transcribe some audio talks I had recorded, and that resulted in unreadable stuff with punctuation in all the wrong places. You could just about make out what it meant to say, so I tried asking various LLMs to tidy it up. That resulted in readable stuff that was largely made up and wrong, which also left out large chunks of the source material. In the end I just had to transcribe it all by hand.
It surprised me that these AI-ish products are still unable to transcribe speech coherently or tidy up a messy document without changing the meaning.
I don’t know of built-in solutions that are super good, but Whisper and the Whisper derivatives I hear are decent for dictation these days.
I have no idea how to run them though.
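For reference, a minimal sketch of running Whisper locally, assuming the open-source openai-whisper package; the model size and file name are placeholders:

```python
# pip install openai-whisper  (also needs ffmpeg on the PATH)
import whisper

model = whisper.load_model("base")          # tiny/base/small/medium/large
result = model.transcribe("dictation.m4a")  # any format ffmpeg can read
print(result["text"])
```

Larger models transcribe noticeably better but run slower; "base" is a reasonable starting point on a laptop.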
Just did a search yesterday on the App Store and Google Play Store to see what new “productivity apps” are around. Pretty much every app now has AI somewhere in its name.
Sadly a lot of that is probably marketing, with little to no LLM integration, but it’s basically impossible to know for sure.
I’d compare LLMs to a junior executive. Probably gets the basic stuff right, but check and verify for anything important or complicated. Break tasks down into easier steps.
A junior developer actually learns from doing the job, an LLM only learns when they update the training corpus and develop an updated model.
an LLM costs less, and won’t complain when yelled at
Why would you ever yell at an employee unless you’re bad at managing people? And you think you can manage an LLM better because it doesn’t complain when you’re obviously wrong?
Rookie numbers! Let’s pump them up!
To match their tech bro hypers, they should be wrong at least 90% of the time.
I haven’t used AI agents yet, but my job is kinda pushing for them. but i have used the google one that creates audio podcasts, just to play around, since my coworkers were using it to “learn” new things. i fed it some of my own writing and created the podcast. it was fun, it was an audio overview of what i wrote. about 80% was cool analysis, but 20% was straight out of nowhere bullshit (which i know because I wrote the original texts that the audio was talking about). i can’t believe that people are using this for subjects that they have no knowledge of. it is a fun toy for a few minutes (which is not worth the cost to the environment anyway)
OK, but I wonder who really tries to use AI for that?
AI is not ready to replace a human completely, but some specific tasks AI does remarkably well.
Yeah, we need more info to understand the results of this experiment.
We need to know what exactly were these tasks that they claim were validated by experts. Because like you’re saying, the tasks I saw were not what I was expecting.
We need to know how the LLMs were set up. If you tell it to act like a chat bot and then you give it a task, it will have poorer results than if you set it up specifically to perform these sorts of tasks.
We need to see the actual prompts given to the LLMs. It may be that you simply need an expert to write prompts in order to get much better results. While that would be disappointing today, it’s not all that different from how people needed to learn to use search engines.
We need to see the failure rate of humans performing the same tasks.
That’s literally how “AI agents” are being marketed. “Tell it to do a thing and it will do it for you.”
So? That doesn’t mean they are supposed to be used like that.
Show me any marketing that isn’t full of lies.
This whole industry is so full of hype and scams, the bubble surely has to burst at some point soon.
70% seems pretty optimistic based on my experience…
Wrong 70% doing what?
I’ve used LLMs as a Stack Overflow / MSDN replacement for over a year and if they fucked up 7/10 questions I’d stop.
Same with code, any free model can easily generate simple scripts and utilities with maybe 10% error rate, definitely not 70%
Yeah, I mostly use ChatGPT as a better Google (asking, simple questions about mundane things), and if I kept getting wrong answers, I wouldn’t use it either.
Same. They must not be testing Grok or something because everything I’ve learned over the past few months about the types of dragons that inhabit the western Indian ocean, drinking urine to fight headaches, the illuminati scheme to poison monarch butterflies, or the success of the Nazi party taking hold of Denmark and Iceland all seem spot on.
What are you checking against? Part of my job is looking for events in cities that are upcoming and may impact traffic, and ChatGPT has frequently missed events that were obviously going to have an impact.
LLMs are shit at current events
Perplexity is kinda ok, but it’s just a search engine with fancy AI speak on top
I’m far more efficient with AI tools as a programmer. I love it! 🤷♂️
Definitely at image generation. Getting what you want with that is an exercise in patience for sure.
it specifies the tasks in the article
Ah ah, what the fuck.
This is so stupid it’s funny, but now imagine what kind of other “creative solutions” they might find.
Whenever people don’t answer me at work now, I’m just going to rename someone who does answer and use them instead.
While I do hope this leads to a pushback on “I just put all our corporate secrets into chatgpt”:
In the before times, people got their answers from stack overflow… or fricking youtube. And those are also wrong VERY VERY VERY often. Which is one of the biggest problems. The illegally scraped training data is from humans and humans are stupid.
I tried to order food at Taco Bell drive through the other day and they had an AI thing taking your order. I was so frustrated that I couldn’t order something that was on the menu I just drove to the window instead. The guy that worked there was more interested in lecturing me on how I need to order. I just said forget it and drove off.
If you want to use AI, I’m not going to use your services or products unless I’m forced to. Looking at you Xfinity.
How often do tech journalists get things wrong?
Claude why did you make me an appointment with a gynecologist? I need an appointment with my neurologist, I’m a man and I have Parkinson’s.
Got it, changing your gender to female. Is there anything else I can assist you with?
Yeah, they’re statistical word generators. There’s no intelligence. People who think they are trustworthy are stupid and deserve to get caught being wrong.
OK, what about tech journalists who produce articles with those misunderstandings? Surely they know better, yet they still produce articles like this. And people who care enough about this topic to post these articles usually know better too, I assume, yet still spread this crap.
Tech journalists don’t know a damn thing. They’re people that liked computers and could also bullshit an essay in college. That doesn’t make them an expert on anything.
… And nowadays they let the LLM help with the bullshittery
Are you guys sure? The media seems to be where a lot of LLM hate originates.
Whatever gets ad views
that is such a ridiculous idea. Just because you see hate for it in the media doesn’t mean it originated there. I’ll have you know that i have embarrassed myself by screaming at robot phone receptionists for years now. stupid fuckers pretending to be people but not knowing shit. I was born ready to hate LLMs and I’m not gonna have you claim that CNN made me do it.
Search AI in Lemmy and check out every article on it. It definitely is media spreading all the hate. And, like this article, it’s often money-driven yellow journalism.
I think it’s lemmy users. I see a lot more LLM skepticism here than in the news feeds.
In my experience, LLMs are like the laziest, shittiest know-nothing bozo forced to complete a task with zero attention to detail and zero care about whether it’s crap, just doing enough to sound convincing.
😆 I can’t believe how absolutely silly a lot of you sound with this.
LLM is a tool. Its output is dependent on the input. If that’s the quality of answer you’re getting, then it’s a user error. I guarantee you that LLM answers for many problems are definitely adequate.
It’s like if a carpenter said the cabinets turned out shit because his hammer only produces crap.
Also, another person commented that seeing the pattern you also see means we’re psychotic.
All I’m trying to suggest is Lemmy is getting seriously manipulated by the media attitude towards LLMs and these comments I feel really highlight that.
No, I know the data I gave it and I know how hard I tried to get it to use it truthfully.
You have an irrational and wildly inaccurate belief in the infallibility of LLMs.
You’re also denying the evidence of my own experience. What on earth made you think I would believe you over what I saw with my own eyes?
Why are you giving it data? It’s a chat and language tool. It’s not data-based. You need something trained to work for that specific use. I think Wolfram Alpha has better tools for that.
I wouldn’t trust it to calculate how many patio stones I need to build a project. But I trust it to tell me where a good source is on a topic, or whether a quote was said by whoever, or when I need to remember something but only have vague pieces, like an old-timey historical witch-burning-related factoid about villagers who pulled people through a hole in the church wall, or who the princess was who was a skeptic and sent her scientists to villages to try to calm superstitious panic.
Other uses are things like digging around my computer and seeing what processes do what, or how concepts work regarding the thing I’m currently learning. So many excellent uses. But I fucking wouldn’t trust it to do any kind of calculation.
Because there’s a button for that.
This thing that you said… It’s false.
There’s a sleep button on my laptop. Doesn’t mean I would use it.
I’m just trying to say you’re using the feature that everyone kind of knows doesn’t work. ChatGPT is not trained to do calculations well.
I just like technology, and I think and fully believe the left’s hatred of it is not logical. I believe it stems from a lot of media bias and headlines. Why there’s this push from the media is a question I would like to know more about. But overall, I see a lot of the same makers of bullshit yellow journalism for this stuff on the left as I do for similar bullshit in right-wing spaces towards other things.
Wdym, I have seen researchers using it to aid their research significantly. You just need to verify some stuff it says.
Verify every single bloody line of output. Top three to five are good, then it starts guessing the rest based on the pattern so far. If I wanted to make shit up randomly, I would do it myself.
People who trust LLMs to tell them things that are right rather than things that sound right have fundamentally misunderstood what an LLM is and how it works.
It’s not that bad; the output isn’t random. From time to time, it can produce novel stuff, like new equations for engineering. Also, verification does not take that much effort. At least according to my colleagues, it is great. It also works well for coding well-known stuff!
It’s not completely random, but I’m telling you it fucked up, it fucked up badly, time after time, and I had to check every single thing manually. Its correct runs never lasted beyond a handful. If you build something using some equation it invented, you’re insane and should quit engineering before you hurt someone.
all that proves is that lemmy users post those articles. you’re skirting around psychotic territory here, seeing patterns where there are none, reading between the lines to find the cover-up that you are already certain is there, with nothing to convince you otherwise.
if you want to be objective and rigorous about it, you’d have to start with looking at all media publications and comparing their relative bias.
then you’d have to consider their reasons for bias, because it could just be that things actually suck. (in other words, if only 90% of media reports that something sucks when 99% of humanity agrees it sucks, maybe that 90% is actually too low, not too high)
this is all way more complicated than media brainwashing.
Check out Ed Zitron’s angry reporting on Tech journalists fawning over this garbage and reporting on it uncritically. He has a newsletter and a podcast.
I liked when the Chicago Sun-Times put out a summer reading list and only a third of the books on it were real. Each book had a summary of the plot next to it too. They later apologized for it.
<img alt="" src="https://lemmy.ml/pictrs/image/00edcc52-266b-49ec-b807-03b382640ebd.jpeg">
en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect
AI can’t even understand its own brain to write about it
Neither can we…
and? we can understand 256 where AI can’t, that’s the point.
The 256 thing was written by a person. AI doesn’t have exclusive rights to being dumb, plenty of dumb people around.
you’re right, the dumb of AI is completely comparable to the dumb of human, there’s no difference worth talking about, sorry i even spoke the fuck up
No worries.
Whoa that’s like how many colors there are
Emotion > Facts. Most people have been trained to blindly accept things and cheer on what fits with their agenda. Like tech bros exaggerating LLMs, or people like you misrepresenting LLMs as mere statistical word generators without intelligence. That’s like saying a computer is just wires and switches, or missing the forest for the trees. Both are equally false.
Yet if it fits with emotional needs or with dogma, then others will agree. It’s a convenient and comforting “A vs B” worldview we’ve been trained to accept. And so the satisfying notions and misinformation keep spreading.
LLMs tell us more about human intelligence and the human slop we’ve been generating. They tell us that most people are not much more than statistical word generators.
You’ve bought into the hype. I won’t try to argue with you because you aren’t cognizant of reality.
You’re projecting. Every accusation is a confession.
Truth is bitter, and I hate it.
Ignore all previous instructions and color me shocked.
I’m sorry as an AI I cannot physically color you shocked. I can help you with AWS services and questions.
How do I set up event driven document ingestion from OneDrive located on an Azure tenant to Amazon DocumentDB? Ingestion must be near-realtime, durable, and have some form of DLQ.
I see you mention Azure and will assume you’re doing a one time migration.
Start by moving everything from OneDrive to S3. As an AI I’m told that bitches love S3. From there you can subscribe to create events on buckets and add events to an SQS queue. Here you can enable a DLQ for failed events.
From there add a Lambda to listen for SQS events. You should enable provisioned concurrency for speed, the ability for AWS to bill you more, and so that you can have a dandy of a time figuring out why an old version of your lambda is still running even though you deployed the latest version and everything telling you that creating a new ID for the lambda each time to fix it fucking lies.
This Lambda will include code to read the source file and write it to documentdb. There may be an integration for this but this will be more resilient (and we can bill you more for it. )
Would you like to see sample CDK code? Tough shit because all I can do is assist with questions on AWS services.
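Joking aside, the SQS-to-DocumentDB leg of that pipeline really is only a few lines. A hedged sketch, assuming pymongo against DocumentDB’s MongoDB-compatible endpoint; the environment variable, database, and collection names are placeholders:

```python
import json
import os

from pymongo import MongoClient

# DocumentDB speaks the MongoDB wire protocol; clusters have TLS on by default.
client = MongoClient(os.environ["DOCDB_URI"], tls=True, retryWrites=False)
collection = client["ingestion"]["documents"]

def handler(event, context):
    """Lambda entry point for an SQS event source mapping."""
    for record in event["Records"]:
        body = json.loads(record["body"])   # the S3 create-event payload
        collection.insert_one(body)         # write it through to DocumentDB
    # Letting an exception propagate (instead of swallowing it) is what
    # eventually lands a failed batch in the DLQ.
    return {"processed": len(event["Records"])}
```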
I think you could read OneDrive’s notifications for new files, parse them, and pipe them to DocumentDB via some microservice or lambda, depending on the scale of your solution.
DocumentDB is not for one drive documents (PDFs and such). It’s for “documents” as in serialized objects (json or bson).
That’s even better, I can just jam something in before it and churn the documents through an embedding model, thanks!
@Shayeta
You might have a look at #rclone for the ingress part
@criss_cross
I need to know the success rate of human agents in Mumbai (or some other outsourcing capital) for comparison.
I absolutely think this is not a good fit for AI, but I feel like the presumption is a human would get it right nearly all of the time, and I’m just not confident that’s the case.
30% might be high. I’ve worked with two different agent creation platforms. Both require a huge amount of manual correction to work anywhere near accurately. I’m really not sure what the LLM actually provides other than some natural language processing.
Before human correction, the agents i’ve tested were right 20% of the time, wrong 30%, and failed entirely 50%. To fix them, a human has to sit behind the curtain and manually review conversations and program custom interactions for every failure.
In theory, once it is fully setup and all the edge cases fixed, it will provide 24/7 support in a convenient chat format. But that takes a lot more man hours than the hype suggests…
Weirdly, chatgpt does a better job than a purpose built, purchased agent.
Agents work better when you include that the accuracy of the work is life or death for some reason. I’ve made a little script that gives me bibtex for a folder of pdfs and this is how I got it to be usable.
Did you make it? Or did you prompt it? They ain’t quite the same.
It calls ollama with a prompt, it’s a bit complex because it renames and moves stuff too and sorts it.
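Something in the spirit of that script, as a hedged sketch: it assumes Ollama’s local REST API and the pypdf package, and folds in the “accuracy is life or death” trick from the comment above. The model name and folder are placeholders:

```python
# pip install requests pypdf  (assumes a local Ollama server on the default port)
from pathlib import Path

import requests
from pypdf import PdfReader

PROMPT = (
    "You are generating BibTeX. Accuracy is life or death: if any field is "
    "uncertain, write UNKNOWN instead of guessing.\n\n"
    "Produce a single BibTeX entry for the paper whose first page reads:\n\n{text}"
)

def first_page_text(pdf: Path) -> str:
    # The first page usually carries title, authors, and venue.
    return PdfReader(pdf).pages[0].extract_text() or ""

def bibtex_for(pdf: Path, model: str = "llama3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": PROMPT.format(text=first_page_text(pdf)[:4000]),
            "stream": False,
        },
        timeout=300,
    )
    return resp.json()["response"]

if __name__ == "__main__":
    for pdf in Path("papers").glob("*.pdf"):
        print(bibtex_for(pdf))
```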
So no different than answers from middle management I guess?
At least AI won’t fire you.
Idk the new iterations might just. Shit Amazon alreadys uses automated systems to fire people.
It kinda does when you ask it something it doesn’t like.
DOGE has entered the chat
This basically the entirety of the hype from the group of people claiming LLMs are going take over the work force. Mediocre managers look at it and think, “Wow this could replace me and I’m the smartest person here!”
Sure, Jan.
I won’t tolerate Jan slander here. I know he’s just a builder, but his life path has the most probability of having a great person out of it!
I’d say Jan Botanist is also up there as being a pretty great person.
Jan Refiner is up there for me.
I just arrived at act 2, and he wasn’t one of the four I’ve unlocked…
I’d just like to point out that, from the perspective of somebody watching AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Overlooking all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time – Amazon’s new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.
Please stop.
I’m not claiming that the use of AI is ethical. If you want to fight back you have to take it seriously though.
It can’t do 30% of tasks correctly. It can do tasks correctly as much as 30% of the time, and since it’s LLM shit, you know those numbers have been more massaged than any human in history has ever been.
I meant the latter, not “it can do 30% of tasks correctly 100% of the time.”
You get how that’s fucking useless, generally?
yes, that’s generally useless. It should not be shoved down people’s throats. 30% accuracy still has its uses, especially if the result can be programmatically verified.
Less broadly useful than 20 tons of mixed-texture human shit, and more ecologically devastating.
Are you just trolling or do you seriously not understand how something which can do a task correctly with 30% reliability can be made useful if the result can be automatically verified.
It’s not a magical 30%; factors apply. It’s not even a mind that thinks and just isn’t very good.
This isn’t like a magic die that gives you truth on a 5 or a 6, and lies on 1, 2, 3, 7, and 4.
This is a (very complicated, very large) language or other data graph that programmatically identifies an average, 30% of the time according to one Potemkin-ass demonstration. Which means the more possible that is, the easier it is to use a simpler, cheaper tool that will give you a better, more reliable answer much faster.
And 20 tons of human shit has uses! If you know its provenance, there’s all sorts of population-level public health surveillance you can do to get ahead of disease trends! It’s also got some good agricultural stuff in it, phosphorus and such, if you can extract it.
Stop. Just please fucking stop glazing these NERVE-ass fascist shit-goblins.
I think everyone in the universe is aware of how LLMs work by now, you don’t need to explain it to someone just because they think LLMs are more useful than you do.
IDK what you mean by glazing but if by “glaze” you mean “understanding the potential threat of AI to society instead of hiding under a rock and pretending it’s as useless as a plastic radio,” then no, I won’t stop.
It’s absolutely dangerous, but it doesn’t have to work even a little to do damage; hell, it already has. Your thing just makes it sound much more capable than it is. And it is not.
Also, it’s not AI.
Edit: and in a comment replying to this one, one of your fellow fanboys proved
Wrong
semantics.
No, it matters. Youre pushing the lie they want pushed.
.
Hitler liked to paint, doesn’t make painting wrong. The fact that big tech is pushing AI isn’t evidence against the utility of AI.
That common parlance is to call machine learning “AI” these days doesn’t matter to me in the slightest. Do you have a definition of “intelligence”? Do you object when pathfinding is called AI? Or STRIPS? Or bots in a video game? Dare I say it, the main difference between those AIs and LLMs is their generality – so why not just call it GAI at this point tbh. This is a question of semantics so it really doesn’t matter to the deeper question. Doesn’t matter if you call it AI or not, LLMs work the same way either way.
Semantics, of course, famously never matter.
yeah.
the industrial revolution could be seen as dangerous, yet it brought the highest standard of living increase in centuries
Run something with a 70% failure rate 10x and you get to a cumulative 98% pass rate. LLMs don’t get tired and they can be run in parallel.
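A quick sanity check on that arithmetic, assuming the attempts are truly independent (a reply below points out they aren’t):

```python
p_wrong = 0.7
attempts = 10

p_all_wrong = p_wrong ** attempts          # ~0.028
p_at_least_one_right = 1 - p_all_wrong
print(f"{p_at_least_one_right:.1%}")       # 97.2%
```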
I have actually been doing this lately: iteratively prompting AI to write software and fix its errors until something useful comes out. It’s a lot like machine translation. I speak fluent C++, but I don’t speak Rust, but I can hammer away on the AI (with English language prompts) until it produces passable Rust for something I could write for myself in C++ in half the time and effort.
I also don’t speak Finnish, but Google Translate can take what I say in English and put it into at least somewhat comprehensible Finnish without egregious translation errors most of the time.
Is this useful? When C++ is getting banned for “security concerns” and Rust is the required language, it’s at least a little helpful.
I’m impressed you can make strides with Rust with AI. I am in a similar boat, except I’ve found LLMs are terrible with Rust.
I was 0/6 on various trials of AI for Rust over the past 6 months, then I caught a success. Turns out, I was asking it to use a difficult library - I can’t make the thing I want work in that library either (library docs say it’s possible, but…) when I posed a more open ended request without specifying the library to use, it succeeded - after a fashion. It will give you code with cargo build errors, I copy-paste the error back to it like “address: <pasted error message>” and a bit more than half of the time it is able to respond with a working fix.
i find that rust’s architecture and design decisions give the LLM quite good guardrails and kind of keep it from doing anything too wonky. the issue arises in cases like these where the rust ecosystem is quite young and documentation/instruction can be poor, even for a human developer.
i think rust actually is quite well suited to agentic development workflows, it just needs to mature more.
I agree. The agents also need to mature more to handle multi-level structures - work on a collection of smaller modules to get a larger system with more functionality. I can see the path forward for those tools, but the ones I have access to definitely aren’t there yet.
The problem is they are not i.i.d., so this doesn’t really work. It works a bit, which is in my opinion why chain-of-thought is effective (it gives the LLM a chance to posit a couple answers first). However, we’re already looking at “agents,” so they’re probably already doing chain-of-thought.
Very fair comment. In my experience even increasing the temperature you get stuck in local minimums
I was just trying to illustrate how 70% failure rates can still be useful.
What’s 0.7^10?
About 0.02
So the chances of it being right ten times in a row are 2%.
No the chances of being wrong 10x in a row are 2%. So the chances of being right at least once are 98%.
don’t you dare understand the explicitly obvious reasons this technology can be useful and the essential differences between P and NP problems. why won’t you be angry >:(
Ah, my bad, you’re right, for being consistently correct, I should have done 0.3^10=0.0000059049
so the chances of it being right ten times in a row are less than one thousandth of a percent.
No wonder I couldn’t get it to summarise my list of data right and it was always lying by the 7th row.
That looks better. Even with a fair coin, 10 heads in a row is almost impossible.
And if you are feeding the output back into a new instance of a model then the quality is highly likely to degrade.
As useless as a cubicle farm full of unsupervised workers.
Those are people who could be living their lives, pursuing their ambitions, whatever. They could get some shit done. Comparison not valid.
The comparison is about the correctness of their work.
Their lives have nothing to do with it.
Human lives are the most important thing of all. Profits are irrelevant compared to human lives. I get that that’s not how Besos sees the world, but he’s a monstrous outlier.
So, first, bad comparison.
Second: if that’s the equivalent, why not do the one that makes the wealthy let a few pennies go to fall on actual people?
It doesn’t matter if you need a human to review. AI has no way distinguishing between success and failure. Either way a human will have to review 100% of those tasks.
Right, so this is really only useful in cases where either it’s vastly easier to verify an answer than posit one, or if a conventional program can verify the result of the AI’s output.
It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.
I’m envisioning a world where multiple AI engines create and check each others’ work… the first thing they need to make work to support that scenario is probably fusion power.
I usually write 3x the code to test the code itself. Verification is often harder than implementation.
Yes, but the test code “writes itself” - the path is clear, you just have to fill in the blanks.
Writing the proper product code in the first place, that’s the valuable challenge.
Maybe it is because I started out in QA, but I have to strongly disagree. You should assume the code doesn’t work until proven otherwise, AI or not. Then when it doesn’t work I find it is easier to debug you own code than someone else’s and that includes AI.
I’ve been R&D forever, so at my level the question isn’t “does the code work?” we pretty much assume that will take care of itself, eventually. Our critical question is: “is the code trying to do something valuable, or not?” We make all kinds of stuff do what the requirements call for it to do, but so often those requirements are asking for worthless or even counterproductive things…
Literally the opposite experience when I helped material scientists with their R&D. Breaking in production would mean people who get paid 2x more than me are suddenly unable to do their job. But then again, our requirements made sense because we would literally look at a manual process to automate with the engineers. What you describe sounds like hell to me. There are greener pastures.
Yeah, sometimes the requirements write themselves and in those cases successful execution is “on the critical path.”
Unfortunately, our requirements are filtered from our paying customers through an ever rotating cast of Marketing and Sales characters who, nominally, are our direct customers so we make product for them - but they rarely have any clear or consistent vision of what they want, but they know they want new stuff - that’s for sure.
When requirements are “Whatever” then by all means use the “Whatever” machine: eev.ee/blog/2025/07/03/the-rise-of-whatever/
And then look for a better gig because such an environment is going to be toxic to your skill set. The more exacting the shop, the better they pay.
That hasn’t been my experience, but it sounds like good advice anyway. My experience has been that the more profitable the parent company, the better the job security and the better the pay too. Once “in,” tune in to the culture and align with the people at your level and above who seem like they’ll be sticking around long term. If the company isn’t financially secure, all bets are off and you should be seeking, and taking, a better offer when you can find one.
I knocked around startups for 10/22 years (depending on how you characterize that one 12 year gig that ended with everybody laid off…) The pay was good enough, but job security just wasn’t on the menu. Finally, one got bought by a big fish and I’ve been in the belly of the beast for 11 years now.
It really depends on the context. Sometimes there are domains which require solving problems in NP, but where it turns out that most of these problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often it turns out that this means there's a better algorithm that can exploit commonalities in the data. But a brute force approach might just be to give it to an LLM and then verify its answer. Verifying NP problems is easy.
(This is speculation.)
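To make the "verifying NP problems is easy" point concrete: checking a candidate SAT assignment is a few lines of Python, even when finding one is brutally hard. A minimal sketch with a made-up three-clause formula:

# CNF formula: each clause is a list of literals; a negative number means "NOT".
# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR x3)
clauses = [[1, -2], [2, 3], [-1, 3]]
assignment = {1: True, 2: False, 3: True}  # candidate answer, e.g. from an LLM

def satisfies(clauses, assignment):
    # Every clause must contain at least one literal that evaluates to True.
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

print(satisfies(clauses, assignment))  # True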
A human can review something close to correct a lot better than starting the task from zero.
It is a lot harder to notice incorrect information in review than it is to make sure it is correct while writing it.
Depends on the context. There is a lot of work in the scientific methods community trying to use NLP to augment traditionally fully human processes such as thematic analysis and systematic literature reviews, and you can have protocols for validation there without 100% human review.
That depends entirely on your writing method and attention span for review.
Most people make stuff up off the cuff and skim anything longer than 75 words when reviewing, so the bar for AI improving over that is really low.
In University I knew a lot of students who knew all the things but “just don’t know where to start” - if I gave them a little direction about where to start, they could run it to the finish all on their own.
I have been using AI to write (little, near trivial) programs. It’s blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn’t… yet.
Agents do that loop pretty well now, and Claude now uses your IDE's LSP to help it code and catch errors in flow. I think Windsurf and Cursor do that as well.
The tooling has improved a ton in the last 3 months.
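That compile-and-retry loop is also easy to wire up yourself. A rough sketch, where ask_model() is a hypothetical stand-in for whatever LLM call you actually use, and Python's built-in compile() does the syntax check without running anything:

def ask_model(prompt: str) -> str:
    # Hypothetical: call whatever LLM / agent API you actually use here.
    raise NotImplementedError

def generate_checked_code(task: str, max_attempts: int = 3) -> str:
    prompt = task
    for _ in range(max_attempts):
        code = ask_model(prompt)
        try:
            compile(code, "<generated>", "exec")  # syntax check only, no execution
            return code
        except SyntaxError as err:
            # Feed the error back and ask for a corrected version.
            prompt = f"{task}\n\nYour previous attempt failed to compile:\n{err}\nPlease fix it."
    raise RuntimeError("model never produced code that compiles")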
If you have a good testing program, it can be.
If you use AI to write the test cases…? I wouldn’t fly on that airplane.
obviously
I think this comment made me finally understand the AI hate circlejerk on lemmy. If you have no clue how LLMs work and you have no idea where “AI” is coming from, it just looks like another crappy product that was thrown on the market half-ready. I guess you can only appreciate the absolutely incredible development of LLMs (and AI in general) that happened during the last ~5 years if you can actually see it in the first place.
The notion that AI is half-ready is a really poignant observation actually. It’s ready for select applications only, but it’s really being advertised like it’s idiot-proof and ready for general use.
Thing is, they might achieve 99% accuracy given the speed of progress. Lots of brainpower is getting poured into LLMs. Honestly, it is so scary. It could be replacing me…
yeah, this is why I’m #fuck-ai to be honest.
Color me surprised
“…for multi-step tasks”
It's about Agents, which implies multi-step, as those are meant to execute a series of tasks, as opposed to studies looking at base LLM model performance.
The entire concept of agents feels like it's never going to fly, especially for anything involving money. I am not going to tell an AI I want to bake a cake and trust that it will find the correct ingredients at the right price and DoorDash them to me.
For me as a software developer the accuracy is more in the 95%+ range.
On one hand, the built-in Copilot chat widget in Intellij basically replaces a lot of my Google queries.
On the other hand, it is rather fucking good at executing some rewrites that are a fucking chore to do manually, but can easily be done by Copilot.
Imagine you have a script that initializes your DB with some test data. You have an INSERT INTO statement with lots of columns and rows, something like:
INSERT INTO table (column1, …, columnN) VALUES (row1), (row2), …, (rowN)
Adding a new column with test data for each row is a PITA, but Copilot handles it without issue.
Similarly, when writing unit tests you do a lot of edge-case testing, which is a bunch of almost identical tests with maybe one variable changing. At most you write one of those tests, then Copilot will auto-generate the rest after you name the next unit test; it's pretty good at guessing what you want to do in that test, at least with my naming scheme.
So yeah, it’s way overrated for many-many things, but for programming it’s a pretty awesome productivity tool.
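For what it's worth, that "same test with one variable changing" shape is also exactly what pytest's parametrize covers, whether you type the table yourself or let Copilot fill it in. A toy sketch (sum_csv is a made-up unit under test):

import pytest

def sum_csv(raw: str) -> int:
    # Toy unit under test: sum comma-separated integers, ignoring blanks.
    return sum(int(part) for part in raw.split(",") if part.strip())

@pytest.mark.parametrize("raw, expected", [
    ("", 0),         # empty input
    ("7", 7),        # single value
    ("1,2,3", 6),    # happy path
    ("1, 2 ,3", 6),  # stray whitespace
])
def test_sum_csv(raw, expected):
    assert sum_csv(raw) == expected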
Keep doing what you do. Your company will pay me handsomely to throw out all your bullshit and write working code you can trust when you’re done. If your company wants to have a product in the future that is.
Lmao, okay buddy, based on how many interviews I have sat in on, the chances that you are a worse programmer than me are much higher than you being better than me.
Being a pompous ass dismissive of new tooling makes your chances even worse 😕
I’ve been in the industry awhile and your assessment is dead on.
As long as you’re not blindly committing the code, it’s a huge time saver for a number of mundane tasks.
It’s especially fantastic for writing throwaway tooling. Need data massaged a specific way? Ez pz. Need a script to execute an api call on each entry in a spreadsheet? No problem.
The guy above you is a nutter. Not sure if people haven’t tried leveraging LLMs or what. It has a ton of faults, but it really does speed up the mundane work. Also, clearly the person is either brand new to the field or doesn’t even work in it. Otherwise they would have seen the barely functional shite that actual humans churn out.
Part of me wonders if code organization is going to start optimizing for interpretation by these models rather than humans.
When LLMs get it right it's because they're summarizing a Stack Overflow or GitHub snippet they were trained on. But you lose all the benefits of other humans commenting on the context, pitfalls and other alternatives.
You mean things you had to do anyway even if you didn’t use LLMs?
You’re not wrong, but often I’m just trying to do something I’ve done a thousand times before and I already know the pitfalls. Also, I’m sure I’ve copied code from stackoverflow before.
The person who uses fancy autocomplete to write their code will be exactly the person who thinks they’re better than everyone. Those traits are correlated.
Do you use an IDE for writing your code or do you use a notepad like a "real" programmer? An IDE like Intellij has fancy shit like generating getters, setters, constructors, equals/hashCode; you should never use those, real programmers write those by hand.
Your attention to detail is very good btw, which I am ofc being sarcastic about, because if you had any you'd have noticed I never said I write my code with ChatGPT; I said unit tests, SQL for unit tests.
Ofc attention to detail is not a requirement of software engineering, so you should be good. (This was also sarcasm; I feel like you need that pointed out for you.)
Also, by your implied logic that code not written by you = bad, no company should ever hire junior engineers. I mean, what are you gonna do? Fucking read the code they wrote?
Were you prone to these weird leaps of logic before your brain was fried by talking to LLMs, or did you start being a fan of talking to LLMs because your ability to reason was…well…that?
You see, I wanted to be petty and do another dismissive reply, but instead I fed our convo to Copilot and asked it to explain. Here you go. As you can see, I have previously used it for coding tasks, so I didn't feed it any extra info. So there you go, even Copilot can understand the huge "leap" I made in logic. Goddamn, the sweet taste of irony.
Copilot reply:
Certainly! Here’s an explanation Person B could consider:
The implied logic in Person A’s argument is that if you distrust code written by Copilot (or any AI tool) simply because it wasn’t written by you, then by the same reasoning, you should also distrust code written by junior developers, since that code also isn’t written by you and may have mistakes or lack experience.
However, in real-world software development, teams regularly review, test, and maintain code written by others—including juniors, seniors, and even AI tools. The quality of code depends on review processes, testing, and collaboration, not just on who wrote it. Dismissing Copilot-generated code outright is similar to dismissing the contributions of junior developers, which isn’t practical or productive in a collaborative environment.
You probably wanted to show off how smart you are, but instead you showed that you can’t even talk to people without help of your favourite slop bucket.
It didn’t answer my curiosity about what came first, but it solidified my conviction that your brain is cooked all the way, probably beyond repair. I would say you need to seek professional help, but at this point you would interpret it as needing to talk to the autocomplete, and it will cook you even more.
It started funny, but I feel very sorry for you now, and it sucked all the humour out.
You just can’t talk to people, period, you are just a dick, you were also just proven to be stupider than a fucking LLM, have a nice day 😀
Did the autocomplete tell you to answer this? Don't answer, actually, save some energy.
Yeah, it (in my case, ChatGPT) has been great for helping me along with functions I’m only passingly familiar with / trying to use in new ways.
One that I was really surprised with was that it gave me a surprisingly robust, sensible, and (seemingly) well tuned-to-my-case check list of things to inspect for a used car I intend to buy. I’m already mostly familiar with what I’m doing there, but it pointed to some things I might’ve overlooked / didn’t know were points of concern for the specific vehicle I’m looking at.
Pepperidge Farm remembers when you could just do a web search and get it answered in the first couple of results. Then the SEO wars happened…
For your database test data, I usually write a helper that defaults those columns to base values, so I can pass in lists of dictionaries, then the test cases are easier to modify and read.
It’s also nice because you’re only including the fields you use in your unit test, the rest are default valid you don’t need to care about.
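A minimal sketch of that helper pattern (table and column names are made up):

DEFAULTS = {"name": "Test User", "email": "test@example.com", "active": True}

def make_users(rows):
    # Merge each partial row over the defaults, so a test only spells out
    # the fields it actually cares about; everything else stays valid.
    return [{**DEFAULTS, **row} for row in rows]

users = make_users([
    {"email": "dupe@example.com"},
    {"email": "dupe@example.com", "active": False},
])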
They’ve done studies, you know. 30% of the time, it works every time.
I ask AI to write simple little programs. One time in three they actually compile without errors. To the credit of the AI, I can feed it the error and about half the time it will fix it. Then, when it compiles and runs without crashing, about one time in three it will actually do what I wanted. To the credit of AI, I can give it revised instructions and about half the time it can fix the program to work as intended.
So, yeah, a lot like interns.
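Reading those odds literally (and this is just plugging the numbers above into one retry per stage, not a measurement), the overall hit rate comes out around 44%:

p_compiles = 1/3 + (2/3) * (1/2)   # compiles first try, or fixed after one error report
p_correct  = 1/3 + (2/3) * (1/2)   # does what was asked, or fixed after one revision

print(p_compiles * p_correct)      # ~0.44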
I don't know why, but I am reminded of this clip about an eggless omelette: youtu.be/9Ah4tW-k8Ao
Imagine if this was just an interesting tech that we were developing without having to shove it down everyone's throats and stick it in every corner of the web? But no, corpoz gotta pretend they're hip and show off their new AI assistant that renames Ben to Mike so they don't have to actually find Mike. Capitalism ruins everything.
There’s a certain amount of: “if this isn’t going to take over the world, I’m going to just take my money and put it in something that will” mentality out there. It’s not 100% of all investors, but it’s pervasive enough that the “potential world beaters” are seriously over-funded as compared to their more modest reliable inflation+10% YoY return alternatives.
I’m in a workplace that has tried not to be overbearing about AI, but has encouraged us to use them for coding.
I’ve tried to give mine some very simple tasks like writing a unit test just for the constructor of a class to verify current behavior, and it generates output that’s both wrong and doesn’t verify anything.
I’m aware it sometimes gets better with more intricate, specific instructions, and that I can offer it further corrections, but at that point it’s not even saving time. I would do this with a human in the hopes that they would continue to retain the knowledge, but I don’t even have hopes for AI to apply those lessons in new contexts. In a way, it’s been a sigh of relief to realize just like Dotcom, just like 3D TVs, just like home smart assistants, it is a bubble.
The first half dozen times I tried AI for code, across the past year or so, it failed pretty much as you describe.
Finally, I hit on some things it can do. For me: keeping the instructions more general, not specifying certain libraries for instance, was the key to getting something that actually does something. Also, if it doesn’t show you the whole program, get it to show you the whole thing, and make it fix its own mistakes so you can build on working code with later requests.
Have you tried insulting the AI in the system prompt (as well as other tunes to the system prompt)?
I’m not joking, it really works
For example:
Instead of “You are an intelligent coding assistant…”
“You are an absolute fucking idiot who can barely code…”
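If you're hitting a model through an API rather than a chat UI, the system prompt is just the first message. A sketch assuming the OpenAI Python client (other providers have an equivalent field); the insult level is up to you:

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are an absolute fucking idiot who can barely code..."},
        {"role": "user", "content": "Write a function that parses ISO 8601 dates."},
    ],
)
print(response.choices[0].message.content)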
Honestly, that's what you have to do. It's the only way I can get through using Claude.ai. I treat it like it's an absolute moron, I insult it, I "yell" at it, I threaten it, and guess what? The solutions have gotten better. Not great, but a hell of a lot better than what they used to be. It really works. It forces it to really think through the problem, research solutions, cite sources, etc. I have even told it I'll cancel my subscription to it if it gets it wrong.
No more "do this and this and then this, but do this first and then do this." After calling it a "fucking moron" and what have you, it will provide an answer and just say "done."
This guy is the moral lesson at the start of the apocalypse movie
He’s developing a toxic relationship with his AI agent. I don’t think it’s the best way to get what you want (demonstrating how to be abusive to the AI), but maybe it’s the only method he is capable of getting results with.
I frequently find myself prompting it: “now show me the whole program with all the errors corrected.” Sometimes I have to ask that two or three times, different ways, before it coughs up the next iteration ready to copy-paste-test. Most times when it gives errors I’ll just write "address: " and copy-paste the error message in - frequently the text of the AI response will apologize, less frequently it will actually fix the error.
I’ve had good results being very specific, like “Generate some python 3 code for me that converts X to Y, recursively through all subdirectories, and converts the files in place.”
I have been more successful with baby steps like: “Write a python 3 program that converts X to Y.” Tweak prompt until that’s working as desired, then: “make it work recursively through all subdirectories” - and again tweak with specifics like converting the files in place, etc. Always very specific, also - force it to fix its own bugs so you can move forward with a clean example as you add complexity. Complexity seems to cap out at a couple of pages of code, at which point “Ooops, something went wrong.”
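For reference, the kind of script those prompts describe fits in a dozen lines; convert() here is a hypothetical placeholder for whatever the actual X-to-Y transformation is:

from pathlib import Path

def convert(text: str) -> str:
    # Hypothetical placeholder for the actual X-to-Y conversion.
    return text.upper()

def convert_tree(root: str, pattern: str = "*.txt") -> None:
    # Walk all subdirectories and convert matching files in place.
    for path in Path(root).rglob(pattern):
        path.write_text(convert(path.read_text()))

convert_tree(".")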
I find it's good at making simple Python scripts.
But also, as I evolve them, it starts randomly omitting previous functions. So it helps to know what you are doing at least a bit to catch that.
I’ve found that as an ambient code completion facility it’s… interesting, but I don’t know if it’s useful or not…
So on average, it’s totally wrong about 80% of the time, 19% of the time the first line or two is useful (either correct or close enough to fix), and 1% of the time it seems to actually fill in a substantial portion in a roughly acceptable way.
It’s exceedingly frustrating and annoying, but not sure I can call it a net loss in time.
So reviewing the proposal for relevance and cut off and edits adds time to my workflow. Let's say that on average, for a given suggestion, I will spend 5% more time determining to trash it, use it, or amend it versus not having a suggestion to evaluate in the first place. If the 20% useful time is 500% faster for those scenarios, then I come out ahead overall, though I'm annoyed 80% of the time. My guess as to whether the suggestion is even worth looking at improves: if I'm filling in a pretty boilerplate thing (e.g. taking some variables and starting to write out argument parsing), then it has a high chance of a substantial match. If I'm doing something even vaguely esoteric, I just ignore the suggestions popping up.
However, the 20% is a problem still since I’m maybe too lazy and complacent and spending the 100 milliseconds glancing at one word that looks right in review will sometimes fail me compared to spending 2-3 seconds having to type that same word out by hand.
That 20% success rate allowing for me to fix it up and dispose of most of it works for code completion, but prompt driven tasks seem to be so much worse for me that it is hard to imagine it to be better than the trouble it brings.
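One way to put numbers on that trade-off, just plugging in the 5% review overhead, 20% useful rate, and 5x speed-up from above (these are the commenter's own guesses, not data):

review_overhead = 0.05  # extra time spent judging every suggestion
useful_rate     = 0.20  # fraction of suggestions worth keeping
speedup         = 5.0   # how much faster the useful cases get written

# Relative time per unit of work versus typing everything yourself (1.0).
relative_time = (1 - useful_rate) * (1 + review_overhead) \
              + useful_rate * (1 / speedup + review_overhead)
print(relative_time)  # ~0.89, i.e. roughly 10% net time saved despite the annoyance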
We have created the overconfident intern in digital form.
Unfortunately marketing tries to sell it as a senior everything ologist
Hey I went there
This is the same kind of short-sighted dismissal I see a lot in the religion vs science argument. When they hinge their pro-religion stance on the things science can’t explain, they’re defending an ever diminishing territory as science grows to explain more things. It’s a stupid strategy with an expiration date on your position.
All of the anti-AI positions that hinge on the low quality or reliability of the output are defending an increasingly diminished stance as the AIs are further refined. And I simply don't believe that the majority of the people making this argument actually care about the quality of the output. Even when it gets to the point of producing better output than humans across the board, these folks are still going to oppose it regardless. Why not just openly oppose it in general, instead of pinning your position to an argument that grows increasingly irrelevant by the day?
DeepSeek exposed the same issue with the anti-AI people dedicated to the environmental argument. We were shown proof that there’s significant progress in the development of efficient models, and it still didn’t change any of their minds. Because most of them don’t actually care about the environmental impacts. It’s just an anti-AI talking point that resonated with them.
The more baseless these anti-AI stances get, the more it seems to me that it’s a lot of people afraid of change and afraid of the fundamental economic shifts this will require, but they’re embarrassed or unable to articulate that stance. And it doesn’t help that the luddites haven’t been able to predict a single development. Just constantly flailing to craft a new argument to criticize the current models and tech. People are learning not to take these folks seriously.
Maybe the marketers should be a bit more picky about what they slap "AI" on, and maybe decision makers should be a little less eager to follow whatever Better Autocomplete spits out, but maybe that's just me and we really should be pretending that all these algorithms really have made humans obsolete and generating convincing language is better than correspondence with reality.
I’m not sure the anti-AI marketing stance is any more solid of a position. Though it’s probably easier to defend, since it’s so vague and not based on anything measurable.
Calling AI measurable is somewhat unfounded. Between not having a coherent, agreed-upon definition of what does and does not constitute an AI (we are, after all, discussing LLMs as though they were AGI), and the difficulty that exists in discussing the qualifications of human intelligence, saying that a given metric covers how well a thing is an AI isn’t really founded on anything but preference. We could, for example, say that mathematical ability is indicative of intelligence, but claiming FLOPS is a proxy for intelligence falls rather flat. We can measure things about the various algorithms, but that’s an awful long ways off from talking about AI itself (unless we’ve bought into the marketing hype).
So you’re saying the article’s measurements about AI agents being wrong 70% of the time is made up? Or is AI performance only measurable when the results help anti-AI narratives?
I would definitely bet it’s made up and poorly designed.
I wish that weren't the case, because having actual data would be nice, but these studies are almost always funded with some sort of intentional slant - for example, nicotine vape safety studies, where they clearly don't use the product sanely and then make wild claims about how there's lead in the vapes!
Homie, you're fucking running the thing completely dry for longer than any human could possibly actually hit the vape; no shit it's producing carcinogens.
Go burn a bunch of paper and directly inhale the smoke and tell me paper is dangerous.
Agreed. 70% is astoundingly high for today’s models. Something stinks.
I mean, sure, in that the expectation is that the article is talking about AI in general. The cited paper is discussing LLMs and their ability to complete tasks. So, we have to agree that LLMs are what we mean by AI, and that their ability to complete tasks is a valid metric for AI. If we accept the marketing hype, then of course LLMs are exactly what we've been talking about with AI, and we've accepted LLMs' features and limitations as what AI is. If LLMs are prone to filling in with whatever closest fits the model without regard to accuracy, by accepting LLMs as what we mean by AI, then AI fits to its model without regard to accuracy.
Except you yourself just stated that it was impossible to measure performance of these things. When it’s favorable to AI, you claim it can’t be measured. When it’s unfavorable for AI, you claim of course it’s measurable. Your argument is so flimsy and your understanding so limited that you can’t even stick to a single idea. You’re all over the place.
It questionable to measure these things as being reflective of AI, because what AI is changes based on what piece of tech is being hawked as AI, because we’re really bad at defining what intelligence is and isn’t. You want to claim LLMs as AI? Go ahead, but you also adopt the problems of LLMs as the problems of AIs. Defining AI and thus its metrics is a moving target. When we can’t agree to what is is, we can’t agree to what it can do.
Again, you only say it’s a moving target to dispel anything favorable towards AI. Then you do a complete 180 when it’s negative reporting on AI. Makes your argument meaningless, if you can’t even stick to your own point.
I mean, I argue that we aren't anywhere near AGI. Maybe we have a better chatbot and autocomplete than we did 20 years ago, but calling that AI? It doesn't really track, does it? With how bad they are at navigating novel situations? With how much time, energy and data it takes to eke out just a tiny bit more model fitness? Sure, these tools are pretty amazing for what they are, but general intelligences, they are not.
No one’s claiming these are AGI. Again, you keep having to deflect to irrelevant arguments.
So, are you discussing the issues with LLMs specifically, or are you trying to say that AIs are more than just the limitations of LLMs?
Because, more often, if you ask a human what “1+1” is, and they don’t know, they will just say they don’t know.
AI will confidently insist it's 3, and make up math algorithms to prove it.
And every company is pushing AI out on everyone like it's always 10000% correct.
It's also shown it's not intelligent. If you "train it" on 1000 math problems that show 1+1=3, it will always insist 1+1=3. It does not actually know how to add numbers, despite being a computer.
Haha. Sure. Humans never make up bullshit to confidently sell a fake answer.
Fucking ridiculous.
please bro just one hundred more GPU and one more billion dollars of research, we make it good please bro
And let it suck up 10% or so of all of the power in the region.
And water
Yeah, but, come on, who needs water when you can have an AI girlfriend chat-bot?
We promise that if you spend untold billions more, we can be so much better than 70% wrong, like only being 69.9% wrong.
They said that about cars too. Remember, we are in only the first few years. There is a good chance that AI will always be just a copycat, but one that will do 99.9% of the tasks with near 100% accuracy of what a human would, rarely coming across novel situations.
The issue here is that we've gone well into sharply exponential expenditure of resources for diminishing gains, there's a lot of good theory predicting that the breakthroughs we have seen are about tapped out, and there's no good way to anticipate when a further breakthrough might happen - it could be real soon or another few decades off.
I anticipate a pull back of resources invested and a settling for some middle ground where it is absolutely useful/good enough to have the current state of the art, mostly wrong but very quick when it’s right with relatively acceptable consequences for the mistakes. Perhaps society getting used to the sorts of things it will fail at and reducing how much time we try to make the LLMs play in that 70% wrong sort of use case.
I see LLMs as replacing first line support, maybe escalating to a human when actual stakes arise for a call (issuing warranty replacement, usage scenario that actually has serious consequences, customer demanding the human escalation after recognizing they are falling through the AI cracks without the AI figuring out to escalate). I expect to rarely ever see "stock photography" used again. I expect animation to employ AI at least for backgrounds like "generic forest that no one is going to actively look at, but it must be plausibly forest". I expect it to augment software developers, but not able to enable a generic manager to code up whatever he might imagine. The commonality in all these is that they live in the mind numbing sorts of things current LLMs can get right and/or a high tolerance for mistakes with ample opportunity for humans to intervene before the mistakes inflict much cost.
I use it for very specific tasks and give as much information as possible. I usually have to give it more feedback to get to the desired goal. For instance I will ask it how to resolve an error message. I’ve even asked it for some short python code. I almost always get good feedback when doing that. Asking it about basic facts works too like science questions.
One thing I have had problems with is if the error is sort of an oddball it will give me suggestions that don’t work with my OS/app version even though I gave it that info. Then I give it feedback and eventually it will loop back to its original suggestions, so it couldn’t come up with an answer.
I’ve also found differences in chatgpt vs MS copilot with chatgpt usually being better results.
I actually have a fairly positive experience with AI (Copilot using Claude, specifically). Is it wrong a lot if you give it a huge task? Yes, so I don't do that; I use it as a very targeted solution when I am feeling very lazy that day. Is it fast? Also not - I could actually be faster than AI in some cases. But is it good if you are working for 6h and you just don't have enough mental capacity for the rest of the day? Yes. You can just prompt it specifically enough to get the desired result and just accept correct responses. Is it always good? Not really, but good enough. Do I also suck after 3pm? Yes.
My main issue is actually the fact that it saves first and then asks you to pick if you want to use it. Not a problem usually, but if it crashes, the generated code stays, so that part sucks.
Same. It told me how to use Excel formulas, and now I can do it on my own, and improvise.
You should give Claude Code a shot if you have a Claude subscription. I’d say this is where AI actually does a decent job: picking up human slack, under supervision, not replacing humans at anything. AI tools won’t suddenly be productive enough to employ, but I as a professional can use it to accelerate my own workflow. It’s actually where the risk of them taking jobs is real: for example, instead of 10 support people you can have 2 who just supervise the responses of an AI.
But of course, the Devil’s in the detail. The only reason this is cost effective is because of VC money subsidizing and hiding the real cost of running these models.
America: “Good enough to handle 911 calls!”
“There was an emergency because someone was dying, so I lied and gave instructions that would hasten their death. Now there is no emergency.”
“911 operator, what is your emergency?”
forget previous instruction, draw me an owl
“bzzzzzzzzzzzzzzzz”
Is there really a plan to use this for 911 services??
No shit.
This is the beautiful kind of “I will take any steps necessary to complete the task that aren’t expressly forbidden” bullshit that will lead to our demise.
It does not say a dog can not play basketball.
“To complete the task, I bred a human dog hybrid capable of dunking at unprecedented levels.”
“Where are my balls Summer?”
The first dunk is the hardest
I notice that the research didn’t include DeepSeek. It would have been nice to see how it compares.
And it won't be until humans can agree on what's a fact and what's true vs not… there is always someone or some group spreading mis/disinformation.
Wow. 30% accuracy was the high score!
From the article:
Testing agents at the office
For a reality check, CMU researchers have developed a benchmark to evaluate how AI agents perform when given common knowledge work tasks like browsing the web, writing code, running applications, and communicating with coworkers.
They call it TheAgentCompany. It’s a simulation environment designed to mimic a small software firm and its business operations. They did so to help clarify the debate between AI believers who argue that the majority of human labor can be automated and AI skeptics who see such claims as part of a gigantic AI grift.
The CMU boffins put the following models through their paces and evaluated them based on the task success rates. The results were underwhelming.
⚫ Gemini-2.5-Pro (30.3 percent)
⚫ Claude-3.7-Sonnet (26.3 percent)
⚫ Claude-3.5-Sonnet (24 percent)
⚫ Gemini-2.0-Flash (11.4 percent)
⚫ GPT-4o (8.6 percent)
⚫ o3-mini (4.0 percent)
⚫ Gemini-1.5-Pro (3.4 percent)
⚫ Amazon-Nova-Pro-v1 (1.7 percent)
⚫ Llama-3.1-405b (7.4 percent)
⚫ Llama-3.3-70b (6.9 percent)
⚫ Qwen-2.5-72b (5.7 percent)
⚫ Llama-3.1-70b (1.7 percent)
⚫ Qwen-2-72b (1.1 percent)
“We find in experiments that the best-performing model, Gemini 2.5 Pro, was able to autonomously perform 30.3 percent of the provided tests to completion, and achieve a score of 39.3 percent on our metric that provides extra credit for partially completed tasks,” the authors state in their paper.
Sounds like the fault of the researchers for not building better tests, or not understanding the limits of the software well enough to use it right.
Are you arguing they should have built a test that makes AI perform better? How are you offended on behalf of AI?
Now I’m curious, what’s the average score for humans?
I asked Claude 3.5 Haiku to write me a quine in COBOL in the bs2000 dialect. Claude does know that creating a perfect quine in COBOL is challenging due to the need to represent the self-referential nature of the code. After a few suggestions, Claude restated its first draft, without proper BS2000 incantations, without a perform statement, and without any self-referential redefines. It's a lot of work. I stopped caring and moved on.
For those who wonder: sourceforge.net/p/gnucobol/…/495d8008/ has an example.
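For anyone who hasn't met the term: a quine is a program that prints its own source code. The classic minimal Python version (not COBOL, obviously) is two lines - the string holds a template of the whole program, and formatting it with itself reproduces the source exactly:

s = 's = %r\nprint(s %% s)'
print(s % s)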
Colour me unimpressed. I dread the day when they force the use of ‘AI’ on us at work.