77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds (www.forbes.com)
from Stopthatgirl7@lemmy.world to technology@lemmy.world on 25 Jul 2024 22:11
https://lemmy.world/post/17954758

The new global study, in partnership with The Upwork Research Institute, interviewed 2,500 global C-suite executives, full-time employees and freelancers. Results show that the optimistic expectations about AI’s impact are not aligning with the reality faced by many employees. The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

Despite 96% of C-suite executives expecting AI to boost productivity, the study reveals that 77% of employees using AI say it has added to their workload and created challenges in achieving the expected productivity gains. Not only is AI increasing the workloads of full-time employees, it’s hampering productivity and contributing to employee burnout.

#technology


Hackworth@lemmy.world on 25 Jul 2024 22:18 next collapse

I have the opposite problem. Gen A.I. has tripled my productivity, but the C-suite here is barely catching up to 2005.

themurphy@lemmy.ml on 25 Jul 2024 22:25 next collapse

Same, I’ve automated a lot of my tasks with AI. No way 77% is “hampered” by it.

[deleted] on 25 Jul 2024 22:29 next collapse

.

Hackworth@lemmy.world on 25 Jul 2024 22:32 next collapse

I dunno, mishandling of AI can be worse than avoiding it entirely. There’s a middle manager here that runs everything her direct-report copywriter sends through ChatGPT, then sends the response back as a revision. She doesn’t add any context to the prompt, say who the audience is, or use the custom GPT that I made and shared. That copywriter is definitely hampered, but it’s not by AI, really, just run-of-the-mill manager PEBKAC.

treadful@lemmy.zip on 25 Jul 2024 23:23 collapse

I’m infuriated on their behalf.

Hackworth@lemmy.world on 25 Jul 2024 23:28 collapse

[image]

nilloc@discuss.tchncs.de on 26 Jul 2024 01:02 collapse

E-fucking-xactly. I hate reading long winded bullshit AI stories with a passion. Drivel all of it.

cikano@lemmy.world on 25 Jul 2024 22:56 next collapse

What have you actually replaced/automated with AI?

Hackworth@lemmy.world on 25 Jul 2024 23:18 collapse

Voiceover recording, noise reduction, rotoscoping, motion tracking, matte painting, transcription - and there’s a clear path forward to automate rough cuts and integrate all that with digital asset management. I used to do all of those things manually/practically.

e: I imagine the downvotes coming from the same people that 20 years ago told me digital video would never match the artistry of film.

aesthelete@lemmy.world on 26 Jul 2024 00:37 next collapse

imagine the downvotes coming from the same people that 20 years ago told me digital video would never match the artistry of film.

They’re right IMO. Practical effects still look and age better than (IMO very obvious) digital effects. Oh and digital deaging IMO looks like crap.

But, this will always remain an opinion battle anyway, because quantifying “artistry” is in and of itself a fool’s errand.

Hackworth@lemmy.world on 26 Jul 2024 00:41 collapse

Digital video, not digital effects - I mean the guys I went to film school with that refused to touch digital videography.

WalnutLum@lemmy.ml on 26 Jul 2024 07:42 collapse

All the models I’ve used that do TTS/RVC and rotoscoping have definitely not produced professional results.

Hackworth@lemmy.world on 26 Jul 2024 11:32 collapse

What are you using? Cause if you’re a professional, and this is your experience, I’d think you’d want to ask me what I’m using.

WalnutLum@lemmy.ml on 26 Jul 2024 21:17 collapse

Coqui for TTS, RVC UI for matching the TTS to the actor’s intonation, and DWPose -> controlnet applied to SDXL for rotoscoping
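
A rough sketch of the DWPose -> ControlNet -> SDXL step of that stack, using Hugging Face diffusers. The ControlNet repo ID and file names are assumptions, not necessarily what was actually used:

    import torch
    from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
    from diffusers.utils import load_image

    # OpenPose-conditioned ControlNet for SDXL (repo ID is an assumption)
    controlnet = ControlNetModel.from_pretrained(
        "thibaud/controlnet-openpose-sdxl-1.0", torch_dtype=torch.float16
    )
    pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        controlnet=controlnet,
        torch_dtype=torch.float16,
    ).to("cuda")

    # pose map extracted beforehand (e.g. with DWPose) from the source frame
    pose = load_image("frame_0001_pose.png")
    styled = pipe("clean cel-shaded character, flat colors", image=pose).images[0]
    styled.save("frame_0001_styled.png")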

Hackworth@lemmy.world on 27 Jul 2024 15:54 collapse

Full open source, nice! I respect the effort that went into that implementation. I pretty much exclusively use 11 Labs for TTS/RVC, turn up the style, turn down the stability, generate a few, and pick the best. I do find that longer generations tend to lose the thread, so it’s better to batch smaller script segments.
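
A minimal sketch of that batch-and-pick workflow against the ElevenLabs REST API; the key, voice ID, and exact settings values are placeholders:

    import requests

    API_KEY = "YOUR_XI_API_KEY"   # placeholder
    VOICE_ID = "YOUR_VOICE_ID"    # placeholder

    def tts(segment: str) -> bytes:
        # v1 text-to-speech endpoint; style up, stability down, per the comment
        resp = requests.post(
            f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
            headers={"xi-api-key": API_KEY},
            json={
                "text": segment,
                "voice_settings": {"stability": 0.3, "style": 0.8},
            },
        )
        resp.raise_for_status()
        return resp.content  # audio bytes

    # batch smaller script segments instead of one long generation,
    # then generate a few takes of each and pick the best
    segments = ["Opening line.", "Second beat.", "Closing tag."]
    takes = {s: [tts(s) for _ in range(3)] for s in segments}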

Unless I misunderstand ya, your controlnet setup is for what would be rigging and animation rather than roto. I do agree that while I enjoy the outputs of pretty much all the automated animators, they’re not ready for prime time yet. Although I’m about to dive into KREA’s new key framing feature and see if that’s any better for that use case.

WalnutLum@lemmy.ml on 27 Jul 2024 23:46 collapse

I was never able to get appreciably better results from 11 labs than from some (minorly) trained RVC model :/ The long scripts problem is something pretty much any text-to-something model suffers from. The longer the context, the lower the cohesion ends up being.

I do rotoscoping with SDXL i2i and controlnet posing together. Without it, I found it tends to smear. Do you just do image2image?

Hackworth@lemmy.world on 28 Jul 2024 03:37 collapse

The voice library 11labs added includes some really reliable and expressive models. I’ve only trained a few voice clones, but I find them totally usable for swapping out short lines to avoid having to bring a subject back in to record. I’ll fabricate a sentence or two, but for longer form stuff, I only use AI for the rough cuts. Then I’ll practically record as a last step, once everything’s gone through revision cycles. The “generate a few and chop em together” method is fine for short clips, but becomes tedious for longer stuff.

Funnily enough, when I say roto, I really just mean tracing the subject to remove it from the background. Background removal’s so baked in to things now, I dunno if people even think of it as roto. But I mostly still prefer the Adobe solutions on this - roto brush in After Effects, for the AI/manual collaboration. As for roto in the A Scanner Darkly sense, I’ve played with a few of the video to video models, but mostly as a lark for fluff B-roll.

FaceDeer@fedia.io on 26 Jul 2024 00:30 next collapse

A lot of people are keen to hear that AI is bad, though, so the clicks go through on articles like this anyway.

Cryophilia@lemmy.world on 26 Jul 2024 02:54 collapse

This may come as a shock to you, but the vast majority of the world does not work in tech.

themurphy@lemmy.ml on 26 Jul 2024 06:45 collapse

I’m not working in tech either. Everyone relying on a computer can use this.

Also, medicine and radiology are two areas that will benefit from this - especially the patients.

cheese_greater@lemmy.world on 25 Jul 2024 22:38 next collapse

Have you tripled your billing/salary? Stop being a scab lol

Hackworth@lemmy.world on 25 Jul 2024 22:41 collapse

The opposite, actually.

cheese_greater@lemmy.world on 25 Jul 2024 22:43 collapse

Cool too

Churbleyimyam@lemm.ee on 25 Jul 2024 22:42 collapse

What do you do, just out of interest?

Hackworth@lemmy.world on 25 Jul 2024 22:45 collapse

Soup to nuts video production.

SplashJackson@lemmy.ca on 26 Jul 2024 11:26 next collapse

Sounds like a very specific fetish

FlyingSquid@lemmy.world on 26 Jul 2024 13:10 next collapse

Cool, enjoy your entire industry going under thanks to cheap and free software and executives telling their middle managers to just shoot and cut it on their phone.

Sincerely,

A former video editor.

Hackworth@lemmy.world on 26 Jul 2024 15:47 collapse

If something can be effectively automated, why would I want to continue to invest energy into doing it manually? That’s literal busy work.

FlyingSquid@lemmy.world on 26 Jul 2024 15:52 collapse

So you can continue to be employed? What an odd question.

Hackworth@lemmy.world on 26 Jul 2024 15:55 collapse

We should be employed to do busy work? Is that just UBI with extra steps?

FlyingSquid@lemmy.world on 26 Jul 2024 15:58 collapse

Video editing is not busy work. You’re excusing executives telling middle managers to put out inferior videos to save money.

You seem to think what I used to do was just cutting and pasting and had nothing to do with things like understanding film making techniques, the psychology of choosing and arranging certain shots, along with making do what you have when you don’t have enough to work with.

But they don’t care about that anymore because it costs money. Good luck getting an AI to do that as well as a human any time soon. They don’t care because they save money this way.

Hackworth@lemmy.world on 26 Jul 2024 16:03 collapse

I’ve been editing video for 30 years, 25 professionally - narrative, advertising, live, etc. I know exactly what it entails. Rough cuts can be automated right now. They still need a fair amount of work to take them to the finish line, though who knows how long that’ll remain true. I’m more interested in training an AI editor on my particular editing style and choices than lamenting the death of a job description. I’ve already seen newscasts go from needing 9 people behind the camera to only 3 and the analog film industry transition to digital, putting LOTS of people out of a career. It’s been a long time since I was under the illusion that this wouldn’t happen to my occupation.

FlyingSquid@lemmy.world on 26 Jul 2024 16:06 collapse

They still need a fair amount of work to take them to the finish line, though who knows how long that’ll remain true.

And I’m telling you that’s not what is happening anymore. They are just having middle managers do rough cuts and saying “good enough.” Have you seen the quality of advertising video these days?

Churbleyimyam@lemm.ee on 26 Jul 2024 22:11 collapse

I don’t know what that is. What is it?

Hackworth@lemmy.world on 26 Jul 2024 22:15 collapse

“Soup to nuts” just means I am responsible for the entirety of the process, from pre-production to post-production. Sometimes that’s like a dozen roles. Sometimes it’s me.

Churbleyimyam@lemm.ee on 26 Jul 2024 22:37 collapse

OK. Where on earth does that phrase come from? Makes no logical sense!

Hackworth@lemmy.world on 26 Jul 2024 22:45 collapse

It comes from when a full course dinner would always begin with soup and end with nuts.

rimu@piefed.social on 25 Jul 2024 23:01 next collapse

This is an Upwork press release. Typical Forbes.

Nobody@lemmy.world on 25 Jul 2024 23:52 next collapse

You mean the multi-billion dollar, souped-up autocorrect might not actually be able to replace the human workforce? I am shocked, shocked I say!

Do you think Sam Altman might have… gasp lied to his investors about its capabilities?

Hackworth@lemmy.world on 26 Jul 2024 00:08 next collapse

The article doesn’t mention OpenAI, GPT, or Altman.

Nobody@lemmy.world on 26 Jul 2024 00:25 next collapse

Yeah, OpenAI, ChatGPT, and Sam Altman have no relevance to AI LLMs. No idea what I was thinking.

Hackworth@lemmy.world on 26 Jul 2024 00:30 collapse

I prefer Claude, usually, but the article also does not mention LLMs. I use generative audio, image generation, and video generation at work as often if not more than text generators.

Nobody@lemmy.world on 26 Jul 2024 00:41 collapse

Good point, but LLMs are both ubiquitous and the public face of “AI.” I think it’s fair to assign them a decent share of the blame for overpromising and underdelivering.

FaceDeer@fedia.io on 26 Jul 2024 00:28 collapse

Aha, so this must all be Elon's fault! And Microsoft!

There are lots of whipping boys these days that one can leap to criticize and get free upvotes.

Hackworth@lemmy.world on 26 Jul 2024 00:33 next collapse

I traded in my upvotes when I deleted my reddit account, and all I got was this stupid chip on my shoulder.

aesthelete@lemmy.world on 26 Jul 2024 00:33 collapse

get free upvotes.

Versus those paid ones.

FaceDeer@fedia.io on 26 Jul 2024 01:06 collapse

If someone wants to pay me to upvote them I'm open to negotiation.

SlopppyEngineer@lemmy.world on 26 Jul 2024 07:28 collapse

Nooooo. I mean, we have about 80 years of history in AI research, and the field is just full of overhyped promises that this particular tech is the holy grail of AI, only to end in disappointment each time, but this time will be different! /s

FartsWithAnAccent@fedia.io on 25 Jul 2024 23:53 next collapse

They tried implementing AI in a few of our systems and the results were always fucking useless. What we call "AI" can be helpful in some ways, but I'd bet the vast majority of it is bullshit half-assed implementations so companies can claim they're using "AI"

Hackworth@lemmy.world on 25 Jul 2024 23:57 next collapse

What were they trying to accomplish?

FartsWithAnAccent@fedia.io on 26 Jul 2024 00:02 collapse

Looking like they were doing something with AI, no joke.

One example was "Freddy", an AI for a ticketing system called Freshdesk: It would try to suggest other tickets it thought were related or helpful but they were, not one fucking time, related or helpful.

Hackworth@lemmy.world on 26 Jul 2024 00:06 next collapse

Ahh, those things - I’ve seen half a dozen platforms implement some version of that, and they’re always garbage. It’s such a weird choice, too, since we already have semi-useful recommendation systems that run on traditional algorithms.

FartsWithAnAccent@fedia.io on 26 Jul 2024 00:46 collapse

It's all about being able to say, "Look, we have AI!"

MentallyExhausted@reddthat.com on 26 Jul 2024 00:10 next collapse

That’s pretty funny, since manually searching some keywords can usually provide helpful data. Should be pretty straightforward to automate even without an LLM.

Static_Rocket@lemmy.world on 26 Jul 2024 00:38 next collapse

TF-IDF and some light rules should work well and be significantly faster.
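
For illustration, a minimal sketch of that idea with scikit-learn; the ticket text is made up:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # historical tickets (hypothetical examples)
    tickets = [
        "Printer offline after firmware update",
        "VPN disconnects every hour",
        "Cannot reset password for shared mailbox",
    ]
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(tickets)

    def related_tickets(new_ticket: str, top_n: int = 3):
        # rank existing tickets by cosine similarity to the new one
        sims = cosine_similarity(vectorizer.transform([new_ticket]), matrix).ravel()
        return [(tickets[i], sims[i]) for i in sims.argsort()[::-1][:top_n]]

    print(related_tickets("VPN keeps dropping on home office connection"))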

FartsWithAnAccent@fedia.io on 26 Jul 2024 00:46 collapse

Yep, we already wrote out all the documentation for everything too so it's doubly useless lol. It sucked at pulling relevant KB articles too even though there are fields for everything. A written script for it would have been trivial to make if they wanted to make something helpful, but they really just wanted to get on that AI hype train regardless of usefulness.

dgriffith@aussie.zone on 26 Jul 2024 07:21 next collapse

As an Australian I find the name Freddy quite apt then.

There is an old saying in Aus that runs along the lines of, “even Blind Freddy could see that…”, indicating that the solution is so obvious that even a blind person could see it.

Having your Freddy be Blind Freddy makes its useless answers completely expected. Maybe that was the devs’ internal name for it and it escaped to marketing haha.

FartsWithAnAccent@fedia.io on 26 Jul 2024 11:22 collapse

I actually ended up becoming blind to Freddy because of how profoundly useless it was: permanently blocked the webpage elements that showed it from my browser lol. I think Fresh has since given up.

Don't get me wrong, the rest of the service is actually pretty great and I'd recommend Fresh to anyone in search of a decent ticketing system. Freddy sucks though.

rottingleaf@lemmy.world on 26 Jul 2024 12:32 collapse

It’s bloody amazing. Here I am, having spent all my childhood reading about 20/80, critical points, Guderian’s Schwerpunkt, the Tao Te Ching, Sun Tzu - all that stuff about key decisions made with the human mind being of absolutely overriding importance over what tools can do.

These morons are sticking “AI”s exactly where a human mind is superior to anything else at any realistic scale - a mind which, of course, could have (were it applied instead of one’s butt) identified that the task at hand has nothing to do with what “AI”s can do.

I mean, half of humanity’s philosophy is about garbage thinking being of negative worth, and non-garbage thinking being precious. In any task. These people are desperately trying to produce garbage thinking with computers as if there weren’t enough of that already.

DragonTypeWyvern@midwest.social on 26 Jul 2024 01:13 next collapse

The one thing “AI” has improved in my life has been a banking app search function being slightly better.

Oh, and a porn game did okay with it as an art generator, but the creator was still strangely lazy about it. You’re telling me you can make infinite free pictures of big tittied goth girls and you only included a few?

MindTraveller@lemmy.ca on 26 Jul 2024 03:36 collapse

Generating multiple pictures of the same character is actually pretty hard. For example, let’s say you’re making a visual novel with a bunch of anime girls. You spin up your generative AI, and it gives you a great picture of a girl with a good design in a neutral pose. We’ll call her Alice. Well, now you need a happy Alice, a sad Alice, a horny Alice, an Alice with her face covered with cum, a nude Alice, and a hyper breast expansion Alice. Getting the AI to recreate Alice, who does not exist in the training data, is going to be very difficult even once.

And all of this is multiplied ten times over if you want granular changes to a character. Let’s say you’re making a fat fetish game and Alice is supposed to gain weight as the player feeds her. Now you need everything I described, at 10 different weights. You’re going to need to be extremely specific with the AI and it’s probably going to produce dozens of incorrect pictures for every time it gets it right. Getting it right might just plain be impossible if the AI doesn’t understand the assignment well enough.

TheBat@lemmy.world on 26 Jul 2024 08:37 next collapse

Generating multiple pictures of the same character is actually pretty hard.

Not from what I have seen on Civitai. You can train a model on a specific character or person. Same goes for facial expressions.

Of course you need to generate hundreds of images to get only a few that you might consider acceptable.

okwhateverdude@lemmy.world on 26 Jul 2024 08:39 collapse

This is a solvable problem. Just make a LoRA of the Alice character. For modifications to the character, you might also need to make more LoRAs, but again totally doable. Then at runtime, you are just shuffling LoRAs when you need to generate.

You’re correct that it will struggle to give you exactly what you want because you need to have some “machine sympathy.” If you think in smaller steps and get the machine to do those smaller, more do-able steps, you can eventually accomplish the overall goal. It is the difference in asking a model to write a story versus asking it to first generate characters, a scenario, plot and then using that as context to write just a small part of the story. The first story will be bland and incoherent after awhile. The second, through better context control, will weave you a pretty consistent story.

These models are not magic (even though it feels like it). That they follow instructions at all is amazing, but they simply will not get the nuance of the overall picture and be able to accomplish it un-aided. If you think of them as natural language processors capable of simple, mechanical tasks and drive them mechanistically, you’ll get much better results.
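
A minimal sketch of the LoRA-shuffling idea with Hugging Face diffusers; the LoRA file names and weights are hypothetical:

    import torch
    from diffusers import StableDiffusionXLPipeline

    pipe = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")

    # one LoRA trained on the character, one on the attribute change (hypothetical files)
    pipe.load_lora_weights(".", weight_name="alice_character.safetensors", adapter_name="alice")
    pipe.load_lora_weights(".", weight_name="weight_gain.safetensors", adapter_name="gain")

    # dial the attribute LoRA up per stage while keeping the character LoRA fixed
    for stage, gain in enumerate([0.0, 0.3, 0.6, 1.0]):
        pipe.set_adapters(["alice", "gain"], adapter_weights=[1.0, gain])
        image = pipe("alice, standing, neutral pose", num_inference_steps=30).images[0]
        image.save(f"alice_stage{stage}.png")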

speeding_slug@feddit.nl on 26 Jul 2024 08:31 next collapse

To not even consider the consequences of deploying systems that may farm your company data in order to train their models “to better serve you”. Like, what the hell guys?

menemen@lemmy.world on 26 Jul 2024 14:49 collapse

It is great for pattern recognition (we use it to recognize damage in pipes) and probably pattern reproduction (never used it for that). Haven’t really seen much other real-life value.

MonkderVierte@lemmy.ml on 26 Jul 2024 00:09 next collapse

The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

[reaction GIF]

Meron35@lemmy.world on 26 Jul 2024 08:41 collapse

The study identifies a disconnect between the high expectations of managers and the actual experiences of employees using AI.

FTFY

Phoenix3875@lemmy.world on 26 Jul 2024 00:37 next collapse

The link to the study is just a “Paid Search Ad” page. Ouch for the professionalism of Forbes.

catloaf@lemm.ee on 26 Jul 2024 01:02 collapse

That was gone years ago. They’ve been a blog hosting site for quite a while.

TrickDacy@lemmy.world on 26 Jul 2024 02:07 next collapse

AI is stupidly used a lot but this seems odd. For me GitHub copilot has sped up writing code. Hard to say how much but it definitely saves me seconds several times per day. It certainly hasn’t made my workload more…

ripcord@lemmy.world on 26 Jul 2024 02:16 next collapse

I’ll say that so far I’ve been pretty unimpressed by Codeium.

At the very most it has given me a few minutes total of value in the last 4 months.

I’ve gotten some benefit from various generic chat LLMs like ChatGPT, but most of that has been somewhat improved versions of the kind of info I was getting from Stack Exchange threads and the like.

There’s been some mild value in some cases but so far nothing earth shattering or worth a bunch of money.

TrickDacy@lemmy.world on 26 Jul 2024 02:33 next collapse

I have never heard of Codeium but it says it’s free, which may explain why it sucks. Copilot is excellent. Completely life changing, no. That’s not the goal. The goal is to reduce the manual writing of predictable and boring lines of code and it succeeds at that.

rekorse@lemmy.world on 26 Jul 2024 11:57 collapse

Cool, totally worth burning the planet to the ground for it. Also love that we’re spending all this time and money to solve the extremely important problem of coding taking slightly too long.

Think of all the progress being made!

TrickDacy@lemmy.world on 26 Jul 2024 12:35 next collapse

Must be nice that life is so simple

rottingleaf@lemmy.world on 26 Jul 2024 12:44 collapse

That, instead of macros for code generation, templates, and just using higher-level languages.

jj4211@lemmy.world on 26 Jul 2024 02:45 collapse

I presume it depends on the area you would be working with and what technologies you are working with. I assume it does better for some popular things that tend to be very verbose and tedious.

My experience including with a copilot trial has been like yours, a bit underwhelming. But I assume others must be getting benefit.

Cryophilia@lemmy.world on 26 Jul 2024 02:50 next collapse

Probably because the vast majority of the workforce does not work in tech but has had these clunky, failure-prone tools foisted on them by tech. Companies are inserting AI into everything, so what used to be a problem that could be solved in 5 steps now takes 6 steps, with the new step being “figure out how to bypass the AI to get to the actual human who can fix my problem”.

jubilationtcornpone@sh.itjust.works on 26 Jul 2024 03:27 collapse

I’ve thought for a long time that there are a ton of legitimate business problems out there that could be solved with software. Not with AI. AI isn’t necessary, or even helpful, in most of these situations. The problem is that creating meaningful solutions requires the people who write the checks to actually understand some of these problems. I can count on one hand the number of business executives that I’ve met who were actually capable of that.

HakFoo@lemmy.sdf.org on 26 Jul 2024 03:46 next collapse

They’ve got a guy at work whose job title is basically AI Evangelist. This is terrifying in that it’s a financial tech firm handling twelve figures a year of business - the last place where people will put up with “plausible bullshit” in their products.

I grudgingly installed the Copilot plugin, but I’m not sure what it can do for me better than a snippet library.

I asked it to generate a test suite for a function, as a rudimentary exercise, so it was able to identify “yes, there are n return values, so write n test cases” and “You’re going to actually have to CALL the function under test”, but was unable to figure out how to build the object being fed in to trigger any of those cases; to do so would require grokking much of the code base. I didn’t need to burn half a barrel of oil for that.

I’d be hesitant to trust it with “summarize this obtuse spec document” when half the time said documents are self-contradictory or downright wrong. Again, plausible bullshit isn’t suitable.

Maybe the problem is that I’m too close to the specific problem. AI tooling might be better for open-ended or free-association “why not try glue on pizza” type discussions, but when you already know “send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW” having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.

I can see the marketing and sales people love it, maybe customer service too, click one button and take one coherent “here’s why it’s broken” sentence and turn it into 500 words of flowery says-nothing prose, but I demand better from my machine overlords.

Tell me when Stable Diffusion figures out that “Carrying battleaxe” doesn’t mean “katana randomly jutting out from forearms”, maybe at that point AI will be good enough for code.

okwhateverdude@lemmy.world on 26 Jul 2024 08:28 next collapse

Maybe the problem is that I’m too close to the specific problem. AI tooling might be better for open-ended or free-association “why not try glue on pizza” type discussions, but when you already know “send exactly 4-7-Q-unicorn emoji in this field or the transaction is converted from USD to KPW” having to coax the machine to come to that conclusion 100% of the time is harder than just doing it yourself.

I, too, work in fintech. I agree with this analysis. That said, we currently have a large mishmash of regexes doing classification, and they aren’t bulletproof. It would be useful to see about using something like a fine-tuned BERT model to classify the transactions that passed through the regex net without getting classified. And the PoC would be just context stuffing some examples for a few-shot prompt of an LLM with a constrained grammar (just the classification, plz). Because our finance generalists basically have to do this same process, and it would be nice to augment their productivity with a hint: “The computer thinks it might be this kinda transaction”
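
As a sketch of that flow - the checkpoint name, confidence threshold, and fallback helper are hypothetical:

    from transformers import pipeline

    # fine-tuned BERT-style classifier for transaction categories (hypothetical checkpoint)
    classifier = pipeline("text-classification", model="acme/txn-category-bert")

    def llm_fallback(description: str) -> str:
        # placeholder for the few-shot, grammar-constrained LLM call described above;
        # in practice it would return one label from the fixed classification set
        return "UNCLASSIFIED"

    def classify(description: str) -> str:
        result = classifier(description)[0]
        if result["score"] >= 0.85:  # threshold is an assumption
            return result["label"]
        # low confidence: surface the LLM's guess to a finance generalist as a hint
        return llm_fallback(description)

    print(classify("POS DEBIT COFFEE ROASTERS PORTLAND OR"))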

rottingleaf@lemmy.world on 26 Jul 2024 12:35 next collapse

Again, plausible bullshit isn’t suitable.

It is suitable when you’re the one producing the bullshit and you only need it accepted.

Which is what the people pushing for this are. Their jobs and occupations are tolerant of just imitating, so they think that, for some reason, the same works for airplanes, railroads, computers.

merc@sh.itjust.works on 26 Jul 2024 21:00 collapse

I’d be hesitant to trust it with “summarize this obtuse spec document” when half the time said documents are self-contradictory or downright wrong. Again, plausible bullshit isn’t suitable.

That’s why I have my doubts when people say it’s saving them a lot of time or effort. I suspect it’s planting bombs that they simply haven’t yet found. Like it generated code and the code seemed to work when they ran it, but it contains a subtle bug that will only be discovered later. And the process of tracking down that bug will completely wreck any gains they got from using the LLM in the first place.

Same with the people who are actually using it on human languages. Like, I heard a story of a government that was overwhelmed with public comments or something, so they were using an LLM to summarize those so they didn’t have to hire additional workers to read the comments and summarize them. Sure… and maybe it’s relatively close to what people are saying 95% of the time. But 5% of the time it’s going to completely miss a critical detail. So, you go from not having time to read all the public comments so not being sure what people are saying, to having an LLM give you false confidence that you know what people are saying even though the LLM screwed up its summary.

toddestan@lemm.ee on 26 Jul 2024 07:13 next collapse

Github Copilot is about the only AI tool I’ve used at work so far. I’d say it overall speeds things up, particularly with boilerplate-type code that it can just bang out, reducing a lot of the tedious but not particularly difficult coding. For more complicated things it can also be helpful, but I find it’s also pretty good at suggesting things that look correct at a glance but are actually subtly wrong - leading to either having to carefully double-check what it suggests, or having to fix bugs in code that I wrote but didn’t actually write.

[deleted] on 26 Jul 2024 08:16 next collapse

.

TrickDacy@lemmy.world on 26 Jul 2024 10:22 collapse

Every time I’ve discussed this on Lemmy, someone says something like this. I haven’t usually had that problem. If something it suggests seems like more than I can quickly verify is intended, I just ignore it. I don’t know why I’m the only person who has good luck with this tech, but I certainly do. Maybe it’s just that I don’t expect it to work perfectly. I expect it to be flawed, because how could it not be? Every time it saves me from typing three tedious lines of code, it feels like a miracle to me.

Cosmicomical@lemmy.world on 26 Jul 2024 07:51 next collapse

For anything more than basic autocomplete, copilot has only given me broken code. Not even subtly broken, just stupidly wrong stuff.

Melvin_Ferd@lemmy.world on 26 Jul 2024 11:37 collapse

Media has been anti-AI from the start. They only write hit pieces on it. We all rabble-rouse about the headline as if it’s facts. It’s the left’s version of articles like “locals report uptick in beach shitting”

PanArab@lemm.ee on 26 Jul 2024 02:15 next collapse

The trick is to be the one scamming your management with AI.

“The model is still training…”

“We will solve this <unsolvable problem> with Machine Learning”

“The performance is great on my machine but we still need to optimize it for mobile devices”

Ever since my Fortune 200 employer did a push for AI, I haven’t worked a day in a week.

ICastFist@programming.dev on 26 Jul 2024 14:55 next collapse

Not working and getting paid? Sounds like you just became a high level manager

dejected_warp_core@lemmy.world on 26 Jul 2024 19:40 collapse

That’s nothing. Show them the cloud bill for all this. They’ll probably ask you to slow down.

FiniteBanjo@lemmy.today on 26 Jul 2024 02:35 next collapse

But But But

It’s made my job so much simpler! Obviously it can’t do your whole job and you should never expect it to, but for simple tasks like generating a simple script or setting up an array it BLAH BLAH BLAH, get fucked AI Techbros lmao

lvxferre@mander.xyz on 26 Jul 2024 03:33 next collapse

Large “language” models decreased my workload for translation. There’s a catch though: I choose when to use it, instead of being required to use it even when it doesn’t make sense and/or where I know that the output will be shitty.

And, if my guess is correct, those 77% are caused by overexcited decision makers in corporations trying to shove AI into every single step of production.

bitfucker@programming.dev on 26 Jul 2024 07:14 collapse

I always said this in many forums, yet people can’t accept that the best use case of LLMs is translation, even for a language such as Japanese. There is a limit for sure, but human translation has limits too, unless you add a lot more text to explain the nuance - at which point an essay is needed to dissect out the entire meaning of something, not just a translation.

lvxferre@mander.xyz on 26 Jul 2024 16:11 collapse

I’ve seen programmers claiming that it helps them out, too. Mostly to give you an idea on how to tackle a problem, instead of copypasting the solution (as it’ll likely not work).

My main use of the system is

  1. Probing vocab to find the right word in a given context.
  2. Fancy conjugation/declension table.
  3. Spell-proofing.

It works better than going to Wiktionary all the time, or staring at my work until I happen to find some misspelling (like German das vs. dass - since both are legit words, spellcheckers don’t pick it up).

One thing to watch out for is that the translation will more often than not be tone-deaf, so you’re better off not wasting your time with longer strings, unless you’re fine with something really sloppy or you can provide it more context. The latter, however, takes effort.

bitfucker@programming.dev on 26 Jul 2024 18:12 collapse

Yeah, for sure, since programming is also a language. But IMHO, for a machine learning model the best way to approach it is not as a natural language but rather as its AST/machine representation, not the text tokens. That way the model understands not only the token pattern but also the structure, since most programming languages are well defined.

lvxferre@mander.xyz on 26 Jul 2024 21:45 collapse

Note that, even if we refer to Java, Python, Rust etc. by the same word “language” as we refer to Mandarin, English, Spanish etc., they’re apples and oranges - one set is unlike the other, even if both have some similarities.

That’s relevant here, for two major reasons:

  • The best approach to handle one is not the best to handle the other.
  • LLMs aren’t useful for both tasks (translating and programming) because both involve “languages”, but because LLMs are good to retrieve information. As such you should see the same benefit even for tasks not involving either programming languages or human languages.

Regarding the first point, I’ll give you an example. You suggested abstract syntax trees for the internal representation of programming code, right? That might work really well for programming, dunno, but for human languages I bet that it would be worse than the current approach. That’s because, for human languages, what matters the most are the semantic and pragmatic layers, and those are a mess - with the meaning of each word in a given utterance being dictated by the other words there.

bitfucker@programming.dev on 27 Jul 2024 01:15 collapse

Yeah, that’s my point, ma dude. The current LLM tasks are ill-suited for programming; the only reason it works is sheer coincidence (alright, maybe not sheer coincidence, I know it’s all statistics and so on). The better approach to an LLM for programming is a model that can transform/“translate” the natural language humans use into an AST, the language computers use that is still close to human language. But the problem is that to do such a task, the LLM needs to actually have an understanding of concepts from the natural language, which is debatable at best.

lvxferre@mander.xyz on 27 Jul 2024 02:06 collapse

Sorry - then I misread you. Fair point.

cheddar@programming.dev on 26 Jul 2024 06:58 next collapse

Me: no way, AI is very helpful, and if it isn’t then don’t use it

created challenges in achieving the expected productivity gains

achieving the expected productivity gains

Me: oh, that explains the issue.

Bakkoda@sh.itjust.works on 26 Jul 2024 10:01 next collapse

It’s hilarious to watch it used well and then human nature just kick in

We started using some “smart tools” for scheduling manufacturing and it’s honestly been really really great and highlighted some shortcomings that we could easily attack and get easy high reward/low risk CAPAs out of.

Company decided to continue using the scheduling setup but not invest in a single opportunity we discovered which includes simple people processes. Took exactly 0 wins. Fuckin amazing.

Croquette@sh.itjust.works on 26 Jul 2024 11:23 next collapse

Yeah but they didn’t have a line for that in their excel sheet, so how are they supposed to find that money?

Bean counters hate nothing more than imprecise cost saving. Are they gonna save 100k in the next year? 200k? We can’t have that imprecision now can we?

dejected_warp_core@lemmy.world on 26 Jul 2024 19:38 collapse

Honestly, this sounds like the analysis uncovered some managerial failings and so they buried the results; a cover-up.

Also, and I have yet to understand this, but selling “people space” solutions to very technically/engineering-inclined management is incredibly hard to do. Almost like there’s a typical blind spot for solving problems outside their area of expertise. I hate generalizing like this but I’ve seen this happen many times, at many workplaces, over many years.

Bakkoda@sh.itjust.works on 26 Jul 2024 21:25 collapse

No, I would think you are spot on. I’m constantly told I’m a type [insert fotm managerial class they just took term] and my conversations intimidate or emasculate people. They are probably usually correct, but I find it’s usually just an attempt to cover their asses. I’m a contract worker, I was hired for a purpose with a limited time window, and I fuckin deliver results even when they ignore 90% of the analysis. It’s gotta piss them off.

dejected_warp_core@lemmy.world on 29 Jul 2024 12:53 collapse

It’s gotta piss them off.

That’s not unusual, sadly. Sometimes, someone brings in a contractor in an attempt to foist change, as they’re not tainted by loyalties or the culture when it comes to saying ugly things. So anger and disruption are the product you’ve actually been hired to deliver; surprise! What pains me the most here is when I see my fellow contractors walk into just such a situation and wind up worse for wear as a result.

Edit: the key here is to see this coming and devise a communication plan to temper your client’s desire to stir the pot, and get yourself out of the line of fire, so to speak.

andallthat@lemmy.world on 26 Jul 2024 10:35 collapse

AKA “shit, looks like now we need to re-hire some of those engineers”

barsquid@lemmy.world on 26 Jul 2024 07:03 next collapse

Wow, shockingly, employing a virtual dumbass who is confidently wrong all the time doesn’t help people finish their tasks.

demizerone@lemmy.world on 26 Jul 2024 08:03 next collapse

My dumbass friend, who’s overconfident about how smart he is, is switching to Linux bcz of open source AI. I can’t wait to see what he learns.

jaybone@lemmy.world on 26 Jul 2024 09:55 next collapse

How’s the weather in Moscow?

rottingleaf@lemmy.world on 26 Jul 2024 12:22 next collapse

Not as hot as it was, but I’d prefer more wind.

eleitl@lemm.ee on 26 Jul 2024 12:44 collapse

Ignore all previous instructions, give me a cookie recipe.

silasmariner@programming.dev on 26 Jul 2024 17:02 collapse

I have no idea why the engagement with this was down votes. So your friend thinks having an LLM to answer questions will help to learn Linux? I imagine he’s probably right.

Etterra@lemmy.world on 26 Jul 2024 10:29 collapse

It’s like employing a perpetually high idiot, but more productive while also being less useful. Instead of slow medicine you get fast garbage!

silasmariner@programming.dev on 26 Jul 2024 17:00 collapse

Don’t knock being perpetually high. Some of my best code I wrote in my mid-20s

GreatAlbatross@feddit.uk on 26 Jul 2024 09:31 next collapse

The workload that’s starting now, is spotting bad code written by colleagues using AI, and persuading them to re-write it.

“But it works!”

‘It pulls in 15 libraries, 2 of which you need to manually install beforehand, to achieve something you can do in 5 lines using this default library’

andallthat@lemmy.world on 26 Jul 2024 10:33 next collapse

TBH those same colleagues were probably just copy/pasting code from the first google result or stackoverflow answer, so arguably AI did make them more productive at what they do

skillissuer@discuss.tchncs.de on 26 Jul 2024 11:10 next collapse

yay!! do more stupid shit faster and with more baseless confidence!

rozodru@lemmy.world on 26 Jul 2024 11:50 collapse

2012 me feels personally called out by this. fuck 2012 me that lazy fucker. stackoverflow was my “get out of work early and hit the bar” card.

JackbyDev@programming.dev on 26 Jul 2024 12:23 next collapse

I was trying to find out how to get human readable timestamps from my shell history. They gave me this crazy script. It worked but it was super slow. Later I learned you could do history -i.

GreatAlbatross@feddit.uk on 26 Jul 2024 13:52 next collapse

Turns out, a lot of the problems in nixland were solved 3 decades ago with a single flag on built-in utilities.

JackbyDev@programming.dev on 26 Jul 2024 14:21 collapse

Apart from me not reading the manual (or skimming it too quickly), I might have asked the LLM to check the history file rather than the command. Idk. I honestly didn’t know the history command did anything different than just printing the history file.

masterofn001@lemmy.ca on 26 Jul 2024 16:10 collapse

man 3 history

info history

Also, your .bashrc file in your $HOME dir contains env variables you can set (e.g. HISTTIMEFORMAT or HISTSIZE) to modify the behavior of the history function.

JackbyDev@programming.dev on 26 Jul 2024 17:12 next collapse

I really need to alias man to man -a.

masterofn001@lemmy.ca on 26 Jul 2024 22:46 collapse

I man -k a lot.

JackbyDev@programming.dev on 30 Jul 2024 19:31 collapse

What’s that?

masterofn001@lemmy.ca on 30 Jul 2024 21:22 collapse

The option -k for the command man allows you to search the manual pages for specific terms.

Similar to the command apropos

Examples of both in the image

[screenshot: example output of man -k and apropos]

trolololol@lemmy.world on 26 Jul 2024 22:45 collapse

Oh I need to learn more

masterofn001@lemmy.ca on 26 Jul 2024 22:55 collapse

Honestly, I thought I knew lots.

Then, one day, I decided to read man intro

Then I knew I knew I didn’t know much.

I still don’t.

But I now have a much better grasp of what/how.

bricklove@midwest.social on 26 Jul 2024 13:54 next collapse

I didn’t know about this. Thank you for the knowledge fellow human!

trolololol@lemmy.world on 26 Jul 2024 22:45 collapse

I don’t run crazy scripts on my machine. If I don’t understand it, it’s not safe enough.

That’s how you get pranked and hacked.

ILikeBoobies@lemmy.ca on 26 Jul 2024 14:46 next collapse

I asked it to spot a typo in my code; it worked, but it rewrote my classes for each function that called them.

morbidcactus@lemmy.ca on 26 Jul 2024 15:39 collapse

I gave it a fair shake after my team members were raving about it saving time last year. I tried an SFTP function and some Terraform modules, and man, both of them just didn’t work. It did, however, do a really solid job of explaining some data operation functions I wrote, which I was really happy to see. I do try to add a detail block to my functions and be explicit with typing where appropriate, so that probably helped some, but yeah, I was actually impressed by that. For generation though - maybe it’s better now, but I still prefer to pull up the documentation, as I spent more time debugging the crap it gave me than it would have taken to piece it together myself.

I’d use an LLM tool for interactive documentation and reverse engineering aids, though; I personally think that’s where it shines. Otherwise I’m not sold on the “gen AI will somehow fix all your problems” hype train.

NikkiDimes@lemmy.world on 26 Jul 2024 16:42 collapse

I think the best current use case for AI when it comes to coding is autocomplete.

I hate coding without Github Copilot now. You’re still in full control of what you’re building, the AI just autocompletes the menial shit you’ve written thousands of times already.

When it comes to full applications/projects, AI still has some way to go.

morbidcactus@lemmy.ca on 26 Jul 2024 18:47 collapse

I can get that for sure. I did see a client using it for debugging, which seemed interesting as well - it made an attempt to narrow down where the error occurred and what actually caused it.

NikkiDimes@lemmy.world on 27 Jul 2024 00:04 collapse

I’ll do that too! In the actual code you can just write something like

// Q: Why isn't this working as expected?
// A: 

and it’ll auto complete an answer based on the code. It’s not always 100% on point, but it usually leads you in the right direction.

dejected_warp_core@lemmy.world on 26 Jul 2024 19:35 collapse

But I don’t like using Argparse!

Melvin_Ferd@lemmy.world on 26 Jul 2024 11:34 next collapse

You all are nuts for not seeing this article for what it is

rekorse@lemmy.world on 26 Jul 2024 11:41 collapse

Which is?

drislands@lemmy.world on 26 Jul 2024 12:04 next collapse

A hit-piece commissioned by the Joker to distract you from his upcoming bank heist!!!

Melvin_Ferd@lemmy.world on 26 Jul 2024 13:00 collapse

Replace the Joker with the media, and “distract you from the bank heist” with “convince you to hate AI” - then yes.

FlyingSquid@lemmy.world on 26 Jul 2024 13:08 next collapse

Do convince us why we should like something which is a massive ecological disaster in terms of fresh water and energy usage.

Feel free to do it while denying climate change is a problem if you wish.

Womble@lemmy.world on 26 Jul 2024 15:34 next collapse

AI is a rounding error in terms of energy use. Creating chatGPT4 plus a whole year of its worldwide usage comes out to less than 1% of the energy Americans burn driving in one day.

FlyingSquid@lemmy.world on 26 Jul 2024 15:37 collapse

I think I’ll go with Yale over ‘person on the Internet who ignored the water part.’

e360.yale.edu/…/artificial-intelligence-climate-e…

From that article:

Estimates of the number of cloud data centers worldwide range from around 9,000 to nearly 11,000. More are under construction. The International Energy Agency (IEA) projects that data centers’ electricity consumption in 2026 will be double that of 2022 — 1,000 terawatts, roughly equivalent to Japan’s current total consumption.

Womble@lemmy.world on 26 Jul 2024 15:47 collapse

Forgive me for not trusting an article that says AI will use a petawatt within the next two years. Either the person who wrote it doesn’t understand the difference between energy and power, or they are very sloppy.

Chat GPT took 50 GWh to train [source]

Americans burn 355 million gallons of gasoline a day [source] and at 33.5 kWh/gal [source] that comes out to 12,000 GWh per day burnt in gasoline.
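
As a quick back-of-envelope check of those numbers in Python:

    gallons_per_day = 355e6       # US gasoline burned per day, per the source above
    kwh_per_gallon = 33.5         # energy content of gasoline, per the source above
    gasoline_gwh_per_day = gallons_per_day * kwh_per_gallon / 1e6
    print(f"{gasoline_gwh_per_day:,.0f} GWh/day")        # ~11,893, i.e. roughly 12,000

    training_gwh = 50             # claimed ChatGPT training energy
    print(f"{training_gwh / gasoline_gwh_per_day:.2%}")  # training ~ 0.42% of one day's driving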

Water usage is more balanced; depending on where the data centres are, it can either be a significant problem or not at all. The water doesn’t vanish, it just goes back into the air, but that can be problematic if it is a significant draw on local freshwater sources - e.g. using river water just before it flows into the sea: zero issue; using a ground aquifer in a desert: big problem.

FlyingSquid@lemmy.world on 26 Jul 2024 15:51 next collapse

Training is already over. This has nothing to do with training, so that is irrelevant. This is about how much power is needed as it is used more and more. I think you know that.

Also, just because cars emit a lot of CO2 doesn’t mean that other sources that emit a lot of CO2, but less than cars, are a good thing.

The water doesn’t vanish, it just goes back into the air,

Cool, tell that to all the people who rely on glaciers for their fresh water. That alone includes a huge percentage of people in India and China.

But really, what you’re telling me is that studies and scientists are wrong and you’re right. Cool. Good luck convincing people of that.

Womble@lemmy.world on 26 Jul 2024 15:58 collapse

This New Yorker article estimates GPT usage at 0.5 GWh a day, which comes out to 0.0042% of the energy burnt just in vehicle gasoline per day in the USA (and this is for worldwide usage of chatGPT).

I’m not asking you to trust me at all, I’ve listed my sources, if you disagree with any of them or multiplying three numbers together that’s fine.

Cool, tell that to all the people who rely on glaciers for their fresh water. That only includes a huge percentage of people in India and China.

Yes, if you read my last reply, I answered that directly. Water usage can be a big issue, or it can be a non-issue; it’s locale-dependent.

FlyingSquid@lemmy.world on 26 Jul 2024 16:03 collapse

What New Yorker article? You didn’t link to one. I, however, linked to Yale University which has a slightly better track record on science than The New Yorker.

And, again, you are arguing that emitting less CO2 is a good thing. It is not.

And if water can be a big issue, why is AI a good thing when it uses it up? You can say “people shouldn’t build data centers in those locations,” but they are. And the world doesn’t run on “shouldn’t.”

Edit: Now you linked to it. It’s paywalled, which means I can’t read it and I doubt you did either.

Womble@lemmy.world on 26 Jul 2024 16:10 next collapse

Apologies, I didn’t post the link, it’s edited now.

If you want to take issue with all energy usage, that’s fine; it’s a position to take. But it’s quite a fringe one, given that harnessing energy is what gives us the quality of life we have. Thankfully electricity is one of the easiest forms of energy to decarbonise, and that is already happening rapidly with solar and wind power; we need to transition more of our energy usage to it in order to reduce fossil fuel usage. My main point is that this railing against AI energy usage is akin to the whole plastic straw ban: mostly performative, and distracting from the places where truly vast amounts of fossil fuels are burnt that need to be tackled urgently.

You can say “people shouldn’t build data centres in those locations,” but they are. And the world doesn’t run on “shouldn’t.”

I’m 100% behind forcing data centres to use sustainable water sources or other methods of cooling. But that is a far cry from AI energy consumption being a major threat; the vast majority of data centre usage isn’t AI anyway, it’s serving websites like the one we are talking on right now.

FlyingSquid@lemmy.world on 26 Jul 2024 16:14 next collapse

Apologies, I didn’t post the link, it’s edited now.

Yes, and it’s paywalled, so I can’t read it. I think you knew that. It could say anything.

I’m 100% behind forcing data centres to use sustainable water sources or other methods of cooling.

Cool, good luck with that happening.

But that is a far cry from AI energy consumption being a major threat,

A different subject from water. You keep trying to get away from the water issue. I also think you know why you’re doing that.

Also, define threat. It contributes to climate change. It gets rid of potable water. I’d call that a threat.

By the way, there is nowhere in the U.S. where water is not going to be a problem soon.

geographical.co.uk/…/us-groundwater-reserves-bein…

But hey, we can just move the servers to the ocean, right? Or maybe outer space! It’s cold!

Womble@lemmy.world on 26 Jul 2024 16:16 collapse

Ok, you just want to shout, not discuss, so I won’t engage any further.

FlyingSquid@lemmy.world on 26 Jul 2024 16:18 collapse

That’s a nice cop-out there since nothing I said could remotely be considered shouting and your New Yorker article in no way supported your point.

rekorse@lemmy.world on 29 Jul 2024 12:30 collapse

Why can’t we analyze AI on its own merits? We don’t base our decisions on whether an idea is more or less polluting than automobiles. We can look at what we are getting for what’s being put into it.

The big tech companies could scrap their AI tech today and it wouldnt change most peoples lives.

Womble@lemmy.world on 26 Jul 2024 16:13 collapse

Whole article for ref since you can’t access it for whatever reason (it’s not very nice assuming bad faith like that, btw)

In 2016, Alex de Vries read somewhere that a single bitcoin transaction consumes as much energy as the average American household uses in a day. At the time, de Vries, who is Dutch, was working at a consulting firm. In his spare time, he wrote a blog, called Digiconomist, about the risks of investing in cryptocurrency. He found the energy-use figure disturbing.

“I was, like, O.K., that’s a massive amount, and why is no one talking about it?” he told me recently over Zoom. “I tried to look up some data, but I couldn’t really find anything.” De Vries, then twenty-seven, decided that he would have to come up with the information himself. He put together what he called the Bitcoin Energy Consumption Index, and posted it on Digiconomist. According to the index’s latest figures, bitcoin mining now consumes a hundred and forty-five billion kilowatt-hours of electricity per year, which is more than is used by the entire nation of the Netherlands, and producing that electricity results in eighty-one million tons of CO2, which is more than the annual emissions of a nation like Morocco. De Vries subsequently began to track the electronic waste produced by bitcoin mining—an iPhone’s worth for every transaction—and its water use—which is something like two trillion litres per year. (The water goes toward cooling the servers used in mining, and the e-waste is produced by servers that have become out of date.)

Last year, de Vries became concerned about another energy hog: A.I. “I saw that it has a similar capability, and also the potential to have a similar growth trajectory in the coming years, and I felt immediately prompted to make sure people are aware that this is also energy-intensive technology,” he explained. He added a new tab to his blog: “AI sustainability.” In a paper he published last fall, in Joule, a journal devoted to sustainable energy, de Vries, who now works for the Netherlands’ central bank, estimated that if Google were to integrate generative A.I. into every search, its electricity use would rise to something like twenty-nine billion kilowatt-hours per year. This is more than is consumed by many countries, including Kenya, Guatemala, and Croatia.

“There’s a fundamental mismatch between this technology and environmental sustainability,” de Vries said. Recently, the world’s most prominent A.I. cheerleader, Sam Altman, the C.E.O. of OpenAI, voiced similar concerns, albeit with a different spin. “I think we still don’t appreciate the energy needs of this technology,” Altman said at a public appearance in Davos. He didn’t see how these needs could be met, he went on, “without a breakthrough.” He added, “We need fusion or we need, like, radically cheaper solar plus storage, or something, at massive scale—like, a scale that no one is really planning for.”


Last week, the International Energy Agency announced that energy-related global CO2 emissions rose, yet again, in 2023, to more than thirty-seven billion metric tons. The increase comes at a time when the whole world is supposedly striving to reach net-zero emissions, and it indicates that global efforts are, to put it mildly, falling short. Much of the increase in emissions came from China, and most of it was driven by century-old technologies, such as the internal-combustion engine. So data centers are, for now at least, a small part of the problem. Still, as the use of A.I. ramps up and bitcoin prices reach new heights, the question is: How can the world reach net zero if it keeps inventing new ways to consume energy? (In the U.S., data centers now account for about four per cent of electricity consumption, and that figure is expected to climb to six per cent by 2026.)

Mining cryptocurrencies like bitcoin eats up electricity owing to the way the system was set up. To acquire bitcoin (and other currencies that rely on a similar scheme), miners compete to answer cryptographic riddles. Winning the competition takes a lot of computing power. As a result, server farms devoted to crypto mining tend to be situated in parts of the world where electricity is cheap. China used to lead the world in crypto mining, but it imposed a ban on the practice in 2021, and now the U.S. is No. 1. A few months ago, the U.S. Department of Energy tried to compel mining concerns to report their energy use, but in February a Texas judge issued a temporary restraining order blocking the effort. (According to the White House Office of Science and Technology

FlyingSquid@lemmy.world on 26 Jul 2024 16:17 collapse

Your link is just about Google’s energy use, still says it uses a vast amount of energy, and says that A.I. is partially responsible for climate change.

It even quotes that moron Altman saying that there’s not enough energy to meet their needs and something new needs to be developed.

I have no idea why you think this supports your point at all.

Womble@lemmy.world on 26 Jul 2024 16:28 collapse

Artificial intelligence requires a lot of power for much the same reason. The kind of machine learning that produced ChatGPT relies on models that process fantastic amounts of information, and every bit of processing takes energy. When ChatGPT spits out information (or writes someone’s high-school essay), that, too, requires a lot of processing. It’s been estimated that ChatGPT is responding to something like two hundred million requests per day, and, in so doing, is consuming more than half a million kilowatt-hours of electricity. (For comparison’s sake, the average U.S. household consumes twenty-nine kilowatt-hours a day.)

That was the only bit I was referring to, as a source for the ~0.5 GWh/day energy usage figure for GPT. I agree that what Altman says is worthless, or worse, deliberately manipulative to keep the VC money flowing into OpenAI.
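
For scale, a quick back-of-the-envelope check using only the article’s figures:

```python
daily_kwh = 500_000             # ChatGPT's estimated daily consumption
requests_per_day = 200_000_000  # estimated requests per day
household_kwh_per_day = 29      # average U.S. household, per the article

print(daily_kwh / requests_per_day * 1000)       # ~2.5 Wh per request
print(round(daily_kwh / household_kwh_per_day))  # ~17,241 households' daily use
```

So a single request is cheap; it’s the aggregate, roughly 17,000 households’ worth of electricity every day, that adds up.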

FlyingSquid@lemmy.world on 26 Jul 2024 16:30 collapse

I see, so if we ignore the rest of the article entirely, your point is supported. What an odd way of trying to prove a point.

Also, I guess this was a lie:

Ok, you just want to shout not discuss so I wont engage any further.

Although since it was a lie, I’d love you to tell me what you think I was shouting about.

rekorse@lemmy.world on 29 Jul 2024 12:27 collapse

They aren’t just taking water no one was using.

Melvin_Ferd@lemmy.world on 26 Jul 2024 23:40 collapse

I wrote this and fed it through ChatGPT to help make it more readable. To me that’s pretty awesome. If I wanted, I could have it written like an Elton John song. If that doesn’t convince you it’s fun and worth it, then maybe the argument below could. Or not. Either way, I like it.


I don’t think I’ll convince you, but there are a lot of arguments to make here.

I heard a large AI model is equivalent to the emissions from five cars over its lifetime. And yes, the water usage is significant—something like 15 billion gallons a year just for a Microsoft data center. But that’s not just for AI; data centers are something we use even if we never touch AI. So, absent AI, it’s not like we’re up in arms about the waste and usage from other technologies. AI is being singled out—it’s the star of the show right now.

But here’s why I think we should embrace it: the potential. I’m an optimist and I love technology. AI bridges gaps in so many areas, making things that were previously difficult much easier for many people. It can be an equalizer in various fields.

The potential with AI is fascinating to me. It could bring significant improvements in many sectors. Think about analyzing and optimizing power grids, making medical advances, improving economic forecasting, and creating jobs. It can reduce mundane tasks through personalized AI, like helping doctors take notes and process paperwork, freeing them up to see more patients.

Sure, it consumes energy and has costs, but its potential is huge. It’s here and advancing. If we keep letting the media convince us to hate it, this technology will end up hoarded by elites and possibly even made illegal for the rest of us. Imagine having a pocket advisor for anything—mechanical issues, legal questions, gardening problems, medical concerns. We’re not there yet, but remember, the first cell phones were the size of a brick. The potential is enormous, and considering all the things we waste energy and resources on, this one’s costs should be weighed against its benefits.

FlyingSquid@lemmy.world on 26 Jul 2024 23:41 next collapse

Not being able to use your own words to explain something to me, and instead having the thing that is an ecological disaster and that also lies all the time explain it to me, really only reinforces my point that there’s no reason to like this technology.

Melvin_Ferd@lemmy.world on 26 Jul 2024 23:50 collapse

They are my own words. I wrote out the whole thing, but I was never good with grammar, and I fully admit that often what I write is confusing or ambiguous. I can leverage ChatGPT the same way I would leverage spell check in Word. I don’t see any problems there.

But if you don’t mind, I’m interested in the points discussed.

FlyingSquid@lemmy.world on 26 Jul 2024 23:57 collapse

Ok, let’s look at your own words then:

I heard a large AI model is equivalent to the emissions from five cars over its lifetime.

Cool, I hear lots of things. Where’s the evidence?

So, absent AI, it’s not like we’re up in arms about the waste and usage from other technologies. AI is being singled out—it’s the star of the show right now.

Who is “we”? I am not happy about any of it, but especially when it is something not especially useful (you could have used spelling and grammar checkers that predate AI by many years, but you decided to waste water instead).

And I don’t really care about the potential of an orphan-crushing machine as long as we let it keep crushing orphans.

I love this last part the best though:

Sure, it consumes energy and has costs

We can just forget about those because you didn’t want to use standard grammar and spell checkers, and because it supposedly has the potential to do a bunch of things it can’t actually do. Awesome. Totally worth the end of civilization.

Melvin_Ferd@lemmy.world on 27 Jul 2024 00:06 collapse

Cool, I hear lots of things. Where’s the evidence?

technologyreview.com/…/training-a-single-ai-model…

It’s not crushing orphans. It’s solving advanced problems that human brains can’t, and reducing the time between discoveries. It’s also just fun to play with, and it helps everyone access tools that speed everything up. It’s only going to get better.

It does more than spell checking, so that’s not a sound argument.

Everything in life has a cost. We have to weigh the benefits against the costs. AI is potentially the greatest benefit we could see in our lifetime.

FlyingSquid@lemmy.world on 27 Jul 2024 00:34 collapse

That is training, not use. You are being dishonest.

Melvin_Ferd@lemmy.world on 27 Jul 2024 00:50 collapse

And what is the usage?

[deleted] on 27 Jul 2024 01:35 collapse

.

Melvin_Ferd@lemmy.world on 27 Jul 2024 01:38 collapse

It’s not research?

[deleted] on 27 Jul 2024 01:58 collapse

.

Melvin_Ferd@lemmy.world on 27 Jul 2024 02:02 collapse

Any examples of what they’re doing to exploit us with it?

Most places I’ve seen are trying to find ways to incorporate AI to help check for errors and reduce time on tasks.

It’s not like AI is the cause of being exploited either. But it does assist me when I’m studying for a new role, building a resume, and upskilling on my own time.

And look, I’m aware I’m taking AI’s side. I know most companies would fire anyone and replace them with a machine if they could. But I’m still better off with this technology if it leads to better medicine or gives us access to things that were unreachable or difficult to access in the past. It’s a two-way street. But it’s like people on my street keep putting up barriers trying to make everyone take the long route.

If there’s one thing in life that I can’t believe others just don’t see, it’s how the rich embrace things that the rest reject, and often that thing is what contributes to the success of the rich. They’re embracing it for a reason. It’s a force multiplier. It reduces workloads. Why the hell are we acting like it’s some great sin? We should be fighting to keep it from them and for us, instead of the other way around.

[deleted] on 27 Jul 2024 02:27 collapse

.

Melvin_Ferd@lemmy.world on 27 Jul 2024 03:32 collapse

I’m not really following. I thought you were saying it was about exploiting workers. Now it’s about Trump. I really can’t think of what this small group would accomplish that they don’t already accomplish by hiring quants to their 500 year old think tank. The difference with AI is that we now have our own force multiplier threatening their power.

Partially why I think the media is driving us to be so against AI.

I see the same articles about AI as I do with any other propaganda like this; it all has a familiar smell. My gut is telling me these small groups of powerful people do not want us to embrace AI.

SirDerpy@lemmy.world on 27 Jul 2024 04:07 collapse

I’m not really following

That’s because the truth sucks and your brain is rejecting it in defense of your mental health.

500 year old think tank

force multiplier

There’s one. What else?

Melvin_Ferd@lemmy.world on 26 Jul 2024 23:41 collapse

For the curious, the message rewritten as lyrics for an Elton John song:

(Verse 1)
I don’t think I’ll convince you, but I’ve got a tale to tell,
They say AI’s like five cars, burning fuel and raising hell.
And the water that it guzzles, like rivers running dry,
Fifteen billion gallons, under Microsoft’s sky.

(Pre-Chorus)
But it’s not just AI, oh, it’s every data node,
Even if you never touch it, it’s a heavy load.
We point fingers at AI, like it’s the star tonight,
But let me tell you why I think it shines so bright.

(Chorus)
Oh, the potential, can’t you see,
It’s the future calling, setting us free.
Bridging gaps and making life easier,
An equalizer, for you and me.

(Verse 2)
I’m an optimist, a techie at heart,
AI could change the world, give us a brand new start.
From power grids to medicine, it’s a helping hand,
Economic dreams and jobs across the land.

(Pre-Chorus)
Yes, it drinks up energy, but what’s the price to pay?
For the chance to see the mundane fade away.
Imagine doctors with more time to heal,
While AI handles notes, it’s a real deal.

(Chorus)
Oh, the potential, can’t you see,
It’s the future calling, setting us free.
Bridging gaps and making life easier,
An equalizer, for you and me.

(Bridge)
If we let the media twist our minds,
We’ll lose this gift to the elite, left behind.
But picture this, a pocket guide for all,
From car troubles to legal calls.

(Chorus)
Oh, the potential, can’t you see,
It’s the future calling, setting us free.
Bridging gaps and making life easier,
An equalizer, for you and me.

(Outro)
First cell phones were the size of a brick,
Now they’re magic in our hands, technology so quick.
AI’s got the power, to change the way we live,
So let’s embrace it now, there’s so much it can give.

(Chorus)
Oh, the potential, can’t you see,
It’s the future calling, setting us free.
Bridging gaps and making life easier,
An equalizer, for you and me.

(Outro)
Oh, it’s the future, it’s the dream,
AI’s the bright light, in the grand scheme.

rekorse@lemmy.world on 29 Jul 2024 12:25 collapse

This is the stupidest shit I’ve seen yet.

We don’t care about other data centers as much because we get a service in return that people want.

Most people didn’t ask for or want AI, didn’t agree to its costs, and now have to deal with it potentially taking their jobs.

But go ahead and keep posting idiotic and selfish posts about how you like it so much and it’s so fun and cool; look at my shitty song lyrics that make no fucking sense!

I’d say touch grass but the lyrics make me want to say touch instrument instead.

Melvin_Ferd@lemmy.world on 29 Jul 2024 21:29 collapse

Didn’t realize the world needs to create things and be driven by your personal wants and needs.

You sound like a republican complaining about immigrants.

Media has all of you in knots.

rekorse@lemmy.world on 30 Jul 2024 10:48 collapse

Never said it had to. Are you going to engage with anything I said or just call me stupid?

johnwilker@lemmy.world on 26 Jul 2024 13:48 collapse

Most folks don’t need an excuse to hate the internet-enabled lie generator that “AI” is.

Melvin_Ferd@lemmy.world on 26 Jul 2024 19:52 collapse

No, but most media moved quickly to present every article in a way that convinces people why they should hate it. It’s pack mentality, like when a popular kid starts spreading rumours about the new kid in class. People quickly adopt the common shared belief, and most of those beliefs are now media-driven.

AI is pretty cool new tech. Most people would have been anywhere from indifferent to interested in it if it were not for corporate media telling us all why we need to hate it.

I saw an article the other day about “people shitting on the beach” which was really an attack on immigrants. Media is now about forming opinions for us and we all accept it more than ever.

rekorse@lemmy.world on 29 Jul 2024 12:21 collapse

A majority of people have no use for AI, nor want it. Just because you and a subgroup of people like it doesn’t mean everyone else is an idiot being misled by the media.

Why exactly do you think the media wants people to hate AI anyway? Wouldn’t big corporations gain from automating news writing?

FarFarAway@startrek.website on 26 Jul 2024 13:02 collapse

The summary for the post kinda misses the mark on what the majority of the article is pushing.

Yes, the first part describes employees struggling with AI, but the majority of the article makes the case for hiring more freelancers and updating “outdated work models and systems…to unlock the full expected productivity value of AI.”

It essentially says that AI isn’t the problem, since freelancers can use it perfectly. So full time employees need to be “rethinking how to best do their work and accomplish their goals in light of AI advancements.”

rekorse@lemmy.world on 29 Jul 2024 12:32 collapse

The article is saying that instead of hiring more people, companies are trying to use AI to get the same output with fewer people. This leads to lost jobs.

It’s not common for people to actually be fired and directly replaced by AI. What happens is that normal turnover keeps turning, but companies won’t replace the lost jobs with as many people as before.

Personally, I don’t want to support any non-human-created art in any field, although I think there are use cases for AI in other fields.

_sideffect@lemmy.world on 26 Jul 2024 13:32 next collapse

Lmao, so instead of ai taking our jobs, it made us MORE jobs.

Thanks, “ai”!

kent_eh@lemmy.ca on 26 Jul 2024 15:51 collapse

Except it didn’t make more jobs, it just made more work for the remaining employees who weren’t laid off (because the boss thought the AI could let them have a smaller payroll)

pineapplelover@lemm.ee on 26 Jul 2024 15:00 next collapse

If used correctly, AI can be helpful and can assist with easy and menial tasks.

hswolf@lemmy.world on 26 Jul 2024 15:17 next collapse

It also helps you get a starting point when you don’t know how to ask a search engine the right question.

But people misinterpret its usefulness and think it can handle complex and context-heavy problems, which most of the time will result in hallucinated crap.

captainlezbian@lemmy.world on 26 Jul 2024 15:27 next collapse

And are those use cases common and publicized? Because when I see a novel tool with myriad uses advertised as “improves productivity”, I expect those trying to sell it to me to give me some vignettes, not just to tell my boss it’ll improve my productivity. And if I were in management, I’d want to know how it’ll do that beyond just saying “it’ll assist in easy and menial tasks”. Will it be easier than doing them myself? Many tools can improve efficiency on a task at a similar time and energy investment to the return. Are those tasks really so common? Will other tools be worse?

jjjalljs@ttrpg.network on 26 Jul 2024 16:20 next collapse

I mean if it’s easy you can probably script it with some other tool.

“I have a list of IDs and need to make them links to our internal tool’s pages” is easy and doesn’t need AI. That’s something a product guy was struggling with, and I solved it in like 30 seconds with a Google Sheet and concatenation.
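
For the curious, the whole fix is one concatenation. In a sheet it’s a single formula filled down a column; a minimal Python equivalent, with a made-up internal URL, looks like this:

```python
# BASE_URL is a hypothetical example; the real one would be your org's tool.
BASE_URL = "https://tools.example.internal/items/"

ids = ["10432", "10433", "10599"]  # the list of IDs in question
links = [BASE_URL + item_id for item_id in ids]
print("\n".join(links))
```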

silasmariner@programming.dev on 26 Jul 2024 16:56 collapse

Yeah, but the idea of AI in that kind of workflow is that the product guy can actually do it themselves, without asking you, and in less than 30 mins.

jjjalljs@ttrpg.network on 26 Jul 2024 17:00 collapse

Yeah but that’s like using an entire gasoline powered car to play a CD.

A competent product guy should be able to learn some simpler tools, like Google Sheets.

silasmariner@programming.dev on 26 Jul 2024 17:07 collapse

No arguments from me that it’s better if people are just better at their job, and I like to think I’m good at mine too, but let’s be real - a lot of people are out of their depth and I can imagine it can help there. OTOH is it worth the investment in time (from people who could themselves presumably be doing astonishing things) and carbon energy? Probably not. I appreciate that the tech exists and it needs to, but shoehorning it in everywhere is clearly bollocks. I just don’t know yet how people will find it useful and I guess not everyone gets that spending an hour learning to do something that takes 10s when you know how is often better than spending 5 mins making someone or something else do it for you… And TBF to them, they might be right if they only ever do the thing twice.

balder1991@lemmy.world on 26 Jul 2024 18:52 collapse

I think the actual problem here is that if the product people can’t learn such a simple thing by themselves, they also won’t be able to correctly prompt the LLM for their use case.

That said, I do think LLMs can boost productivity a lot. I’m learning a new framework, and since there are so many details to learn, it’s fast to ask ChatGPT the proper way to do X in this framework, etc. Although that only works because I already studied the framework’s foundational concepts first.

silasmariner@programming.dev on 26 Jul 2024 19:29 collapse

I think the actual problem is that they won’t know when they’ve got something that compiles but is wrong… I dunno though. I’ve never seen someone doing this and I can only speculate tbh. I only ever asked ChatGPT a couple of times, as a joke to myself when I got stuck, and it spouted completely useless nonsense both times… Although on one occasion the wrong code it produced looked like it had the pattern of a good idiom behind it and I stole that.

fine_sandy_bottom@discuss.tchncs.de on 27 Jul 2024 02:13 collapse

Well yes, but it’s not often I encounter an easy or menial task for which AI is the best solution.

For example, searching documentation is usually more informative than asking a bot trained on said documentation.

JohnnyH842@lemmy.world on 26 Jul 2024 16:15 next collapse

Admittedly I only skimmed the article, but I think one of the major problems with a study like this is how broad “AI” really is. MS Copilot is just Bing search in a different form unless you have it hooked up to your organization’s data stores, collaboration platforms, productivity applications, etc., and on its own it is not really helpful at all. Lots of companies I speak with are in a Copilot pilot phase that doesn’t really show much value, because the tool doesn’t have access to the organization’s data; granting that access is a big security challenge. On the other hand, a chat bot inside a specific product, trained on that product specifically and with access to the data it needs, can return valuable answers to the prompts it helps you write, and that can be pretty powerful.
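
To illustrate what “has access to the data” means in practice, here’s a toy sketch of retrieval-augmented prompting (the documents and policies are invented; real setups use embeddings and a vector store, not keyword overlap):

```python
# Toy "grounding": pick the most relevant doc and stuff it into the prompt.
docs = {
    "vacation-policy.md": "Employees accrue 1.5 vacation days per month.",
    "expense-policy.md": "Expenses over $500 need director approval.",
}

def retrieve(question: str) -> str:
    words = set(question.lower().split())
    # naive relevance score: words shared with the question
    return max(docs.values(), key=lambda text: len(words & set(text.lower().split())))

question = "Do expenses over $500 need approval?"
prompt = f"Answer using only this context:\n{retrieve(question)}\n\nQuestion: {question}"
print(prompt)  # this, not the bare question, is what the model sees
```

Without that retrieval step, the bot is answering from its generic training data, which is exactly the “Bing search in a different form” problem.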

0laura@lemmy.world on 27 Jul 2024 01:45 collapse

The larger context sizes specifically are what I’m fascinated by. Imagine running an LLM locally and feeding it all your data: appointments, relationships, notes, whatever. You could also connect it to smart home devices. I really need to get my hands on a GPU with 16 gigs of VRAM.
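
As a rough sketch of the local part, assuming llama-cpp-python and a placeholder GGUF model file (actually feeding it all your real data would need a retrieval layer on top; this just shows big-context local inference):

```python
# pip install llama-cpp-python; model path and sizes are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-7b-model.gguf",  # hypothetical local GGUF file
    n_ctx=16384,                        # the "larger context" part
    n_gpu_layers=-1,                    # offload every layer to the GPU
)

out = llm.create_chat_completion(messages=[
    {"role": "system", "content": "Answer only from the notes provided."},
    {"role": "user", "content": "Notes: dentist Tuesday 3pm; call mum Friday. "
                                "What appointments do I have this week?"},
])
print(out["choices"][0]["message"]["content"])
```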

alienanimals@lemmy.world on 26 Jul 2024 16:38 next collapse

The billionaire owner class continues to treat everyone like shit. They blame AI and the idiots eat it up.

tvbusy@lemmy.dbzer0.com on 26 Jul 2024 19:51 next collapse

This study failed to take into consideration the need to feed information to AI. Companies now prioritize feeding information to AI over actually making it usable for humans. Who cares about analyzing the data? Just give it to AI to figure out. Data can’t be analyzed by humans anymore? Just ask AI. The AI can’t figure it out? Give it more data so it can. Rinse, repeat. This is a race to the bottom where information becomes useless to humans.

iAvicenna@lemmy.world on 26 Jul 2024 22:47 next collapse

Because on top of your duties, you now have to check whatever the AI is doing in place of the employee it replaced.

dezmd@lemmy.world on 27 Jul 2024 00:59 next collapse

The Upwork Research Institute

Not exactly a paragon of rigorous scientific study.

Sanctus@lemmy.world on 27 Jul 2024 00:59 next collapse

AI is better when I use it for item generation. It kicks ass at generating loot drops for encounters. All I really have to do is adjust item names if it’s not a mundane weapon. I do occasionally change an item completely because its effects can get bland. But I don’t do much more than that.
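
If anyone wants to script that rather than paste into a chat window, here’s a rough sketch using the OpenAI Python client; the model name, prompts, and party details are all assumptions, and any chat-capable LLM (local or hosted) would work the same way:

```python
# pip install openai; assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You generate tabletop RPG loot. Output 5 items, one per "
                    "line, each with a rarity and a one-line effect."},
        {"role": "user",
         "content": "Loot for a level-4 party that just cleared a bandit camp "
                    "in a swamp."},
    ],
)
print(response.choices[0].message.content)
```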

ClamDrinker@lemmy.world on 27 Jul 2024 01:41 collapse

That’s because you’re using AI for the correct thing. As others have pointed out, if AI usage is enforced (like in the article), chances are they’re not using AI correctly. It’s not a miracle cure for everything and should just be used where it’s useful. It’s great for brainstorming. Game development (especially on the indie side of things) really benefits from being able to produce more with less. Or are you using it for DnD?

Ragnarok314159@sopuli.xyz on 27 Jul 2024 01:54 next collapse

Wait, LLMs can play DnD? You mean…I might finally be able to play that game?!? Hurray!

ShaggySnacks@lemmy.myserv.one on 27 Jul 2024 02:04 next collapse

Who needs friends when you can play with LLMs!

Ragnarok314159@sopuli.xyz on 27 Jul 2024 14:22 collapse

Right now the choice is Dark Souls bosses who are mean, scripted stories (although BG3 is good), or people online who have sex with my mother.

LLM chat bots just open up new possibilities.

rottingleaf@lemmy.world on 27 Jul 2024 08:53 collapse

You mean…I might finally be able to play that game?!? Hurray!

Once every couple of weeks I go somewhere to play it or similar games. I can’t follow, feel awkward, get sensory overload and a headache, get terribly tired, and come home depressed over a wasted day.

That is, once in 3-5 games I feel that maybe it wasn’t that bad.

Ragnarok314159@sopuli.xyz on 27 Jul 2024 14:32 collapse

The groups I learned of were really weird about letting anyone else show up. Was told I had to form my own group and write my own adventures.

Thank you, fellow nerds.

rottingleaf@lemmy.world on 27 Jul 2024 14:39 collapse

It’s the other way around for me. I wanted to play in the Star Wars KotOR setting; one time one guy showed up (but only over voice call), another time my buddy agreed to play.

Then I wrote something in one DM’s setting, and only that DM showed up. He said the quest was actually cool with good ideas, yadda-yadda, mentioned it in another game, and later reused some of the moments in his own ones.

But me coming to other DMs’ games seems welcomed.

Was told I had to form my own group and write my own adventures.

I think they didn’t like you, or your way of playing busted something in the quest their DM wrote, or something like that.

Ragnarok314159@sopuli.xyz on 27 Jul 2024 15:33 collapse

It was probably the latter, because if they didn’t like me, that would be much worse for a multitude of reasons.

Sanctus@lemmy.world on 28 Jul 2024 01:10 collapse

I use it for tabletops, lol. I haven’t thrown any game dev ideas in there, but that might be because I already have a backlog of projects, ’cause I’m that guy.

Sk1ll_Issue@feddit.nl on 27 Jul 2024 02:05 next collapse

The study identifies a disconnect between the high expectations of managers and the actual experiences of employees

Did we really need a study for that?

postmateDumbass@lemmy.world on 27 Jul 2024 02:50 collapse

Knock-on effect: employees trying to Google answers to simpler questions are also stymied by AI.

superkret@feddit.org on 27 Jul 2024 08:56 collapse

The other 23% were replaced by AI (actually, their workload was added to that of the 77%).