A cheat sheet for why using ChatGPT is not bad for the environment (andymasley.substack.com)
from anus@lemmy.world to technology@lemmy.world on 01 May 04:28
https://lemmy.world/post/28938386

#technology


TootSweet@lemmy.world on 01 May 04:44 next collapse

ChatGPT

Arm yourself with knowledge

Bruh

Vanth@reddthat.com on 01 May 05:04 next collapse

Is environmental impact at the top of anyone’s list for why they don’t like ChatGPT? It’s not on mine, nor on that of anyone I’ve talked to.

The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers, and 2) rampant theft of IP to train LLMs.

The author moves away from a strict environmental focus, despite claims to the contrary in their intro:

This post is not about the broader climate impacts of AI beyond chatbots, or about whether AI is bad for other reasons

[…]

Other Objections: This is all a gimmick anyway. Why not just use Google? ChatGPT doesn’t give better information

… yet doesn’t address the most common criticisms.

Worse, the author accuses anyone who pauses to consider the negatives of ChatGPT of being absurdly illogical.

Being around a lot of adults freaking out over 3 Wh feels like I’m in a dream reality. It has the logic of a bad dream. Everyone is suddenly fixating on this absurd concept or rule that you can’t get a grasp of, and scolding you for not seeing the same thing. Posting long blog posts is my attempt to get out of the weird dream reality this discourse has created.

IDK what logical fallacy this is, but claiming people are “freaking out over 3 Wh” is very disingenuous.

Rating as basic content: 2/10, poor and disingenuous argument

Rating as example of AI writing: 5/10, I’ve certainly seen worse AI slop

anus@lemmy.world on 01 May 05:54 next collapse

Thank you for your considered and articulate comment

What do you think about the significant difference in attitude between comments here and in (quite serious) programming communities like lobste.rs/…/cheat_sheet_for_why_using_chatgpt_is_…

Are we in different echo chambers? Is ChatGPT a uniquely powerful tool for programmers? Is social media a fundamentally Luddite mechanism?

Rooki@lemmy.world on 01 May 06:55 next collapse

I would say GitHub Copilot (which uses a GPT model) uses more Wh than ChatGPT, because it gets blasted with more queries on average: the “AI” autocomplete triggers almost every time you stop typing, or on random occasions.

anus@lemmy.world on 01 May 09:25 collapse

I don’t think this answers the question

Saik0Shinigami@lemmy.saik0.com on 01 May 18:51 collapse

I don’t think this answers the question

They’re specifically showing you that in the use case you asked about, the assertions must change. Your question doesn’t fit the case you’re specifically asking about.

So no, it doesn’t answer the question… But your question has a bunch more caveats that must be accounted for, which you’re just straight-up missing.

anus@lemmy.world on 04 May 06:02 collapse

No, that is not how reasoned debate works; you have to articulate your argument, lest you just end up sloppily babbling talking points

Saik0Shinigami@lemmy.saik0.com on 04 May 06:03 collapse

If the premise of your argument is fundamentally flawed, then you’re not having a reasoned debate. You’re just a zealot.

anus@lemmy.world on 04 May 06:08 collapse

Please articulate why the premise of my argument is fundamentally flawed

Saik0Shinigami@lemmy.saik0.com on 04 May 06:16 collapse

I would say GitHub Copilot (which uses a GPT model) uses more Wh than ChatGPT, because it gets blasted with more queries on average: the “AI” autocomplete triggers almost every time you stop typing, or on random occasions.

They did… You just refuse to acknowledge it. It’s no longer a discussion of simply 3 Wh when GitHub Copilot is making queries every time you pause typing. It could easily equate to hundreds or even thousands of queries a day (if not rate-limited). That fully changes the scope of the argument.
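
Back-of-envelope, to make that scope change concrete (both per-query energy figures below are assumptions for illustration; neither company publishes them):

```python
# Hypothetical daily-energy comparison: Copilot autocomplete vs. chat use.
# Both per-query Wh figures are assumed, not published numbers.
completions_per_day = 1_000   # autocomplete fires on most typing pauses
wh_per_completion = 0.3       # assumed: completions are short, cheap calls
chats_per_day = 15            # a fairly heavy chat user
wh_per_chat = 3.0             # the ~3 Wh figure under discussion

copilot_wh = completions_per_day * wh_per_completion
chat_wh = chats_per_day * wh_per_chat
print(f"Copilot: {copilot_wh:.0f} Wh/day vs chat: {chat_wh:.0f} Wh/day")
# With these assumptions: Copilot 300 Wh/day vs chat 45 Wh/day
```

Even if each completion is far cheaper than a chat query, the sheer volume can dominate.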

anus@lemmy.world on 04 May 06:23 collapse

GitHub Copilot is not ChatGPT

Saik0Shinigami@lemmy.saik0.com on 04 May 06:32 collapse

Yet again… You fundamentally have the wrong answer…

en.wikipedia.org/wiki/GitHub_Copilot

GitHub Copilot is a code completion and automatic programming tool developed by GitHub and OpenAI

github.com/features/copilot

GitHub Copilot was literally developed WITH OpenAI, the creators of ChatGPT… and you can run o1, o3, and o4 directly in there.

docs.github.com/…/changing-the-ai-model-for-copil…

By default, Copilot code completion uses the GPT-4o Copilot, a fine-tuned GPT-4o mini based large language model (LLM).

It defaults to 4o mini.

anus@lemmy.world on 04 May 08:38 collapse

Thank you

None of this was true of Copilot for years, but I stand corrected as to the current state of affairs

Vanth@reddthat.com on 01 May 11:42 collapse

I’m curious if you can articulate the difference between being critical of how a particular technology is owned and managed versus being a Luddite?

anus@lemmy.world on 04 May 05:58 collapse

I think I’m on board with arguing against how LLMs are being owned and managed, so I don’t really have much to say

Saik0Shinigami@lemmy.saik0.com on 01 May 15:27 collapse

The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers and 2) rampant theft of IP to train LLM models.

My reason is that you can’t trust the answers regardless. Hallucinations are a rampant problem. Even if we managed to cut it down to 1 in 100 queries hallucinating, you couldn’t trust ANYTHING. We’ve seen well-trained and targeted AIs that don’t directly take user input (so can’t be easily manipulated), in Google search results, recommending that people put glue on their pizzas to make the cheese stick better… or claiming that geologists recommend eating a rock a day.

If a custom-tailored AI can’t cut it… the general ones are not going to be all that valuable without significant external validation/moderation.

Justas@sh.itjust.works on 01 May 18:47 next collapse

There is also the argument that a downpour of AI-generated slop is making the Internet in general less usable, hurting everyone (except the slop makers) by making true or genuine information harder to find and verify.

anus@lemmy.world on 04 May 05:53 collapse

What exactly is the argument?

anus@lemmy.world on 04 May 05:54 collapse

Basically no. What you’re calling tailored AI is actually low-cost AI. You’ll be hard-pressed, on the other hand, to get ChatGPT o3 to hallucinate at all

Saik0Shinigami@lemmy.saik0.com on 04 May 06:17 collapse

No, not basically no.

mashable.com/…/openai-o3-o4-mini-hallucinate-high…

By OpenAI’s own testing, its newest reasoning models, o3 and o4-mini, hallucinate significantly higher than o1.

Stop spreading misinformation. The company itself acknowledges that it hallucinates more than previous models.

anus@lemmy.world on 04 May 06:22 collapse

I stand corrected, thank you for sharing

I was commenting based on anecdotal experience, and I didn’t know there was a test specifically for this

I do notice that o3 is more overconfident and tends to find a source online from some forum and treat it as gospel

Which, while not correct, I would not treat as hallucination

Beppe_VAL@feddit.it on 01 May 05:06 next collapse

Even Sam Altman acknowledged last year the huge amount of energy needed by ChatGPT, and the need for an energy breakthrough…

anus@lemmy.world on 01 May 05:56 collapse

Do you hold Sam Altman’s opinion higher than the reasoning here? In general or just on this particular take?

Neverclear@lemmy.dbzer0.com on 01 May 07:05 collapse

What would Altman gain from overstating the environmental impact of his own company?

What if power consumption is not so much limited by the software’s appetite, but rather by the hardware’s capabilities?

anus@lemmy.world on 01 May 09:25 collapse

What would Altman gain from overstating the environmental impact of his own company?

You should consider the possibility that CEOs of big companies essentially always think very hard about how to talk about everything so that it benefits them

I can see the benefits, I can try to explain if you’re actually interested

Neverclear@lemmy.dbzer0.com on 01 May 22:19 collapse

What weighs more: the cost of taking people at their word, or the effort it takes to interpret the subtext of every interaction?

anus@lemmy.world on 04 May 05:51 collapse

I don’t understand the nature of your question

Neverclear@lemmy.dbzer0.com on 04 May 14:21 collapse

You seem to spend a lot of energy questioning people’s intentions, inventing reasons to doubt whether their intentions toward you are genuine. Some do deserve to be questioned, no doubt. It just seems draining, and for what goal?

Do you aim to be the sole determiner of truth? To never be duped again? To sharpen your skills as an investigator?

How much more creative energy could you put into the world by taking people at their word in all but the highest risk cases?

anus@lemmy.world on 04 May 14:50 collapse

Stoic desire to be informed and to be a force for good for others with like intentions

carrion0409@lemm.ee on 01 May 05:23 next collapse

Every time I see a post like this I lose a little more faith in humanity

anus@lemmy.world on 01 May 05:42 collapse

Every time I see a comment like this I lose a little more faith in Lemmy

[deleted] on 01 May 06:25 collapse

.

mEEGal@lemmy.world on 01 May 05:34 next collapse

this reeks of AI slop

anus@lemmy.world on 01 May 05:42 collapse

No it doesn’t

Dekkia@this.doesnotcut.it on 01 May 06:03 next collapse

I struggle to see why numerous scientists (and even Sam ‘AI’ Altman himself) would be wrong about this, but a random Substack post holds the truth.

anus@lemmy.world on 01 May 09:27 next collapse

  1. Have you read the post?

  2. If you’d like to refute the content on the basis of another scientist’s work, can you please provide a reference? I will read it

Takapapatapaka@lemmy.world on 01 May 14:02 collapse

Having read the entire post, I think there’s a misunderstanding:

  • This post is about ChatGPT and LLM chatbots in general, not AI as a whole.
  • The post claims to be 100% aligned with scientists in holding that AI as a whole is bad for the environment.
  • What it claims is that chatbots are only 1-3% of AI use and yet benefit 400 million people (the rest is mostly business use, serving enterprises or very specific needs), and therefore chatbots do not consume much by themselves (just as we could keep 1-3% of cars running and be just fine environmentally); a rough sketch of that proportion is below.
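
A minimal sketch of that proportionality argument (the total AI energy figure here is a made-up placeholder; only the 1-3% share and the 400 million users come from the article):

```python
# Illustrative only: total_ai_energy_twh is a hypothetical placeholder;
# the 1-3% share and 400M users are the article's own numbers.
total_ai_energy_twh = 100.0   # hypothetical annual AI energy use, in TWh
chatbot_share = 0.03          # upper end of the article's 1-3% claim
users = 400_000_000           # users the article says chatbots serve

chatbot_twh = total_ai_energy_twh * chatbot_share
kwh_per_user = chatbot_twh * 1e9 / users   # 1 TWh = 1e9 kWh
print(f"Chatbots: {chatbot_twh:.1f} TWh/yr, ~{kwh_per_user:.1f} kWh/user/yr")
# With these assumptions: 3.0 TWh/yr, ~7.5 kWh per user per year
```

Whatever the real total turns out to be, the per-user chatbot share scales down by that same 1-3% factor.
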
jonathan@lemmy.zip on 01 May 07:20 next collapse

ChatGPT energy costs are highly variable depending on context length and model used. How have you factored that in?

anus@lemmy.world on 01 May 09:23 collapse

This isn’t my article, and yes, that’s controlled for

NeilBru@lemmy.world on 01 May 07:52 next collapse

Self-hosted LLMs are the way.

[deleted] on 01 May 08:21 next collapse

.

NeilBru@lemmy.world on 01 May 10:37 collapse

Oof, ok, my apologies.

I am, admittedly, “GPU rich”; I have ~48GB of VRAM at my disposal on my main workstation, and 24GB on my gaming rig. Thus, I am using Q8 and Q6_L quantized .gguf files.

Naturally, my experience with the “fidelity” of my LLMs re: hallucinations would be better.
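
For anyone curious what that setup looks like, a minimal sketch with llama-cpp-python (the model path and parameters are placeholders, not a specific recommendation):

```python
from llama_cpp import Llama

# Load a quantized GGUF; n_gpu_layers=-1 offloads every layer to VRAM.
# The model path is a placeholder; substitute whatever quant fits your card.
llm = Llama(
    model_path="models/llama-3-70b-instruct.Q6_K.gguf",
    n_gpu_layers=-1,  # keep the whole model on the GPU if it fits
    n_ctx=8192,       # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize GGUF quantization."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```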

anus@lemmy.world on 01 May 09:16 collapse

I actually think that (presently) self-hosted LLMs are much worse for hallucinations

MonkderVierte@lemmy.ml on 01 May 10:15 next collapse

A cheat sheet on how to argue that your passion is positive.

anus@lemmy.world on 04 May 06:12 collapse

I’m not familiar with the term

Takapapatapaka@lemmy.world on 01 May 14:24 next collapse

I was very sceptical at first, but this article kinda convinced me. I think it still has some bad biases (it often considers only one ChatGPT request in its comparisons, when in reality you quickly make dozens of them; it often says “how weird to try and save tiny amounts of energy” when we already do that with lights when leaving rooms and water when brushing our teeth; and it focuses on energy for training, cooling, and generating electricity, not on the logistics and hardware required), but overall two arguments got me:

  • one ChatGPT request seems to consume around 3 Wh, which is relatively low
  • even with billions of requests daily, chatbots seem to represent less than 5% of AI power consumption; the rest is the real problem, and it lies in the hands of corporations.

Still, it probably can’t hurt to boycott that stuff, but it’d be more useful to use less social media, especially the kinds with videos or pictures, and to watch videos in 144p
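
For scale, a back-of-envelope total built on the article’s ~3 Wh figure (the daily request volume and the household comparison are assumptions; OpenAI has reportedly cited around a billion messages a day):

```python
# Rough totals from the article's ~3 Wh/request; the request volume and
# the household figure are assumptions added for scale.
wh_per_request = 3.0
requests_per_day = 1_000_000_000   # assumed ~1B queries/day

gwh_per_day = wh_per_request * requests_per_day / 1e9
twh_per_year = gwh_per_day * 365 / 1_000

households = twh_per_year * 1e9 / 10_500   # ~10,500 kWh/yr per US household
print(f"{gwh_per_day:.1f} GWh/day, ~{twh_per_year:.1f} TWh/yr, "
      f"~{households:,.0f} US households")
# With these assumptions: 3.0 GWh/day, ~1.1 TWh/yr, ~104,286 US households
```

Non-trivial in absolute terms, but small next to the other 95%+ of AI power use the article points at.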

half@lemy.lol on 01 May 15:27 next collapse

Username checks out

anus@lemmy.world on 04 May 06:12 collapse

🆗

superkret@feddit.org on 04 May 06:09 collapse

tl;dr: “Yes it is, but not as much as other things, so stop worrying.”

What a bullshit take.

anus@lemmy.world on 04 May 06:11 collapse

What makes this a bullshit take? Focusing attention on actual problems is a great way to make progress