Brian Eno: “The biggest problem about AI is not intrinsic to AI. It’s to do with the fact that it’s owned by the same few people” (musictech.com)
from cm0002@lemmy.world to technology@lemmy.world on 23 Mar 13:07
https://lemmy.world/post/27281388

#technology


ElPussyKangaroo@lemmy.world on 23 Mar 13:25 next collapse

Truer words have never been said.

aramis87@fedia.io on 23 Mar 13:37 next collapse

The biggest problem with AI is that they're illegally harvesting everything they can possibly get their hands on to feed it, they're forcing it into places where people have explicitly said they don't want it, and they're sucking up massive amounts of energy and water to create it, undoing everyone else's progress in reducing energy use, and raising prices for everyone else at the same time.

Oh, and it also hallucinates.

pennomi@lemmy.world on 23 Mar 14:19 next collapse

Eh I’m fine with the illegal harvesting of data. It forces the courts to revisit the question of what copyright really is and hopefully erodes the stranglehold that copyright has on modern society.

Let the companies fight each other over whether it’s okay to pirate every video on YouTube. I’m waiting.

Electricblush@lemmy.world on 23 Mar 14:38 next collapse

I would agree with you if the same companies challenging copyright (which protects the intellectual and creative work of “normies”) weren’t also aggressively wielding copyright against the same people they are stealing from.

With the amount of corporate power tightly integrated with governmental bodies in the US (and now with DOGE dismantling oversight), I fear that whatever comes out of this is a world where humans own nothing and corporations own everything. The death of free, independent thought and creativity.

Everything you do, say and create is instantly marketable, sellable by the major corporations and you get nothing in return.

The world needs something a lot more drastic than a copyright reform at this point.

cyd@lemmy.world on 24 Mar 02:58 collapse

It’s seldom the same companies, though; there are two camps fighting each other, like Godzilla vs. Mothra.

naught@sh.itjust.works on 23 Mar 14:39 next collapse

AI scrapers illegally harvesting data are destroying smaller and open source projects. Copyright law is not the only victim

thelibre.news/foss-infrastructure-is-under-attack…

interdimensionalmeme@lemmy.ml on 23 Mar 17:36 next collapse

In this case they just need to publish the code as a torrent. You wouldn’t set up a crawler if all the data was available in a torrent swarm.

cyd@lemmy.world on 24 Mar 03:04 collapse

That article is overblown. People need to configure their websites to be more robust against traffic spikes, news at 11.

Disrespecting robots.txt is bad netiquette, but honestly this sort of gentleman’s agreement is always prone to cheating. At the end of the day, when you put something on the net for people to access, you have to assume anyone (or anything) can try to access it.

naught@sh.itjust.works on 24 Mar 14:53 collapse

You think Red Hat & friends are just all bad sysadmins? SourceHut, maybe…

I think there’s a bit of both: poorly optimized/antiquated sites and a gigantic spike in unexpected and persistent bot traffic. The typical mitigations do not work anymore.

Not every site is, and not every site should have to be, optimized for hundreds of thousands of requests every day or more. Just because they can be doesn’t mean that it’s worth the time, effort, or cost.

catloaf@lemm.ee on 23 Mar 15:02 collapse

So far, the result seems to be “it’s okay when they do it”

selokichtli@lemmy.ml on 23 Mar 19:48 collapse

Yeah… Nothing to see here, people, go home, work harder, exercise, and don’t forget to eat your vegetables. Of course, family first and god bless you.

taladar@sh.itjust.works on 23 Mar 14:39 next collapse

I don’t care much about them harvesting all that data; what I do care about is that, despite essentially feeding all human knowledge into LLMs, they are still basically useless.

Sturgist@lemmy.ca on 23 Mar 15:05 next collapse

Oh, and it also hallucinates.

This is arguably a feature depending on how you use it. I’m absolutely not an AI acolyte. It’s highly problematic in every step. Resource usage. Training using illegally obtained information. This wouldn’t necessarily be an issue if people who aren’t tech broligarchs weren’t routinely getting their lives destroyed for this, and if the people creating the material being used for training also weren’t being fucked…just capitalism things I guess. Attempts by capitalists to cut workers out of the cost/profit equation.

If you’re using AI to make music, images or video… you’re depending on those hallucinations.
I run a Stable Diffusion model on my laptop. It’s kinda neat. I don’t make things for a profit, and now that I’ve played with it a bit I’ll likely delete it soon. I think there’s room for people to locally host their own models, preferably trained with legally acquired data, to be used as a tool to assist with the creative process. The current monetisation model for AI is fuckin criminal…

atrielienz@lemmy.world on 23 Mar 15:08 collapse

Tell that to the man who was accused by Gen AI of having murdered his children.

Sturgist@lemmy.ca on 23 Mar 15:30 collapse

Ok? If you read what I said, you’ll see that I’m not talking about using ChatGPT as an information source. I strongly believe that using LLMs as a search tool is incredibly stupid… for exactly this reason: it is so very confident even when relaying inaccurate or completely fictional information.
What I was trying to say, and I get that I may not have communicated that very well, was that Generative Machine Learning Algorithms might find a niche as creative process assistant tools. Not as a way to search for publicly available information on your neighbour or boss or partner. Not as a way to search for case law while researching the defence of your client in a lawsuit. And it should never be relied on to give accurate information about what colour the sky is, or the best ways to make a custard using gasoline.

Does that clarify things a bit? Or do you want to carry on using an LLM in a way that has been shown to be unreliable, at best, as some sort of gotcha…when I wasn’t talking about that as a viable use case?

atrielienz@lemmy.world on 23 Mar 16:01 collapse

lol. I was just saying in another comment that lemmy users 1. assume a level of knowledge of the person they are talking to or interacting with that may or may not be present in reality, and 2. are often intentionally mean to the people they respond to, so much so that they seem to take offense on purpose at even the most innocuous of comments. And here you are, downvoting my valid point, which is that regardless of whether we view it as a reliable information source, that’s what it is being marketed as, and results like this harm both the population using it and the people who have found good uses for it. And no, I don’t actually agree that it’s good for creative processes as an assistance tool, and a lot of that has to do with how you view the creative process and how I view it differently. Any other tool at the very least has a known quantity of what went into it; Generative AI does not have that benefit and is therefore problematic.

Sturgist@lemmy.ca on 23 Mar 16:44 collapse

and here you are, downvoting my valid point

[image]

Wasn’t me actually.

valid point

You weren’t really making a point in line with what I was saying.

regardless of whether we view it as a reliable information source, that’s what it is being marketed as, and results like this harm both the population using it and the people who have found good uses for it. And no, I don’t actually agree that it’s good for creative processes as an assistance tool, and a lot of that has to do with how you view the creative process and how I view it differently. Any other tool at the very least has a known quantity of what went into it; Generative AI does not have that benefit and is therefore problematic.

This is a really valid point, and if you had taken the time to actually write this out in your first comment, instead of “Tell that to the guy that was expecting factual information from a hallucination generator!” I wouldn’t have reacted the way I did. And we’d be having a constructive conversation right now. Instead you made a snide remark, seemingly (personal opinion here, I probably can’t read minds) intending it as an invalidation of what I was saying, and then being smug about my taking offence to you not contributing to the conversation and instead being kind of a dick.

atrielienz@lemmy.world on 23 Mar 17:12 collapse

Not everything has to have a direct correlation to what you say in order to be valid or add to the conversation. You have a habit of ignoring parts of the conversation going on around you in order to feel justified in whatever statements you make, regardless of whether or not they are based in fact or speak to the conversation you’re responding to. And you are also doing the exact same thing to me that you’re upset about (because why else would you go to a whole other post to “prove a point” about downvoting?). I’m not going to even try to justify to you what I said in this post or that one because I honestly don’t think you care.

It wasn’t you (you claim), but it could have been and it still might be you on a separate account. I have no way of knowing.

All in all, I said what I said. We will not get the benefits of Generative AI if we don’t 1. deal with the problems that are coming from it, and 2. Stop trying to shoehorn it into everything. And that’s the discussion that’s happening here.

Sturgist@lemmy.ca on 23 Mar 18:02 collapse

because why else would you go to a whole other post to “prove a point” about downvoting?
It wasn’t you (you claim)

I do claim. I have an alt, didn’t downvote you there either. Was just pointing out that you were also making assumptions. And it’s all comments in the same thread, hardly me going to an entirely different post to prove a point.

We will not get the benefits of Generative AI if we don’t 1. deal with the problems that are coming from it, and 2. Stop trying to shoehorn it into everything. And that’s the discussion that’s happening here.

I agree. And while I personally feel like there’s already room for it in some people’s workflow, it is very clearly problematic in many ways. As I had pointed out in my first comment.

I’m not going to even try to justify to you what I said in this post or that one because I honestly don’t think you care.

I do actually! Might be hard to believe, but I reacted the way I did because I felt your first comment was reductive, and intentionally trying to invalidate and derail my comment without actually adding anything to the discussion. That made me angry because I want a discussion. Not because I want to be right, and fuck you for thinking differently.

If you’re willing to talk about your views and opinions, I’d be happy to continue talking. If you’re just going to assume I don’t care, and don’t want to hear what other people think…then just block me and move on. 👍

riskable@programming.dev on 23 Mar 15:14 next collapse

They’re not illegally harvesting anything. Copyright law is all about distribution. As much as everyone loves to think that when you copy something without permission you’re breaking the law the truth is that you’re not. It’s only when you distribute said copy that you’re breaking the law (aka violating copyright).

All those old school notices (e.g. “FBI Warning”) are 100% bullshit. Same for the warning the NFL spits out before games. You absolutely can record it! You just can’t share it (or show it to more than a handful of people but that’s a different set of laws regarding broadcasting).

I download AI (image generation) models all the time. They range in size from 2GB to 12GB. You cannot fit the petabytes of data they used to train the model into that space. No compression algorithm is that good.

The same is true for LLMs, RVC (audio) models, and similar models/checkpoints. I mean, think about it: if AI models were illegally distributing millions of copyrighted works to end users, they’d have to be including it all in those files somehow.
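
To put rough numbers on that argument, here’s a minimal back-of-envelope sketch (the petabyte figure is an assumption for illustration, not a measured value):

```python
# Back-of-envelope: could a 12 GB model literally "contain" its training set?
model_size_gb = 12
training_data_pb = 1                      # assume ~1 PB of training data (illustrative)

training_data_gb = training_data_pb * 1_000_000   # 1 PB = 1,000,000 GB
ratio = training_data_gb / model_size_gb

print(f"required lossless 'compression' ratio: {ratio:,.0f}:1")
# ~83,333:1 -- typical lossless text compressors manage roughly 3:1 to 10:1,
# so the weights cannot be a verbatim copy of the training data.
```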

Instead of thinking of an AI model like a collection of copyrighted works think of it more like a rough sketch of a mashup of copyrighted works. Like if you asked a person to make a Godzilla-themed My Little Pony and what you got was that person’s interpretation of what Godzilla combined with MLP would look like. Every artist would draw it differently. Every author would describe it differently. Every voice actor would voice it differently.

Those differences are the equivalent of the random seed provided to AI models. If you throw something at a random number generator enough times you could–in theory–get the works of Shakespeare. Especially if you ask it to write something just like Shakespeare. However, that doesn’t mean the AI model literally copied his works. It’s just making its best guess (it’s literally guessing! That’s how it works!).

natecox@programming.dev on 23 Mar 16:02 next collapse

The problem with being like… super pedantic about definitions, is that you often miss the forest for the trees.

Illegal or not, seems pretty obvious to me that people saying illegal in this thread and others probably mean “unethically”… which is pretty clearly true.

riskable@programming.dev on 23 Mar 17:33 collapse

I wasn’t being pedantic. It’s a very fucking important distinction.

If you want to say “unethical” you say that. Law is an orthogonal concept to ethics. As anyone who’s studied the history of racism and sexism would understand.

Furthermore, it’s not clear that what Meta did actually was unethical. Ethics is all about how human behavior impacts other humans (or other animals). If a behavior has a direct negative impact, it’s considered unethical. If it has no impact or a positive impact, it’s ethical.

What impact did OpenAI, Meta, et al have when they downloaded these copyrighted works? They were not read by humans–they were read by machines.

From an ethics standpoint that behavior is moot. It’s the ethical equivalent of trying to measure the environmental impact of a bit traveling across a wire. You can go deep down the rabbit hole and calculate the damage caused by mining copper and laying cables but that’s largely a waste of time because it completely loses the narrative that copying a billion books/images/whatever into a machine somehow negatively impacts humans.

It is not the copying of this information that matters. It’s the impact of the technologies they’re creating with it!

That’s why I think it’s very important to point out that copyright violation isn’t the problem in these threads. It’s a path that leads nowhere.

selokichtli@lemmy.ml on 23 Mar 19:54 collapse

Just so you know, still pedantic.

natecox@programming.dev on 23 Mar 23:19 collapse

The irony of choosing the most pedantic way of saying that they’re not pedantic is pretty amusing though.

Gerudo@lemm.ee on 23 Mar 17:59 next collapse

The issue I see is that they are using the copyrighted data, then making money off that data.

riskable@programming.dev on 23 Mar 20:52 collapse

…in the same way that someone who’s read a lot of books can make money by writing their own.

blind3rdeye@lemm.ee on 24 Mar 11:10 next collapse

Do you know someone who’s read a billion books and can write a new (trashy) book in 5 mins?

Vespair@lemm.ee on 24 Mar 15:18 collapse

No, but humans have differences in scale also. Should a person gifted with hyper-fast reading and writing ability be given less opportunity than a writer who takes a year to read a book and a decade to write one? Imo if the argument comes down to scale, it’s kind of a shitty argument. Is the underlying principle faulty or not?

blind3rdeye@lemm.ee on 25 Mar 05:19 collapse

Part of my point is that a lot of everyday rules do break down at large scale. Like, ‘drink water’ is good advice - but a person can still die from drinking too much water. And having a few people go for a walk through a forest is nice, but having a million people go for a walk through a forest is bad. And using a couple of quotes from different sources to write an article for a website is good; but using thousands of quotes in an automated method doesn’t really feel like the same thing any more.

That’s what I’m saying. A person can’t physically read billions of books, or do the statistical work to put them together to create a new piece of work from them. And since a person cannot do that, no law or existing rule currently takes that possibility into account. So I don’t think we can really say that a person is ‘allowed to’ do that. Rather, it’s just an undefined area. A person simply cannot physically do it, and so the rules don’t have to consider it. On the other hand, computer systems can now do it. And so rather than pointing to old laws, we have to decide as a society whether we think that’s something we are ok with.

I don’t know what the ‘best’ answer is, but I do think we should at least stop to think about it carefully; because there are some clear downsides that need to be considered - and probably a lot of effects that aren’t as obvious which should also be considered!

Vittelius@feddit.org on 24 Mar 22:08 collapse

I hate to be the one to break it to you but AIs aren’t actually people. Companies claiming that they are “this close to AGI” doesn’t make it true.

The human brain is an exception to copyright law. Outsourcing your thinking to a machine that doesn’t actually think makes this something different and therefore should be treated differently.

Mavvik@lemmy.ca on 23 Mar 18:24 collapse

This is an interesting argument that I’ve never heard before. Isn’t the question more about whether ai generated art counts as a “derivative work” though? I don’t use AI at all but from what I’ve read, they can generate work that includes watermarks from the source data, would that not strongly imply that these are derivative works?

riskable@programming.dev on 23 Mar 20:59 collapse

If you studied loads of classic art then started making your own would that be a derivative work? Because that’s how AI works.

The presence of watermarks in output images is just a side effect of the prompt and its similarity to training data. If you ask for a picture of an Olympic swimmer wearing a purple bathing suit and it turns out that only a hundred or so images in the training data match that sort of image–and most of them included a watermark–you can end up with a kinda-sorta similar watermark in the output.

It is absolutely 100% evidence that they used watermarked images in their training. Is that a problem, though? I wouldn’t think so since they’re not distributing those exact images. Just images that are “kinda sorta” similar.

If you try to get an AI to output an image that matches someone else’s image nearly exactly… is that the fault of the AI or the end user, specifically asking for something that would violate another’s copyright (with a derivative work)?

Prandom_returns@lemm.ee on 24 Mar 02:10 collapse

Sounds like a load of techbro nonsense.

By that logic mirroring an image would suffice to count as derivative work since it’s “kinda sorta similar”. It’s not the original, and 0% of pixels match the source.

“And the machine, it learned to flip the image by itself! Like a human!”

It’s a predictive keyboard on steroids; let’s not pretend that it can create anything but noise with no input.

kibiz0r@midwest.social on 23 Mar 16:33 next collapse

Well, the harvesting isn’t illegal (yet), and I think it probably shouldn’t be.

It’s scraping, and it’s hard to make that part illegal without collateral damage.

But that doesn’t mean we should do nothing about these AI fuckers.

In the words of Cory Doctorow:

Web-scraping is good, actually.

Scraping against the wishes of the scraped is good, actually.

Scraping when the scrapee suffers as a result of your scraping is good, actually.

Scraping to train machine-learning models is good, actually.

Scraping to violate the public’s privacy is bad, actually.

Scraping to alienate creative workers’ labor is bad, actually.

We absolutely can have the benefits of scraping without letting AI companies destroy our jobs and our privacy. We just have to stop letting them define the debate.

rottingleaf@lemmy.world on 23 Mar 16:47 next collapse

And also it’s using machines to catch up to living creation and evolution, badly.

A bit similar to how the Soviet system was trying to catch up to the in-no-way-virtuous, but living and vibrant, Western societies.

That’s expensive, and that’s bad, and that’s inefficient. The only subjective advantage is that power is all it requires.

index@sh.itjust.works on 23 Mar 18:12 next collapse

We spend energy on the most useless shit; why are people suddenly using it as an argument against AI? Did you ever see someone complaining about Pixar wasting energy to render their movies? Or 3D studios rendering TV ads?

Sl00k@programming.dev on 23 Mar 18:38 next collapse

I see the “AI is using up massive amounts of water” being proclaimed everywhere lately, however I do not understand it, do you have a source?

My understanding is this probably stems from people misunderstanding data center cooling systems. Most of these systems are closed loop so everything will be reused. It makes no sense to “burn off” water for cooling.

lime@feddit.nu on 23 Mar 19:17 collapse

data centers are mainly air-cooled, and two innovations contribute to the water waste.

the first one was “free cooling”, where instead of using a heat exchanger loop you just blow (filtered) outside air directly over the servers and out again, meaning you don’t have to “get rid” of waste heat, you just blow it right out.

the second one was increasing the moisture content of the air on the way in with what is basically giant carburettors in the air stream. the wetter the air, the more heat it can take from the servers.

so basically we now have data centers designed like cloud machines.

Edit: Also, apparently the water they use becomes contaminated and they use mainly potable water. here’s a paper on it
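
For a sense of scale, here’s a minimal sketch of the physics (idealized assumptions; real facilities mix evaporative and conventional cooling):

```python
# How much water does evaporative cooling consume per kWh of server heat?
# Idealized: assumes ALL heat is rejected by evaporation.
LATENT_HEAT_MJ_PER_KG = 2.26   # energy absorbed evaporating 1 kg of water (MJ)
MJ_PER_KWH = 3.6               # 1 kWh = 3.6 MJ

litres_per_kwh = MJ_PER_KWH / LATENT_HEAT_MJ_PER_KG   # 1 kg of water ~ 1 litre
print(f"~{litres_per_kwh:.1f} L of water evaporated per kWh of heat")
# ~1.6 L/kWh: a 100 MW facility cooled this way would evaporate
# on the order of 160,000 litres every hour.
```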

Aceticon@lemmy.dbzer0.com on 24 Mar 10:25 collapse

Also, the energy for those datacenters has to come from somewhere, and non-renewable options (gas, oil, nuclear generation) also use a lot of water, both as part of the generation process itself (they all rely on using the fuel to generate the steam that powers the turbines which generate the electricity) and for cooling.

lime@feddit.nu on 24 Mar 10:36 collapse

steam that runs turbines tends to be recirculated. that’s already in the paper.

wewbull@feddit.uk on 23 Mar 21:30 next collapse

Oh, and it also hallucinates.

Oh, and people believe the hallucinations.

Aceticon@lemmy.dbzer0.com on 24 Mar 10:21 next collapse

It varies massively depending on the ML.

For example, things like voice generation or object recognition can absolutely be done with entirely legit training datasets - literally pay a bunch of people to read some texts and you can train a voice generation engine with it. The work in object recognition is mainly tagging what’s in the images, on top of a ton of easily made images of things - a researcher can literally go around taking photos to make their dataset.

Image generation, on the other hand, not so much - you can only go so far with the plain photos a researcher can go around and take on the street, so they tend to rely a lot on the artistic work of people who have never authorized the use of their work to train them. And LLMs clearly cannot be done without scraping billions of pieces of actual work from billions of people.

Of course, what we tend to talk about here when we say “AI” is LLMs, which are IMHO the worst of the bunch.

BlameTheAntifa@lemmy.world on 29 Mar 05:42 collapse

In a Venn Diagram, I think your “illegally harvesting” complaint is a circle fully inside the “owned by the same few people” circle. AI could have been an open, community-driven endeavor, but now it’s just mega-rich corporations stealing from everyone else. I guess that’s true of literally everything, not just AI, but you get my point.

RadicalEagle@lemmy.world on 23 Mar 13:44 next collapse

I’d say the biggest problem with AI is that it’s being treated as a tool to displace workers, but there is no system in place to make sure that that “value” (I’m not convinced commercial AI has done anything valuable) created by AI is redistributed to the workers that it has displaced.

protist@mander.xyz on 23 Mar 14:04 next collapse

Welcome to every technological advancement ever applied to the workforce

pennomi@lemmy.world on 23 Mar 14:22 collapse

The system in place is “open weights” models. These AI companies don’t have a huge head start on the publicly available software, and if the value is there for a corporation, most any savvy solo engineer can slap together something similar.

TheMightyCat@lemm.ee on 23 Mar 13:53 next collapse

No?

Anyone can run an AI, even on the weakest hardware; there are plenty of small open models for this.

Training an AI requires very strong hardware; however, this is not an impossible hurdle, as the models on Hugging Face show.
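
As a minimal sketch of how low the barrier is (the model name here is just one example of a small open model; swap in anything that fits your hardware):

```python
# Run a ~1B-parameter open-weights chat model locally; works on CPU.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # example small open model
)

out = generator("The main benefit of open-weights models is", max_new_tokens=50)
print(out[0]["generated_text"])
```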

nalinna@lemmy.world on 23 Mar 14:13 next collapse

But the people with the money for the hardware are the ones training it to put more money in their pockets. That’s mostly what it’s being trained to do: make rich people richer.

Melvin_Ferd@lemmy.world on 23 Mar 14:21 next collapse

We shouldn’t do anything ever because poors

riskable@programming.dev on 23 Mar 15:20 next collapse

This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.

I’ve downloaded several academic models, and all commercial models and AI tools are based on all that public research.

I run AI models locally on my PC and you can too.

nalinna@lemmy.world on 24 Mar 12:39 collapse

That is entirely true and one of my favorite things about it. I just wish there was a way to nurture more of that and less of the, “Hi, I’m Alvin and my job is to make your Fortune-500 company even more profitable…the key is to pay people less!” type of AI.

TheMightyCat@lemm.ee on 23 Mar 17:24 collapse

But you can make this argument for anything that is used to make rich people richer. Even something as basic as pen and paper is used every day to make rich people richer.

Why attack the technology if it’s the rich people you are against, and not the technology itself?

nalinna@lemmy.world on 24 Mar 12:47 collapse

It’s not even the people; it’s their actions. If we could figure out how to regulate its use so its profit-generation capacity doesn’t build on itself exponentially at the expense of the fair treatment of others, and instead actively proliferate the models that help people, I’m all for it, for the record.

[deleted] on 23 Mar 14:13 next collapse

.

CodeInvasion@sh.itjust.works on 23 Mar 14:20 collapse

Yah, I’m an AI researcher, and with the weights released for DeepSeek anybody can run an enterprise-level AI assistant. To run the full model natively, it does require $100k in GPUs, but if one had that hardware it could easily be fine-tuned with something like LoRA for almost any application. Then that model can be distilled and quantized to run on gaming GPUs.

It’s really not that big of a barrier. Yes, $100k in hardware is, but from a non-profit entity perspective that is peanuts.

Also, adding a vision encoder for images to DeepSeek would not be theoretically that difficult, for the same reason. In fact, I’m working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying it’s the same first-layer vision encoder, with textual chain-of-thought tokens read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove that, I would be very interested to know!)
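
For anyone curious what “fine-tuned with something like LoRA” looks like in practice, here’s a minimal sketch using the peft library (the model name and hyperparameters are illustrative assumptions, not a recipe from this comment):

```python
# LoRA: freeze the base model, train small low-rank adapter matrices instead.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("deepseek-ai/deepseek-llm-7b-base")

config = LoraConfig(
    r=8,                                   # rank of the adapter matrices
    lora_alpha=16,                         # adapter scaling factor
    target_modules=["q_proj", "v_proj"],   # adapt the attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
# Train `model` as usual; only the adapters receive gradients, so this fits
# on far more modest hardware than full fine-tuning.
```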

riskable@programming.dev on 23 Mar 15:24 next collapse

Would you say your research is evidence that the o1 model was built using data/algorithms taken from OpenAI via industrial espionage (like Sam Altman is claiming without evidence)? Or is it just likely that they came upon the same logical solution?

Not that it matters, of course! Just curious.

CodeInvasion@sh.itjust.works on 23 Mar 17:14 collapse

Well, OpenAI has clearly scraped everything that is scrapeable on the internet, copyrights be damned. I haven’t actually used DeepSeek enough to make a strong analysis, but I suspect Sam is just mad they got beat at their own game.

The real innovation that isn’t commonly talked about is the invention of Multi-head Latent Attention (MLA), which is what drives the dramatic performance increases in both memory (59x) and computation (6x) efficiency. It’s an absolute game changer, and I’m surprised OpenAI hasn’t released their own MLA model yet.
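
To illustrate the core idea, here’s my own toy sketch of low-rank KV-cache compression (not DeepSeek’s actual implementation, which adds further tricks to reach the numbers above; all dimensions are invented):

```python
import torch
import torch.nn as nn

# Toy version of the latent-attention idea: cache one small shared latent
# per token instead of full per-head keys and values.
d_model, d_latent, n_heads, d_head = 4096, 512, 32, 128

down = nn.Linear(d_model, d_latent, bias=False)           # compress
up_k = nn.Linear(d_latent, n_heads * d_head, bias=False)  # expand to keys
up_v = nn.Linear(d_latent, n_heads * d_head, bias=False)  # expand to values

x = torch.randn(1, 10, d_model)   # a batch of 10 tokens
latent_cache = down(x)            # (1, 10, 512): this is all we store

k = up_k(latent_cache)            # keys reconstructed on the fly
v = up_v(latent_cache)            # values reconstructed on the fly

full_cache = 2 * n_heads * d_head                        # floats/token, standard KV cache
print(f"cache is {full_cache / d_latent:.0f}x smaller")  # 16x in this toy
```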

While on the subject of stealing data, I have been of the strong opinion that there is no such thing as copyright when it comes to training data. Humans learn by example, and all works are derivative of those that came before, at least to some degree. Thus, if humans can’t be accused of using copyrighted text to learn how to write, then AI shouldn’t be either. Just my hot take that I know is controversial outside of academic circles.

cyd@lemmy.world on 24 Mar 02:52 collapse

It’s possible to run the big Deepseek model locally for around $15k, not $100k. People have done it with 2x M4 Ultras, or the equivalent.

Though I don’t think it’s a good use of money personally, because the requirements are dropping all the time. We’re starting to see some very promising small models that use a fraction of those resources.

Grandwolf319@sh.itjust.works on 23 Mar 14:02 next collapse

The biggest problem with AI is that it’s the brute-force solution to complex problems.

Instead of trying to figure out what’s the most power efficient algorithm to do artificial analysis, they just threw more data and power at it.

Besides the fact of how often it’s wrong: by definition, it won’t ever be as accurate or efficient as doing actual thinking.

It’s the solution you come up with the last day before the project is due cause you know it will technically pass and you’ll get a C.

TheBrideWoreCrimson@sopuli.xyz on 23 Mar 17:44 collapse

It’s moronic. Currently, decision makers don’t really understand what to do with AI and how it will realistically evolve in the coming 10-20 years. So it’s getting pushed even into environments with 0-error policies, leading to horrible results and any time savings are completely annihilated by the ensuing error corrections and general troubleshooting. But maybe the latter will just gradually be dropped and customers will be told to just “deal with it,” in the true spirit of enshittification.

kibiz0r@midwest.social on 23 Mar 14:11 next collapse

Idk if it’s the biggest problem, but it’s probably top three.

Other problems could include:

  • Power usage
  • Adding noise to our communication channels
  • AGI fears if you buy that (I don’t personally)
pennomi@lemmy.world on 23 Mar 14:23 next collapse

Dead Internet theory has never been a bigger threat. I believe that’s the number one danger - endless quantities of advertising and spam shoved down our throats from every possible direction.

Fingolfinz@lemmy.world on 23 Mar 16:12 collapse

We’re pretty close to it; most videos on YouTube and websites that exist are purely just for some advertiser to pay that person for a review or recommendation.

Sl00k@programming.dev on 23 Mar 18:25 next collapse

Power usage

I’m generally a huge eco guy, but on power usage particularly I view this largely as a government failure. We have had incredible energy resources available that the government has chosen not to implement or has effectively dismantled.

It reminds me a lot of how recycling has been pushed so hard onto the general public instead of government laws on plastic usage and waste disposal.

It’s always easier to wave your hands and blame “society” than it is to hold the actual wealthy and powerful accountable.

JayDee@lemmy.sdf.org on 23 Mar 20:11 next collapse

Could also put up:

  • Massive numbers of people are exploited in order to train various AI systems.
  • Machine learning apps that create text or images from prompts are supposed to be supplementary, but businesses are actively trying to replace their workers with this software.
  • Machine learning image generation currently has diminishing returns for training as we pump exponentially more content into the models.
  • Machine-learning-generated text and images self-poison their generators’ sample pool, greatly diminishing the ability of these systems to learn from real-world content.

There’s actually a much longer list if we expand to talking about other AI systems, like the robot systems we’re currently training for use in automated warfare. There’s also the angle of these image and text generation systems being used for political manipulation and scams. There are a lot of terrible problems created by this tech.

cyd@lemmy.world on 24 Mar 02:46 collapse

Power usage probably won’t be a major issue; the main take-home message of the DeepSeek brouhaha is that training and inference can be done much more efficiently than we had thought (our estimates had been based on well-funded Western companies that didn’t have to bother with optimization).

AI spam is an annoyance, but it’s not really AI-specific but the continuation of a trend; the Internet was already drowning in human-created slop before LLMs came along. At some point, we will probably all have to rely on AI tools to filter it out. This isn’t something that can be unwound, any more than you can undo computers being able to play chess well.

DarkCloud@lemmy.world on 23 Mar 14:22 next collapse

Like Sam Altman who invests in Prospera, a private “Start-up City” in Honduras where the board of directors pick and choose which laws apply to them!

The switch to Techno-Feudalism is progressing far too much for my liking.

nickwitha_k@lemmy.sdf.org on 23 Mar 22:31 collapse

Techno-Feudalism

I’ll say it, yet again. It’s just feudalism. “Techno-Feudalism” has nothing different enough to it to differentiate it as even a sub-type of feudalism. It’s just the same thing all over again, using technological advances to improve the ability to monitor and impose control over the populace. Historical feudalists also leveraged technology to cement their rule (plate armor, cavalry, crossbows, cannon, mills, control of literacy, etc).

DarkCloud@lemmy.world on 24 Mar 01:39 next collapse

Techno-Feudalism is a specific idea from Yanis Varoufakis, about places like Amazon, eBay, AliExpress, Steam, Facebook, even YouTube to some extent. It has to do with the marketplace controlling which prices are promoted to buyers and sellers, and is about price fixing and capturing industries that the bulk of the population requires to do commerce.

This is a very important concept to note and understand because it relates to the end of two party Capitalism (where buyers and sellers negotiate prices with each other directly).

So no, the use of feudalism isn’t to indicate something about old-school mechanisms of war, weaponry, brutality, or repression. It’s a reference to the role of economic serfdom and the economic aspects of feudalism, comparing those particular aspects to the modern roles of content creators, drop shippers, and consumers, all of whom are forced through the economic lens of markets which are owned or controlled by billionaires who have captured/own these required marketplaces.

nickwitha_k@lemmy.sdf.org on 24 Mar 09:02 collapse

I’ve read Varoufakis and don’t find his claim that it’s anything new, beyond the technologies used, to be at all compelling.

So no, the use of fuedalism isn’t to indicate something about old school mechanisms of war, weaponry, brutality, or repression. It’s a reference to the role of economic serfdom and the economic aspects of fuedalism.

Teotihuacan was the center of an empire, but it had no military.

What I’m saying is that they even go with divine mandate at this point. Just because they’re not jousting and are using abstractions enabled by modern technology instead of castles doesn’t make it fundamentally a different, new thing. Commerce, and who could engage in it, was heavily regulated by feudal lords and the organizations that they ran or allowed to run.

It’s literally just the same shit with better technology. The far-right isn’t that creative.

DarkCloud@lemmy.world on 24 Mar 10:07 collapse

Oh it’s the same shit as feudalism, but with technology… Thanks for letting me know that’s what Techno-Feudalism means. So glad we had this enlightening conversation to figure out those two words. I guess we could add “global” to the front of it so you know it’s not just happening in a castle in 14th century Europe, but all across the planet.

Like, how many castles were in Europe? Okay, compare that to how many Amazons there are? It’s not the same thing at all

Sorry, I don’t have time for this mind dulling discussion.

“Guns are just metal sling shots with technology! Bullets should be called rocks! They’re just rocks! It’s no different than throwing a snow ball which is why I should be allowed down range at the shooting range!”

“War is just a big fist fight! I wanna talk about swords!”

Yah. Bye!

nickwitha_k@lemmy.sdf.org on 24 Mar 15:59 collapse

Oh it’s the same shit as feudalism, but with technology… Thanks for letting me know that’s what Techno-Feudalism means.

Understanding the meaning and context of terms is very important.

… I guess we could add “global” to the front of it so you know it’s not just happening in a castle in 14th century Europe, but all across the planet.

I find “neo-feudalism” more appropriate. The previous incarnation already spanned the known world at the time.

Like, how many castles were in Europe? Okay, compare that to how many Amazons there are? It’s not the same thing at all

That’s really a comparison that makes me think that, perhaps, learning more about feudal history would do us all good. A more apt comparison would be “how many Vaticans were there?” (depending on the time period, two).

Rome was the seat of power through much of feudalism in the Common Era in Europe. Castles were extensions of the theocratic empire centered there, providing physical and visual/psychological enforcement of that power. Despite all of the war and megalomaniacal bickering, the feudal lords and kings all had the same boss.

There’s less difference than you apparently think.

Sorry, I don’t have time for this mind dulling discussion.

I’m sorry that you don’t know enough about history to understand how nearly identical the two are and didn’t mean to cause distress, not knowing how attached to the term you were.

G’luck.

conicalscientist@lemmy.world on 24 Mar 13:21 collapse

Attaching “tech” to everything makes it more palatable. Desirable even. It masks the fact that feudal lords are reinventing everything but with “tech”.

nickwitha_k@lemmy.sdf.org on 24 Mar 15:25 collapse

Exactly. And it makes it seem more special or at least a new idea. It’s not. We already have historical knowledge of what has worked in throwing off the shackles of monarchy and what hasn’t.

beto@lemmy.studio on 23 Mar 14:28 next collapse

And yet, he released his latest album exclusively on Apple Music.

kibiz0r@midwest.social on 23 Mar 16:19 collapse

[image]

beto@lemmy.studio on 23 Mar 21:51 collapse

The difference is that he has the choice of not participating in that model, obviously.

ininewcrow@lemmy.ca on 23 Mar 14:30 next collapse

Technological development and the future of our civilization is in control of a handful of idiots.

frog_brawler@lemmy.world on 23 Mar 15:16 next collapse

COO > Return.

HANN@sh.itjust.works on 23 Mar 15:26 next collapse

Ollama and stable diffusion are free open source software. Nobody is forcing anybody to use chatGPT
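
For example, here’s a minimal sketch of talking to a locally running Ollama server from Python (assumes you’ve already pulled a model, e.g. with `ollama pull llama3.2`; Ollama’s API listens on port 11434 by default):

```python
# Query a local Ollama server; no cloud service involved.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3.2",                       # any model you've pulled locally
    "prompt": "Why do open-weights models matter?",
    "stream": False,
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```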

afk_strats@lemmy.world on 23 Mar 23:16 collapse

Ollama is FOSS; SD has a proprietary but permissive, source-available license, which is not what most people would associate with “open source”.

HANN@sh.itjust.works on 24 Mar 18:48 collapse

Fair, it may not be strictly FOSS but I think my point still stands. If people are worried about AI being owned by “the elite” they can just run Ollama.

Grimy@lemmy.world on 23 Mar 15:41 next collapse

AI has a vibrant open source scene and is definitely not owned by a few people.

A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It’s a shame to see so many actually cheering them on.

cyd@lemmy.world on 24 Mar 02:49 collapse

So long as there are big players releasing open weights models, which is true for the foreseeable future, I don’t think this is a big problem. Once those weights are released, they’re free forever, and anyone can fine-tune based on them, or use them to bootstrap new models by distillation or synthetic RL data generation.

MyOpinion@lemm.ee on 23 Mar 16:45 next collapse

The problem with AI is that it pirates everyone’s work and then repackages it as its own, enriching the people that did not create the copyrighted work.

lobut@lemmy.ca on 23 Mar 16:55 next collapse

I mean, it’s our work; the result should belong to the people.

piecat@lemmy.world on 23 Mar 22:24 next collapse

This is where “universal basic income” comes into play

Aceticon@lemmy.dbzer0.com on 24 Mar 10:32 next collapse

More broadly, I would expect UBI to trigger a golden age of invention and artistic creation because a lot of people would love to spend their time just creating new stuff without the need to monetise it but can’t under the current system, and even if a lot of that would be shit or crazily niche, the more people doing it and the freer they are to do it, the more really special and amazing stuff will be created.

lemminator@lemmy.today on 24 Mar 19:54 collapse

I don’t know nearly enough history to be an expert on this subject, but I’ve heard that one of the causes of the Enlightenment was that peasants and the poor were able to afford to spend time learning and creating, rather than subsistence farming.

Blackmist@feddit.uk on 24 Mar 10:55 collapse

Unfortunately one will not lead to the other.

It will lead to the plot of Elysium.

Aux@feddit.uk on 23 Mar 22:56 collapse

That’s what all artists have done since the dawn of ages.

interdimensionalmeme@lemmy.ml on 23 Mar 17:33 next collapse

Why is this message not being drilled into the heads of everyone? Sam Altman: go to prison or publish your stolen weights.

index@sh.itjust.works on 23 Mar 18:08 next collapse

No, Brian Eno, there are many open LLMs already. The problem is people like you who have accumulated too much and now control all the markets/platforms/media.

PostiveNoise@kbin.melroy.org on 23 Mar 18:53 collapse

Totally right that there are already very impressive open source AI projects.

But Eno doesn't control diddly, and it's odd that you think he does. And I assume he is decently well off, but I doubt he is super rich by most people's standards.

index@sh.itjust.works on 23 Mar 19:00 collapse

Majors

SufferingSteve@feddit.nu on 23 Mar 18:24 next collapse

Reading the other comments, it seems there are more than one problem with AI. Probably even some perks as well.

Shucks, another one of these complex issues, huh. Weird how everything you learn something about turns out to have these nuances to it.

C45513@lemm.ee on 23 Mar 22:58 collapse

most of the replies can be summarized as “the biggest problem with AI is that we live under capitalism”

pyre@lemmy.world on 23 Mar 18:50 next collapse

wrong. it’s that it’s not intelligent. if it’s not intelligent, nothing it says is of value. and it has no thoughts, feelings or intent. therefore it can’t be artistic. nothing it “makes” is of value either.

PostiveNoise@kbin.melroy.org on 23 Mar 18:54 next collapse

Either the article editing was horrible, or Eno is wildly uninformed about the world. Creation of AIs is NOT the same as social media. You can't blame a hammer for some evil person using it to hit someone in the head, and there is more to 'hammers' than just assaulting people.

andros_rex@lemmy.world on 23 Mar 22:18 collapse

Eno does strike me as the kind of person who could use AI effectively as a tool for making music. I don’t think he’s team “just generate music with a single prompt and dump it onto YouTube” (AI has ruined study lo fi channels) - the stuff at the end about distortion is what he’s interested in experimenting with.

There is a possibility for something interesting and cool there (I think about how Chuck Person’s Eccojams is just short loops of random songs repeated in different ways, but it’s an absolutely revolutionary album), even if in effect all that’s going to happen is music execs thinking they can replace songwriters and musicians with “hey siri, generate a pop song with a catchy chorus” while talentless hacks inundate YouTube and Bandcamp with shit.

PostiveNoise@kbin.melroy.org on 23 Mar 23:44 collapse

Yeah, Eno actually has made a variety of albums and art installations using simple generative AI for musical decisions, although I don't think he does any advanced programming himself. That's why it's really odd to see comments in an article that imply he is really uninformed about AI...he was pioneering generative music 20-30 years ago.

I've come to realize that there is a huge amount of misinformation about AI these days, and the issue is compounded by there being lots of clumsy, bad early AI works in various art fields, web journalism etc. I'm trying to cut back on discussing AI for these reasons, although as an AI enthusiast, it's hard to keep quiet about it sometimes.

jackalope@lemmy.ml on 24 Mar 00:32 collapse

Eno is more a traditional algorist than “AI” (by which people generally mean neural networks)

PostiveNoise@kbin.melroy.org on 24 Mar 02:03 next collapse

Sure. I worked in the game industry and sometimes AI can mean 'pick a random number if X occurs' or something equally simple, so I'm just used to the term being used a few different ways.

jackalope@lemmy.ml on 24 Mar 15:10 collapse

Totally fair

andros_rex@lemmy.world on 24 Mar 13:27 collapse

I could see him using neural networks to generate and intentionally pick and loop short bits with weird anomalies or glitchy sounds. That’s the route I’d like AI in music to go, so maybe that’s what I’m reading in, but it fits Eno’s vibe and philosophy.

AI as a tool not to replace other forms of music, but doing things like training it on contrasting music genres or self made bits or otherwise creatively breaking and reconstructing the artwork.

John Cage was all about ‘stochastic’ music - composing based on what he divined from the I Ching. There are people who have been kicking around ideas like this for longer than the AI bubble has been around - the big problem will be digging out the good stuff when the people typing “generate a three hour vapor wave playlist” can upload ten videos a day…

canajac@lemmy.ca on 23 Mar 19:27 next collapse

AI will become one of the most important technologies humankind has ever invented. Apply it to healthcare, science, finances, and the world will become a better place, especially in healthcare. Hey artists, writers: you cannot stop intellectual evolution. AI is here to stay. All we need is a proven way to differentiate real art from AI art. An invisible watermark that can be scanned to see its true “raison d’être”. Sorry for going off topic, but I agree that AI should be more open to verification for using copyrighted material. Don’t expect compensation though.

jjjalljs@ttrpg.network on 23 Mar 23:13 next collapse

Apply it to healthcare, science, finances, and the world will become a better place, especially in healthcare.

That’s all kind of moot if we continue down the capitalist hellscape express. What good is an AI that can diagnose cancer if most people can’t afford access? What good is AI writing novels if our homes are destroyed by climate change induced disasters?

Those problems are mostly political, and AI isn’t going to fix them. The people that probably could be replaced with AI, the shitty “leaders” and such, are not going to voluntarily step down.

Ledericas@lemm.ee on 24 Mar 08:44 collapse

None of it is useful to those people right now.

iAvicenna@lemmy.world on 23 Mar 21:43 next collapse

like most of the money

WrenFeathers@lemmy.world on 23 Mar 22:06 next collapse

The biggest problem with AI is the damage it’s doing to human culture.

rottingleaf@lemmy.world on 24 Mar 15:03 collapse

While at the same time not solving any of its stated goals.

It’s a diversion. Its purpose is to divert resources and attention from any real progress in computing.

RememberTheApollo_@lemmy.world on 23 Mar 23:20 next collapse

And those people want to use AI to extract money and to lay off people in order to make more money.

That’s “guns don’t kill people” logic.

Yeah, the AI absolutely is a problem. For those reasons along with it being wrong a lot of the time as well as the ridiculous energy consumption.

magic_smoke@lemmy.blahaj.zone on 24 Mar 00:05 next collapse

The real issues are capitalism and the lack of green energy.

If the arts were well funded, if people were given healthcare and UBI, if we had, at the very least, switched to nuclear like we should’ve decades ago, we wouldn’t be here.

The issue isn’t a piece of software.

gian@lemmy.grys.it on 24 Mar 17:01 collapse

Yeah, the AI absolutely is a problem.

AI is not a problem by itself; the problem is that most of the people who make decisions in the workplace about these things do not understand what they are talking about, and even less what something is capable of.

My impression is that AI now is what blockchain was some years ago: the solution to every problem, which was of course false.

max_dryzen@mander.xyz on 24 Mar 00:53 next collapse

The government likes concentrated ownership because then it has only a few phone calls to make if it wants its bidding done (be it censorship, manipulation, partisan political chicanery, etc.)

futatorius@lemm.ee on 24 Mar 08:35 collapse

And it’s easier to manage and track a dozen bribe checks rather than several thousand.

AbsoluteChicagoDog@lemm.ee on 24 Mar 01:51 next collapse

Same as always. There is no technology capitalism can’t corrupt

futatorius@lemm.ee on 24 Mar 08:34 next collapse

Two intrinsic problems with the current implementations of AI are that they are insanely resource-intensive and require huge training sets. Neither of those is directly a problem of ownership or control, though both favor larger players with more money.

finitebanjo@lemmy.world on 24 Mar 08:49 next collapse

And a third intrinsic problem is that the current models, even with infinite training data, have been proven to never approach human language capability, from papers written by OpenAI in 2020 and DeepMind in 2022, and also a paper by Stanford which proposes that AI simply has no emergent behavior, only convergent behavior.

So yeah. Lots of problems.

andxz@lemmy.world on 24 Mar 11:08 collapse

While I completely agree with you, that is the one thing that could change with just one breakthrough from any of the groups working on exactly that problem.

It’s what happens after that that’s really scary, probably. Perhaps we all go into some utopian AI driven future, but I highly doubt that’s even possible.

frezik@midwest.social on 24 Mar 20:18 collapse

If gigantic amounts of capital weren’t available, then the focus would be on improving the models so they don’t need GPU farms running off nuclear reactors plus the sum total of all posts on the Internet ever.

finitebanjo@lemmy.world on 24 Mar 08:49 next collapse

I don’t really agree that this is the biggest issue; for me, the biggest issue is power consumption.

CitricBase@lemmy.world on 24 Mar 11:17 next collapse

That is a big issue, but excessive power consumption isn’t intrinsic to AI. You can run a reasonably good AI on your home computer.
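
Rough numbers on the “home computer” point (a minimal sketch; the parameter count and quantization level are illustrative assumptions):

```python
# Memory needed just for the weights of a quantized local model.
params = 8e9            # e.g. an 8-billion-parameter open model
bytes_per_weight = 0.5  # 4-bit quantization = half a byte per weight

weights_gb = params * bytes_per_weight / 1e9
print(f"~{weights_gb:.0f} GB of RAM/VRAM for weights")   # ~4 GB
# Plus overhead for activations and the KV cache; still well within
# an ordinary gaming GPU or even plain system RAM.
```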

The AI companies don’t seem concerned about the diminishing returns, though, and will happily spend 1000% more power to gain that last 10% better intelligence. In a competitive market, why wouldn’t they, when power is so cheap?

frezik@midwest.social on 24 Mar 20:20 collapse

Large power consumption only happens because someone is willing to dump lots of capital into it so they can own it.

finitebanjo@lemmy.world on 24 Mar 20:57 collapse

Oh you’re right, let me just tally up all the days where that isn’t the case…

carry the 2…

don’t forget weekends and holidays…

Oh! It’s every single day. It’s just an always and forever problem. Neat.

frezik@midwest.social on 24 Mar 21:03 collapse

It’s nothing of the sort. If nobody had the capital to scale it through more power, then the research would be more focused on making it efficient.

Polderviking@feddit.nl on 24 Mar 10:52 next collapse

That it’s controlled by a few is only a problem if you use it… my issue with it starts before that.

My biggest gripe with AI is the same problem I have with anything crypto: its out-of-control power consumption relative to the problem it solves or the purpose it serves. And, by extension, the fact that nobody with any kind of real political power is addressing this.

Here we are using recycled bags, banning straws, putting explosive refrigerant in fridges and using LED lights in everything, all in the name of the environment, while at the same time in some datacenter they are burning kWh by the bucketload generating pictures of cats in space suits.

Knock_Knock_Lemmy_In@lemmy.world on 24 Mar 12:12 next collapse

My biggest gripe with AI is the same problem I have with anything crypto: its out-of-control power consumption relative to the problem it solves or the purpose it serves.

Don’t throw all crypto under the bus. Only Bitcoin and other proof-of-work protocols are power hungry. 2nd and 3rd generation crypto use mostly proof of stake and ZK-rollups for security. Much more energy efficient.

Polderviking@feddit.nl on 24 Mar 13:47 next collapse

I’m aware of this, but it’s still mostly just something for people to speculate on. Something people buy, sit on, and then hopefully sell at a profit.

Bitcoin was supposed to be a decentralized money alternative, but the number of people actually, legitimately buying things with crypto is negligible. And honestly, even if it did serve its actual purpose, the cumulative power consumption would still be a point of debate.

null@slrpnk.net on 24 Mar 16:06 next collapse

And honestly, even if it did serve its actual purpose, the cumulative power consumption would still be a point of debate.

Yeah, but at that point you’d have to consider it against how much power the traditional banking system uses.

Knock_Knock_Lemmy_In@lemmy.world on 24 Mar 20:16 collapse

Yes, most people buy, sit on, and then hopefully sell at a profit.

However, there are a large number of devs building useful things (supply chain, money transfer, digital identity). Most are as good as, but not yet better than, incumbent solutions.

My main challenge is the energy misconception. The cumulative power of the ethereum network runs on the energy equivalent of a single wind turbine.

bob_lemon@feddit.org on 24 Mar 13:50 collapse

Sure, but despite all the crypto bros’ assurances to the contrary, the only real-world applications for it are buying drugs, paying ransoms and getting scammed. Which means that any non-zero amount of energy is too much energy.

Knock_Knock_Lemmy_In@lemmy.world on 24 Mar 21:02 collapse

Guns0rWeD13@lemmy.world on 24 Mar 12:43 next collapse

Here we are using recycled bags, banning straws, putting explosive refrigerant in fridges and using LED lights in everything

lol, sucker. none of that does shit and industry was already destroying the planet just fine before ai came along.

Polderviking@feddit.nl on 24 Mar 13:49 collapse

Dare I assume you are aware we have “industry” because we consume?

Guns0rWeD13@lemmy.world on 24 Mar 13:52 collapse

yes. we are cancer. i live on as little as possible but i don’t delude myself into thinking my actions have any effect on the whole.

i spent nearly 20 years not using paper towels until i realized how pointless it was. now i throw my trash out the window. we’re all fucked. if we want to change things, there’s only one tool that will fix it. until people realize that, i really don’t fucking care any more.

Polderviking@feddit.nl on 24 Mar 14:03 collapse

now i throw my trash out the window.

You don’t believe not using paper towels was a net positive, so now you choose to create, and by extension live in, a pigsty? I’m not following.

rottingleaf@lemmy.world on 24 Mar 15:00 collapse

Here we are using recycled bags, banning straws, putting explosive refrigerant in fridges and using LED lights in everything, all in the name of the environment, while at the same time in some datacenter they are burning kWh by the bucketload generating pictures of cats in space suits.

That’s, #1, fashion and not about the environment, and #2, fashion promoted because it’s cheaper for the industry.

And yes, power saved somewhere will just be spent elsewhere, cheaper, because that means reduced demand for power (or demand growing not as fast as it otherwise would).

Guns0rWeD13@lemmy.world on 24 Mar 12:44 next collapse

brian eno is cooler than most of you can ever hope to be.

rottingleaf@lemmy.world on 24 Mar 14:56 collapse

Dunno. I’ve tried the generative music part (not LLMs), and I think if I spent a few more years of weekly migraines on that, I’d become better.

Guns0rWeD13@lemmy.world on 24 Mar 21:27 collapse

you mean like in the same way that learning an instrument takes time and dedication?

nialv7@lemmy.world on 24 Mar 13:01 next collapse

That’s… just not true? Current frontier AI models are actually surprisingly diverse: there are a dozen companies from America, Europe, and China releasing competitive models, let alone the countless finetunes created by the community. And many of them you can run entirely on your own hardware, so no one really has control over how they are used. (Not saying that that’s a good thing necessarily, just to point out Eno is wrong.)

KingThrillgore@lemmy.ml on 24 Mar 13:33 next collapse

He’s not wrong.

captain_aggravated@sh.itjust.works on 24 Mar 13:53 next collapse

For some reason the megacorps have got LLMs on the brain, and they’re the worst “AI” I’ve seen. There are other types of AI that are actually impressive, but the “writes a thing that looks like it might be the answer” machine is way less useful than they think it is.

ameancow@lemmy.world on 24 Mar 16:25 collapse

Most LLMs for chat, pictures and clips are magical and amazing, for about 4 to 8 hours of fiddling. Then they lose all entertainment value.

As for practical use, the things can’t do math, so they’re useless at work. I write better emails on my own, so I can’t imagine being so lazy and socially inept that I need help writing an email asking for tech support or outlining an audit report. Sometimes the web summaries save me from clicking a result, but I usually do anyway because the things are so prone to very convincing hallucinations. So yeah, utterly useless in their current state.

I usually get some angsty reply when I say this from some techbro-AI-cultist-singularity-head who starts whinging about how it’s reshaped their entire life, but in some deep niche way that is completely irrelevant to the average working adult.

I have also talked to way too many delusional maniacs who are literally planning for the day an Artificial Super Intelligence is created and the whole world becomes like Star Trek and they personally will become wealthy and have all their needs met. They think this is going to happen within the next 5 years.

frezik@midwest.social on 24 Mar 20:13 collapse

The delusional maniacs are going to be surprised when they ask the Super AI “how do we solve global warming?” and the answer is “build lots of solar, wind, and storage, and change infrastructure in cities to support walking, biking, and public transportation”.

ameancow@lemmy.world on 25 Mar 14:27 collapse

Which is the answer they will get right before sending the AI back for “repairs.”

As we saw with Grok already, several times.

They absolutely adore AI; it makes them feel in touch with the world and able to feel validated, since all it is is a validation machine. They don’t care if it’s right or accurate or even remotely neutral; they want a biased fantasy-crafting system that paints terrible pictures of Donald Trump all ripped and oiled riding on a tank, and they want the AI to say “Look what you made! What a good boy! You did SO good!”

[deleted] on 24 Mar 14:52 next collapse

.

interdimensionalmeme@lemmy.ml on 24 Mar 21:37 collapse

That’s why we need the weights, right now! Before they figure out how to do this. It will happen, but at least we can prevent backsliding from what we have now.

zapzap@lemmings.world on 24 Mar 16:27 next collapse

“Biggest” maybe. But it’s not the only relevant problem. I think AI is gonna pan out like social media did, which is to say it’s gonna be a shit show for society. And that would be the same no matter who owned it.

frezik@midwest.social on 24 Mar 20:16 collapse

Both AI and social media are a shit show because it’s owned by a few people.

Unironically, the best social media is Fetlife. Not that it’s perfect by any means–not by far–but it is designed to facilitate bringing people together.

umbraroze@lemmy.world on 24 Mar 16:31 next collapse

AI business is owned by a tiny group of technobros who have no concern for what they have to do to get the results they want (“fuck the copyright, especially fuck the natural resources”), who want to be personally seen as the saviours of humanity (despite not being the ones who invented and implemented the actual tech) and, like all big-wig biz boys, want all the money.

I don’t have problems with AI tech in principle, but I hate the current business direction and what the AI business encourages people to do and use the tech for.

interdimensionalmeme@lemmy.ml on 24 Mar 21:34 collapse

Well, I’m on board for “fuck intellectual property”. If OpenAI doesn’t publish the weights, then all their datacenters get visited by the killdozer.

Heliumfart@sh.itjust.works on 25 Mar 06:24 next collapse

Reminds me of “Biotech Is Godzilla”. Sepultura version, of course.

umbrella@lemmy.ml on 25 Mar 07:41 collapse

ai excels at some specific tasks. the chatbots they push us to are a gimmick rn.