OpenAI is reportedly going all-in as a for-profit company (mashable.com)
from RmDebArc_5@sh.itjust.works to technology@lemmy.world on 14 Sep 2024 20:29
https://sh.itjust.works/post/25195398

Surprised pikachu face

#technology


dinckelman@lemmy.world on 14 Sep 2024 20:34 next collapse

So what exactly is open about their ai

RmDebArc_5@sh.itjust.works on 14 Sep 2024 20:37 next collapse

It’s called OpenAI because they are open to stealing content to train their AI

dinckelman@lemmy.world on 14 Sep 2024 20:44 next collapse

Can’t argue with objective truth

abobla@lemm.ee on 14 Sep 2024 20:50 next collapse

shit, bro. Deep

irreticent@lemmy.world on 15 Sep 2024 03:03 collapse

“Don’t tell me what to do, bro!”

seaQueue@lemmy.world on 14 Sep 2024 20:55 collapse

“I made this!”

billiam0202@lemmy.world on 14 Sep 2024 21:39 collapse

You made this?

…I made this!

jaybone@lemmy.world on 15 Sep 2024 08:25 next collapse

Except in the AI version it has the wrong number of fingers, and the text is spelled wrong.

noodlejetski@lemm.ee on 15 Sep 2024 19:34 collapse

I͖ͭ̍̀̏͂̋̏͛ ͕͚̱̗̭͗͗͑ͥͨ̆ͥ͊ͅm̤̻͕̪̥͓͍̿̑̚a̜̝͖̯̰̦͐ͭ̄͋c̠̹̱̱̖ͦ̿̋͗̎l̝̭͚͇̎ͧe͙͕͂͆ ̳̩̦͙̯̮̙t̘̯̯ͣͧ̍ȉ̩̜́̏̂ͯ̉͑̄s̪͖̎ͬ̐͆

Lucidlethargy@sh.itjust.works on 15 Sep 2024 22:32 collapse

I just used ChatGPT to copy this, so I’m sorry fellas, but actually now I made this.

NegativeLookBehind@lemmy.world on 14 Sep 2024 22:38 next collapse

Open(your fucking wallet)AI

Jocker@sh.itjust.works on 15 Sep 2024 03:41 next collapse

It’s criminal they’re keeping the name OpenAI

just_an_average_joe@lemmy.dbzer0.com on 15 Sep 2024 07:38 next collapse

They put "open" in their name to attract good talent and investment, and so people would have a soft spot for them while they collect tons of data to build their product.

Their internal chats, released in the Musk lawsuit, reveal that they (meaning the top brass) knew they were going to switch to a for-profit model. But they still lied to everybody about their intentions.

jabathekek@sopuli.xyz on 14 Sep 2024 20:44 next collapse

“ClosedAI”

NounsAndWords@lemmy.world on 14 Sep 2024 20:45 next collapse

This is what Ilya saw…

JoMiran@lemmy.ml on 14 Sep 2024 20:56 next collapse

<img alt="" src="https://lemmy.ml/pictrs/image/f0c19321-4e19-4da7-abd6-5be4772b70d2.jpeg">

Plopp@lemmy.world on 14 Sep 2024 21:22 next collapse

Ah, the John Cleesachu.

bobs_monkey@lemm.ee on 14 Sep 2024 23:41 collapse

Shocked Cleesachu

Harvey656@lemmy.world on 15 Sep 2024 02:51 collapse

My favorite part of this image is the corruption at the bottom, lol. Hopefully that wasn’t an issue on my side, or I’m gonna look insane.

JoMiran@lemmy.ml on 15 Sep 2024 03:28 collapse

Somewhere along the way my copy got janked. I liked it so I keep using it.

potentiallynotfelix@lemdro.id on 14 Sep 2024 21:33 next collapse

<img alt="" src="https://lemdro.id/pictrs/image/70e8fb82-6914-42f7-8a19-2c8f7979b1f0.png">

irreticent@lemmy.world on 15 Sep 2024 03:07 collapse

I’m partial to:

<img alt="" src="https://lemmy.world/pictrs/image/814792d0-7793-4844-97d4-302251c3b055.jpeg">

Fixbeat@lemmy.ml on 14 Sep 2024 21:55 next collapse

As long as the shareholders are happy.

webghost0101@sopuli.xyz on 14 Sep 2024 21:57 next collapse

How exactly does one “outgrow” “AGI for the benefit of all humanity”?

OpenAI Charter openai.com/charter

Our primary fiduciary duty is to humanity. We anticipate needing to marshal substantial resources to fulfill our mission, but will always diligently act to minimize conflicts of interest among our employees and stakeholders that could compromise broad benefit.

asbestos@lemmy.world on 15 Sep 2024 07:27 collapse

Great read

itsathursday@lemmy.world on 14 Sep 2024 21:59 next collapse

Much open, very organic, very demure, so mindful.

bzarb8ni@lemm.ee on 14 Sep 2024 22:00 next collapse

Almost like Sam Altman is just another run-of-the-mill tech bro scam guy.

just_an_average_joe@lemmy.dbzer0.com on 15 Sep 2024 07:41 collapse

I don’t think he is a “tech bro scam guy”; I think he is worse: he is smart and has a documented track record of lying. Unlike other tech bros, he actually knows the capabilities/limits of his products, and he still lies and makes them out to be something they’re not.

MyOpinion@lemm.ee on 14 Sep 2024 22:12 next collapse

Them investors got to get paid!

avidamoeba@lemmy.ca on 14 Sep 2024 22:36 next collapse

I hope OpenAI is going to serve as a radicalizing example to all the engineers, who fell for the “ethical guy/company” rhetoric, that the minority-controlled corporate structures they’re used to cannot withstand the push for profit. I hope this will make more of them choose majority-controlled structures for their startups and demand unions in existing corpos.

gravitas_deficiency@sh.itjust.works on 14 Sep 2024 23:06 collapse

I mean, I was already radicalized in that respect, but it’s definitely reaffirming that radicalization.

But also: I fuckin told you so. This progression was so blindingly obvious from the get-go.

CountVon@sh.itjust.works on 14 Sep 2024 23:35 collapse

OpenAI on that enshittification speedrun any% no-glitch!

Honestly though, they’re skipping right past the “be good to users to get them to lock in” step. They can’t even use the platform capitalism playbook because it costs too much to run AI platforms. Shit is egregiously expensive and doesn’t deliver sufficient return to justify the cost. At this point I’m ~80% certain that AI is going to be a dead tech fad by the end of this decade because the economics just don’t work now that the free money era has ended.

lemmyvore@feddit.nl on 15 Sep 2024 00:51 collapse

It will fall through much faster than that. I’m thinking two years, tops.

irreticent@lemmy.world on 15 Sep 2024 03:01 collapse

If your username is any prediction then it will be consumed by Lemmy… 🎶downtown🎶

TriflingToad@lemmy.world on 14 Sep 2024 22:53 next collapse

Reminder: there are locally run LLMs. Right now is a vital time for open source to fight against closed source in the AI arms race.

www.nomic.ai/gpt4all

mayo@lemmy.world on 15 Sep 2024 02:05 next collapse

Another good resource to help people find models llm.extractum.io

Blaster_M@lemmy.world on 15 Sep 2024 03:18 collapse

Or just straight up install ollama.com
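For anyone curious, here's a minimal sketch of talking to a local Ollama instance from Python. The model name `llama3` is just an example, and this assumes the server is running on its default port (11434) with the model already pulled:

```python
# Minimal client for a locally running Ollama server (default port 11434).
# Assumes the `ollama` CLI is installed, `ollama serve` is running, and a
# model has been pulled beforehand (e.g. `ollama pull llama3`).
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "llama3") -> dict:
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(prompt: str, model: str = "llama3") -> str:
    """Send a prompt to the local Ollama API and return the generated text."""
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(build_generate_request(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Usage (requires the server to actually be running):
#   print(ask_local_llm("Summarize why local inference protects privacy."))
```

Nothing leaves your machine; the privacy argument made elsewhere in this thread is the whole point of this setup.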

utopiah@lemmy.world on 15 Sep 2024 07:31 collapse

I like Ollama and recommend it for tinkering, but I admit this “LLM Explorer” is quite neat, thanks to sections like “LLMs Fit 16GB VRAM”.

Ollama just works, but it doesn’t help you pick which model best fits your needs.

Knock_Knock_Lemmy_In@lemmy.world on 15 Sep 2024 10:06 collapse

pick which model best fits your needs.

What need do I have that justifies the effort of installing all this locally? Websites win in terms of convenience.

utopiah@lemmy.world on 15 Sep 2024 12:29 next collapse

I don’t think I understand your point, are you saying there is no benefit in running locally and that Websites or APIs are more convenient?

Knock_Knock_Lemmy_In@lemmy.world on 15 Sep 2024 14:40 collapse

I already have Stable Diffusion on a local machine. I was trying to find motivation to install an LLM locally. You answered my question in a different response:

for use cases where customization helps while quality doesn’t matter much due to scale, i.e. spam, LLMs and related tools are amazing.

morriscox@lemmy.world on 16 Sep 2024 19:20 collapse

I want to work on my stuff in peace and in private without worrying about a company grabbing my stuff and using it for themselves and to give/sell it to other outfits, including the government. “If you have nothing to hide…” is bullshit and needs to die.

Knock_Knock_Lemmy_In@lemmy.world on 16 Sep 2024 19:50 collapse

Good point. Everything you feed into ChatGPT is stored for future reference.

finitebanjo@lemmy.world on 15 Sep 2024 03:33 next collapse

Okay but what problem does that solve? Is the solution setting up our own spambots to fill forums with arguments counter to their bullshit spambots? I don’t see how an LLM improves literally anything ever in any circumstance.

mayo@lemmy.world on 15 Sep 2024 04:24 next collapse

You seem unnecessarily hostile about this. If you don’t like LLM just move on.

This is exactly why this sub about technology is better off without business news. You’re just reacting to something you hate and directing that at others.

finitebanjo@lemmy.world on 15 Sep 2024 06:43 collapse

But answer the question maybe

Also, my “hate” was very clearly directed towards LLMs and not a “person”.

DavidDoesLemmy@lemmynsfw.com on 15 Sep 2024 07:42 next collapse

It definitely improves my experience coding in unfamiliar languages. So there’s your counter example.

finitebanjo@lemmy.world on 15 Sep 2024 07:54 next collapse

From all the studies available, LLMs increased the rate at which low skilled workers complete tasks. They also lower accuracy, so expect some of the tasks to be done incorrectly.

If your metric for “improves” is being a better low skill drone forever then yes I’m sure it’s helping you. Here is a novel idea, maybe learn the language from a reliable source instead of taking the word of a bullshit generator at face value?

DavidDoesLemmy@lemmynsfw.com on 15 Sep 2024 22:19 collapse

Here’s an idea, maybe start with curiosity about how someone is getting value out of it? It’s possible you don’t know everything about other people’s experiences.

finitebanjo@lemmy.world on 15 Sep 2024 22:31 collapse

It’s something being shoved down our throats every second of every day, and I’ve seen enough to know I don’t like it. Curiosity was satiated a long-ass time ago. It’s just a bigger power draw than cryptocurrency, but with somehow magically even less value.

utopiah@lemmy.world on 15 Sep 2024 12:21 collapse

improves my experience coding in unfamiliar languages

Alan Perlis said “A programming language that doesn’t change the way you think is not worth learning.”

So… if you code in another language without actually “getting it”, solely having a usable result, what is actually the point of changing languages?

DavidDoesLemmy@lemmynsfw.com on 15 Sep 2024 22:15 next collapse

I have a job to do. And I understand the other language conceptually, I am just rusty on the syntax.

Also the chat feature is invaluable. I can highlight a piece of code and ask what it does, and copilot explains it.

sugar_in_your_tea@sh.itjust.works on 16 Sep 2024 16:24 collapse

Exactly. I see AI as a tool to automate the boring parts, if you try to automate the hard parts, you’re going to have a bad time.

Take the time to learn the tools you use thoroughly, and then you can turn to AI to make your use of those tools more efficient. If I’m learning woodworking, for example, I’m going to learn to use hand tools first before using power tools, but there’s no way I’m sticking to hand tools when producing a lot of things. Programming isn’t any different, I’ll learn the language and its idioms as deeply as I can, and only then will I turn to things like AI to spit out boilerplate to work from.

utopiah@lemmy.world on 17 Sep 2024 05:19 collapse

Mind explaining a bit your workflow at the moment?

sugar_in_your_tea@sh.itjust.works on 18 Sep 2024 04:01 collapse

I’m not sure how to succinctly do that.

When I learn a new language, I:

  1. go through whatever tutorial is provided by the language developers - for Rust, that’s The Rust Programming Language, for Go, it’s Tour of Go and Effective Go
  2. build something - for Go, this was a website, and for Rust it was a Tauri app (basically a website); it should be substantial enough to exercise the things I would normally do with the language, but not so big that I won’t finish
  3. read through substantial portions of the standard library - if this is minimal (e.g. in Rust), read through some high profile projects
  4. repeat 2 & 3 until I feel confident I understand the idioms of the language

I generally avoid setting up editor tooling until I’ve at least run through step 3, because things like code completion can distract from the learning process IMO.

Some books I’ve really enjoyed (i.e. where 1 doesn’t exist):

  • The C Programming Language - by Brian Kernighan and Dennis Richie
  • Programming in Lua - by Roberto Ierusalimschy
  • Learn You a Haskell for Great Good - by Miran Lipovača (available free online)

But regardless of the form it takes, I appreciate a really thorough introduction to the language, followed by some experimentation, and then topped off with some solid, practical code examples. I generally allow myself about 2 weeks before expecting to write anything resembling production code.

These days, I feel confident in a dozen or so programming languages (I really like learning new languages), and I find that thoroughly learning each has made me a better programmer.

utopiah@lemmy.world on 18 Sep 2024 07:02 collapse

Thanks for that, it was quite interesting, and I agree that completion too early (even… in general) can be distracting.

I did mean about AI though: how you manage to integrate it into your workflow to “automate the boring parts”. I’m curious which parts are “boring” for you and which tools you actually use, and how, to solve the problem. In particular, how you estimate whether something can be automated with AI, how long it might take, how often you are correct about that bet, how you store and possibly share past attempts to automate, etc.

sugar_in_your_tea@sh.itjust.works on 18 Sep 2024 07:49 collapse

I honestly don’t use it much, but so far, the most productive uses are:

  • generate some common structure/algorithm - web app, CLI program, recursive function, etc
  • search documentation - I may not know what the function/type is, but I can describe it
  • generate documentation - list arguments, return types, etc

But honestly, the time I save there honestly isn’t worth fighting with the AI most of the time, so I’ll only do it if I’m starting up a big greenfield project and need something up and going quickly. That said, there are some things I refuse to use AI for:

  • testing - AI may be able to get high coverage, but I don’t think it can produce high quality tests
  • business logic - the devil is in the details, and I don’t trust AI with details
  • producing documentation - developers hate writing documentation, which is precisely why devs should be the ones to do it; if AI could do it, other devs could just use AI to generate it, but good docs will do far more than what AI can intuit
utopiah@lemmy.world on 19 Sep 2024 06:33 collapse

Super, thanks again for taking the time to do so.

I can’t remember if I shared this earlier, but I’m jotting down notes on the topic in …benetou.fr/…/SelfHostingArtificialIntelligence, so I do invest time in this myself. Yet my results have also been… subpar, so I’m asking as precisely as I can how others actually benefit from it. I’m tired of seeing posts with grand claims that, unlike yours, only talk about the happy path. Still, I’m digging not so much out of skepticism as to see what can actually be leveraged, not to say salvaged. So yes, genuine feedback like yours is quite precious.

I do seem to hear from you and others that it can help to kickstart what would otherwise be a blank project and get going. Also that it can help with whatever is very recurrent AND popular, like common structures.

My situation though is in prototyping where documentation is sparse, if even existent, and working examples are very rare. So far it’s been a bust quite often.

Out of curiosity, which AI tools specifically do you use and do you pay for them?

PS: you mention documentation in both cases, so I imagine it’s useful when it’s very structured and when the user can intuit most of how something works; closer to a clearly named API with arguments than to explaining the architecture of a project.

sugar_in_your_tea@sh.itjust.works on 19 Sep 2024 16:14 collapse

Out of curiosity, which AI tools specifically do you use and do you pay for them?

Just whatever is free, so no, I don’t pay for them for two reasons:

  • my boss doesn’t allow AI to have access to our codebase
  • I honestly don’t find enough value to actually pay

So I’ll just find something with a free tier or trial and generate a little bit of code or something. Or I’ll use the AI feature in a search engine to help me get search terms for relevant documentation (i.e. list libraries that do X), and then I’ll actually read the documentation. I have coworkers who use it for personal projects (not sure what they use), and that’s also part of what I’ve listed above (i.e. the generating documentation part).

But I very rarely use AI, because I very rarely start projects from scratch. 99% of my work is updates to existing projects, so it’s really not that useful.

utopiah@lemmy.world on 15 Sep 2024 07:46 collapse

FWIW I did try a lot (LLMs, code, generative AI for images, 3D models) in a lot of ways (CLI, Web based, chat bot) both locally and using APIs.

I don’t use any on a daily basis. I find it exciting that we can theoretically do a lot “more” automatically, but… so far the results have not been worth the effort. Sadly, some of the best use cases are exactly what you highlighted, i.e. low-effort engagement for spam. Overall I find that working with a professional (script writer, 3D modeler, dev, designer, etc.) is a lot more rewarding, but also more efficient, which itself makes it cheaper.

For use cases where customization helps while quality doesn’t matter much due to scale, i.e. spam, LLMs and related tools are amazing.

PS: I’d love to hear the opinion of a spammer actually, maybe they also think it’s not that efficient either.

T156@lemmy.world on 15 Sep 2024 14:41 collapse

I have personally found generative-text LLMs quite good for creating titles. As an example, I have a few hundred tweets that I’m trying to put into a file, and I’ll use an LLM to create a human-readable name for them. It’s much better than a lot of the other summarisation mechanisms (like BERT) I’ve tried with it, but it’s still not perfect, because the model tends to output the same thing in slightly different words each time, so repeat runs will often result in the same thing with a different title.

But, that is also a fairly limited use case.

T156@lemmy.world on 15 Sep 2024 08:16 collapse

At the same time, the trouble with local LLMs is that they’re very resource heavy. Your average household computer isn’t going to be able to run one with much usability or speed.

floquant@lemmy.dbzer0.com on 15 Sep 2024 09:08 next collapse

Which, you know, is fine. Maybe if people had an idea of how much power is required to run them, they would think twice before using a gigawatt to output a poem about farts, and perhaps even wonder how OpenAI can offer that for free. Btw, a 7b model should run ok on any PC with at least 16GB of RAM and a modern processor/GPU.
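The 16GB figure checks out with simple arithmetic: weight memory is roughly parameter count times bytes per weight (ignoring KV cache and activation overhead). A quick sketch, with the quantization levels chosen for illustration:

```python
# Back-of-envelope memory footprint for LLM weights at different precisions.
# Rule of thumb: bytes_needed ≈ parameter_count * bytes_per_weight, plus
# overhead for the KV cache and activations (ignored here).

def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate size of the model weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for bits in (16, 8, 4):
    print(f"7B model @ {bits}-bit: ~{weight_memory_gb(7, bits):.1f} GB")
```

At 16-bit precision a 7B model needs about 14 GB for the weights alone, which is why the quantized (4-bit, roughly 3.5 GB) variants are what actually run comfortably on a 16GB machine.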

RmDebArc_5@sh.itjust.works on 15 Sep 2024 12:15 next collapse

Phi 3 can run on pretty low specs (requires 4gb RAM) and has relatively good output

TriflingToad@lemmy.world on 15 Sep 2024 21:08 collapse

It’s a lot slower than ChatGPT, but on my integrated-graphics i7 laptop it ran decent, def enough to be usable. Also there are different models to play around with; some are faster but worse, and some are smarter but slower.

mayo@lemmy.world on 15 Sep 2024 02:03 next collapse

I’m happy to pay for LLM but not at the prices OpenAI is charging for their models.

Jordan117@lemmy.world on 15 Sep 2024 02:14 next collapse

The fact that Silicon Valley interests effortlessly shrugged off the non-profit board’s attempt to hit the kill switch last year, and now are preparing to take the company commercial despite the deliberate design otherwise, becomes much more interesting when you consider the theory that corporations are a form of artificial superintelligence.

If the AI idealists can’t stand up to basic forces of capitalism, how do they expect to control an actually dangerous AGI?

irreticent@lemmy.world on 15 Sep 2024 02:27 next collapse

If the AI idealists can’t stand up to basic forces of capitalism, how do they expect to control an actually dangerous AGI?

My guess is they don’t expect to. I guess that is one of the reasons they seem not to care about out-of-control climate change: burn it all down before it all literally burns down.

Krauerking@lemy.lol on 15 Sep 2024 15:26 collapse

Yeah, the people leading the “AGI will save us” are the same as super church pastors.
They don’t believe it, they just want their bank account limitless before they go into oblivion.

finitebanjo@lemmy.world on 15 Sep 2024 03:31 next collapse

You give them far too much credit to assume this specific company will ever achieve anything even close to AGI.

Knock_Knock_Lemmy_In@lemmy.world on 15 Sep 2024 10:03 collapse

We don’t know what they aren’t showing us. GPT was only one strand of research.

ShepherdPie@midwest.social on 15 Sep 2024 22:09 collapse

If they had something better, don’t you think they’d be putting it out front and center? This is akin to all those conspiracy theorists claiming they have proof to back their claims but they just can’t show it to you right now but it’s definitely coming at some indeterminate time in the future.

Knock_Knock_Lemmy_In@lemmy.world on 16 Sep 2024 13:49 collapse

If they had something better, don’t you think they’d be putting it out front and center?

Only when the product is ready. Look at open.ai/research and you’ll see they do a lot more than just chatGPT.

lemto@sopuli.xyz on 16 Sep 2024 20:39 collapse

I kinda liked the text you linked. Here’s a quote.

There are also structural changes that can be made to corporations to realign their values system with human welfare. Corporate charters can be amended to optimize for a triple bottom line of social, environmental, and financial outcomes (the so-called “triple Ps” of people, planet, and profit.)

This reminds me of what we are trying to do where I live. The hard thing is this requires a lot of work and it doesn’t just go against the corporate agenda; it goes against the normal lifestyle most everyone around us lives. It has made me want to quit sometimes.

But then again, true life is in true living among real people and real things, not in daydreaming of better days.

yamanii@lemmy.world on 15 Sep 2024 03:35 next collapse

As Ed said, Sam Altman has been a plague.

DragonTypeWyvern@midwest.social on 15 Sep 2024 07:12 next collapse

Hey, remember when you guys lined up to suck his dick when the board tried to keep OpenAI working for the good of humanity instead of the oligarchy?

Wrufieotnak@feddit.org on 15 Sep 2024 08:06 next collapse

Yeah, that was a strange moment. Those in the company being for Altman, I can understand; they expected big returns on investment from keeping him around. But the outsiders on the internet cheering him on? It felt like Elon Musk in the beginning again. And yes, I also fell for his engineer persona in the beginning, but I learned from that.

[deleted] on 15 Sep 2024 10:39 next collapse

.

Ragnarok314159@sopuli.xyz on 15 Sep 2024 22:00 collapse

Musk the Fuck doesn’t even have an engineering degree and was never accepted into any PhD program. The lying piece of shit copied some code out of an early-’80s magazine and thinks he is a programmer, and got his degree after a donation from daddy.

I repeatedly see losers here, and in other places online, defend his dipshit ass. People keep falling for rich tech bro bullshit and still do.

Ilandar@aussie.zone on 16 Sep 2024 00:15 collapse

I guess that’s just the reality of living in a world where money is equated with success and success is equated with talent.

jacksilver@lemmy.world on 16 Sep 2024 14:16 collapse

What was so obvious in that instance was the board members trying to push him out were calling out the lack of openness OpenAI was trending towards. They were literally calling him out for not upholding the vision of why the company was founded.

All the engineers clearly saw their payday slipping away and revolted for that reason. Can’t say I blame them, but it was a scenario where the board was actually doing the right thing and everyone turned on them for profit.

SkyeStarfall@lemmy.blahaj.zone on 15 Sep 2024 09:01 next collapse

I hate being right

Why do people keep being fooled by rich assholes

s3p5r@lemm.ee on 15 Sep 2024 10:35 collapse

People have grown up reading comic books and watching movies about generous billionaire superhero saviors. They want to believe that exists because it’s what they’ve been taught justice looks like.

xthexder@l.sw0.com on 15 Sep 2024 19:14 next collapse

Surprising, since Lex Luthor was often portrayed as a wealthy billionaire.

DragonTypeWyvern@midwest.social on 15 Sep 2024 21:54 next collapse

And yet the problem was never that he was a billionaire, and Lexcorp was never portrayed as anything but an industrial powerhouse whose existence was ultimately good.

UnderpantsWeevil@lemmy.world on 16 Sep 2024 14:01 collapse

Lexcorp was never portrayed as anything but an industrial powerhouse whose existence was ultimately good.

The largest international arms dealing firm that did Captain Planet Villain tier pollution, corruption, and financial scams was “ultimately good”?

Didn’t Lexcorp literally clone an army of Doomsdays?

DragonTypeWyvern@midwest.social on 16 Sep 2024 15:31 collapse

Obviously it’s going to be the “villain” as Lex’s plaything but it’s also on that “job creator” cope.

dc.fandom.com/wiki/LexCorp

Employs literally 2/3 of Metropolis’s population, lol. Lex even handed it over to Superman at one point and made him the CEO, because he was on an “Earth needs Superman” arc, while obviously CEOs are the real heroes and such. And what are you going to do, Superman? Unemploy a supermajority of Metropolis? It NEEDS LexCorp, etc. etc.

UnderpantsWeevil@lemmy.world on 16 Sep 2024 16:08 collapse

“How will we get by without all our financialized monopolies?!”

Idk, bro. Just keep doing what you’re doing, minus the extraordinary rents to your bloated bourgeois landlords, maybe?

s3p5r@lemm.ee on 15 Sep 2024 23:33 collapse

I feel like Luthor was a better counterexample for this before the model for his billionaire redesign was elected President of the USA.

Even so, Luthor hasn’t had quite the same volume of appearances as Iron Man, Batman, Captain America and the other rich superhero tropes.

Big_Boss_77@lemmynsfw.com on 18 Sep 2024 04:45 collapse

I’m not super hip on comics… but Grandpa Popsicle was rich?

Odd_so_Star_so_Odd@lemmy.world on 16 Sep 2024 00:34 collapse

They do exist; they just prefer to be anonymous in their altruism, so nobody hears about them.

s3p5r@lemm.ee on 16 Sep 2024 00:50 collapse

How convenient that a counterexample can’t be named

morriscox@lemmy.world on 16 Sep 2024 19:11 collapse

Actually, this is part of Jewish tradition. Essentially, one is not supposed to do it for glory. I suspect part of that is to avoid getting letters pleading for more money from those they have helped, or from anyone who knows that they helped someone. A lot of charities share/sell donor lists.

UnderpantsWeevil@lemmy.world on 16 Sep 2024 13:58 next collapse

I don’t remember this at all.

AI has looked like a scam since the Metaverse days when Facebook realized it couldn’t push those shitty headsets on people and decided to pivot.

I_Clean_Here@lemmy.world on 17 Sep 2024 15:22 collapse

I’m always in line for a good dick sucking

DragonTypeWyvern@midwest.social on 17 Sep 2024 15:53 collapse

Not to knock dick sucking as a whole, but you need some standards.

CosmoNova@lemmy.world on 15 Sep 2024 07:29 next collapse

The article could be from 2022 and I’d be as unsurprised as I am now.

BilboBargains@lemmy.world on 15 Sep 2024 07:45 next collapse

Step 1. Make an AI that hoovers up content.

Step 2. When owners of content complain about privacy violations and copyright infringement, allay their fears. This AI is for the Good of Humanity.

Step 3. ???

Step 4. Profit.

ShepherdPie@midwest.social on 15 Sep 2024 22:06 collapse

Yet another example of doing crime at a big enough scale that you get rewarded for it. That’s what this country was built on.

x00za@lemmy.dbzer0.com on 15 Sep 2024 08:06 next collapse

I think their name goes against some consumer laws here in Europe.

Better call themselves ClosedAI

Zacryon@feddit.org on 15 Sep 2024 08:19 next collapse

Probably has to be renamed to “ClosedAI” then.

JustARaccoon@lemmy.world on 15 Sep 2024 08:31 next collapse

Surely they would be sued into oblivion if they tried, right? Being a non-profit was the main pillar holding up their defense for scraping the web into datasets.

EnderMB@lemmy.world on 15 Sep 2024 15:51 next collapse

How is this going to work while OpenAI burns through an absolute ocean of cash to keep improving its services? Alongside this, a good software engineer or applied scientist can make close to $1M a year. While I do think professionals should earn what they’re worth to an employer, OpenAI still loses a ton of money.

As someone who works in AI, I think most of us know it’s full of people trying to make a quick buck while investors stupidly throw money at it. OpenAI is ultimately the figurehead of this market, though, because at least the big companies can prop up their AI offerings with the money they make from shopping, cloud, ads, etc. The second OpenAI looks weak and needs money, the vultures will slice off a piece and we’ll see the AI market reduced to a whimper - just enough for tech to focus on the next grift.

barsoap@lemm.ee on 15 Sep 2024 16:40 collapse

About the only AI company currently alive that I’m sure will survive is CivitAI. Huggingface probably, too. Both are, in the end, in the datacenter business. Huggingface has exposure to VC BS in their client base, they might be in trouble if a significant number suddenly go belly-up but if they have any sense they’ll simply not overextend. And, well, they, too, can switch to cat pictures.

TropicalDingdong@lemmy.world on 15 Sep 2024 22:04 collapse

Yeah, some of my team members use HF, and it really is convenient (basically a GitHub for models), but I make sure to be clear that we can’t rely on them alone. I don’t trust any company to still exist, or to not be bought out and enshittified, in 3 years.

emmy67@lemmy.world on 15 Sep 2024 21:16 next collapse

Wild for a company that’s never made a profit

Blackmist@feddit.uk on 15 Sep 2024 23:49 next collapse

Oh it’s made plenty for Nvidia.

exanime@lemmy.world on 16 Sep 2024 00:36 collapse

These companies do not make a profit on paper but have already made millions for others.

It’s all smoke and mirrors

Etterra@lemmy.world on 15 Sep 2024 22:10 next collapse

OpenAI: It’s not fair to charge us to use copywriten works.

Also OpenAI: Also you have to pay us for using them.

bitjunkie@lemmy.world on 16 Sep 2024 16:28 next collapse

copyrighted*

buttfarts@lemy.lol on 16 Sep 2024 21:08 collapse

That’s why all human creative works done online need to be bean related. To fuck up the data stream and make it unintelligible for AIs and marketing algorithms.

socialmedia@lemmy.world on 15 Sep 2024 23:14 next collapse

Just want to point out that it absolutely is possible to train an AI that will keep track of its sources for inspiration and can attribute those when it makes a response.

Meaning creators could be compensated for their parts of AI generated stuff, if anyone wanted to.
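For what it's worth, the systems that cite sources today (Phind-style) mostly do it with retrieval-augmented generation rather than by baking attribution into the model weights: documents are fetched at query time, and their identifiers travel alongside the generated answer. A toy sketch of that retrieval/attribution step, using naive word-overlap scoring in place of real embeddings (the corpus and doc IDs are made up):

```python
# Toy retrieval step of a RAG pipeline: find the most relevant documents
# for a query and return them *with* their source identifiers, so any
# generated answer can cite (and in principle compensate) its sources.

def score(query: str, text: str) -> int:
    """Naive relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve_with_sources(query: str, corpus: dict, k: int = 2) -> list:
    """Return the top-k (source_id, text) pairs ranked by relevance."""
    ranked = sorted(corpus.items(), key=lambda item: score(query, item[1]), reverse=True)
    return ranked[:k]

corpus = {
    "doc:alice": "quantized models fit in less memory",
    "doc:bob": "cats are excellent at napping",
    "doc:carol": "memory use depends on model size and precision",
}
hits = retrieve_with_sources("how much memory does a model use", corpus)
sources = [src for src, _ in hits]  # e.g. the IDs owed attribution/payment
```

The retrieved passages would then be fed to the LLM as context, with the `sources` list attached to the answer; tracking what influenced the base model's pretraining is a much harder, largely unsolved problem.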

blorp@feddit.uk on 15 Sep 2024 23:46 next collapse

Doesn’t Phind do this already? I haven’t used it much but I remember it showing its sources for answers of code-related stuff

bluewing@lemm.ee on 16 Sep 2024 14:19 collapse

I use Phind solving computer problems. It does cite the sources it uses. At least for distro and general Linux issues. So far, it’s been a very good resource when I’ve needed it.

Trantarius@lemmy.dbzer0.com on 16 Sep 2024 00:26 next collapse

Other than citing the entire training data set, how would this be possible?

UnderpantsWeevil@lemmy.world on 16 Sep 2024 13:55 collapse

The entire training set isn’t used in each permutation. Your keywords are building the samples based on metadata tags tied back to the original images.

If you ask for “Iron Man in a cowboy hat”, the toolset will reach for some catalog of Iron Man images and some catalog of cowboy hat images and some catalog of person-in-cowboy-hat images, when looking for a basis of comparison as it renders the image.

These would be the images attributed to the output.

Trantarius@lemmy.dbzer0.com on 16 Sep 2024 16:22 collapse

Do you have a source for this? This sounds like fine-tuning a model, which doesn’t prevent data from the original training set from influencing the output. The method you described would only work if the AI is trained from scratch on only images of iron man and cowboy hats. And I don’t think that’s how any of these models work.

mm_maybe@sh.itjust.works on 16 Sep 2024 20:56 collapse

I think that there are some people working on this, and a few groups that have claimed to do it, but I’m not aware of any that actually meet the description you gave. Can you cite a paper or give a link of some sort?

exanime@lemmy.world on 16 Sep 2024 00:34 next collapse

… To the surprise of <checks notes> absolutely nobody

Actually I have a question and I admit knowing nothing of the legal framework here but…

Isn’t it absolutely ridiculous that a not-for-profit entity can exist solely for the purpose of developing a closed-source piece of software, demand to train it for free off copyrighted material, just to switch to a for-profit entity??

Sounds 100% like tax avoidance. Like me registering a charity so I can throw a mega concert/party privately, secure preferential treatment on supplies, get discounts from artists or even free performances, and then switch to for-profit as I start selling tickets.

jacksilver@lemmy.world on 16 Sep 2024 14:11 collapse

Originally all their work was supposed to be published and shared with the world, hence the “open” in OpenAI. However somewhere along the way they made a for-profit break off of the original company and started pulling everything in that direction.

glitchdx@lemmy.world on 16 Sep 2024 16:00 next collapse

… They weren’t before?

sugar_in_your_tea@sh.itjust.works on 16 Sep 2024 16:17 collapse

Yeah, they weren’t as synergized. Now they’re coordinating with key stakeholders to maximize the efficiency of their aggressive roadmap. Or something, I kinda suck at business jargon.

aaaaace@lemmy.blahaj.zone on 16 Sep 2024 17:05 collapse

His face is looking increasingly compromised and distorted in every photo.