Sam: “Most of our execs have left. So I guess I’ll take the major decisions instead. And since I’m so humble, I’ll only be taking 80% of their salary. Yeah, no need to thank me”
halcyoncmdr@lemmy.world
on 27 Sep 2024 03:06
nextcollapse
You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.
/s
ravhall@discuss.online
on 27 Sep 2024 03:14
nextcollapse
I wonder if all those people who supported him like the taste of their feet.
Boxscape@lemmy.sdf.org
on 27 Sep 2024 04:15
collapse
I’m still waiting for a reprogrammed terminator sent back in time by the resistance to kill altman 🫢 maybe it got destroyed on the way…
peopleproblems@lemmy.world
on 27 Sep 2024 04:51
collapse
Hehehehehe it’s the exact same naming strategy used in Death Stranding. Dr. Heartman, Deadman,
N0body@lemmy.dbzer0.com
on 27 Sep 2024 03:28
nextcollapse
There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits actual use cases.
Cancer screenings approved by a doctor could be accurate enough to save so many lives and so much suffering through early detection.
Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.
msage@programming.dev
on 27 Sep 2024 08:13
nextcollapse
What is OpenAI doing with cancer screening?
mustbe3to20signs@feddit.org
on 27 Sep 2024 08:49
collapse
AI models can outmatch most oncologists and radiologists in recognition of early tumor stages in MRI and CT scans.
Further developing this strength could lead to earlier diagnosis with less-invasive methods, saving not only countless lives and prolonging the remaining quality life time for the individual, but also saving a shit ton of money.
T156@lemmy.world
on 27 Sep 2024 09:16
nextcollapse
That is a different kind of machine learning model, though.
You can’t just plug in your pathology images into their multimodal generative models, and expect it to pop out something usable.
And those image recognition models aren’t something OpenAI is currently working on, iirc.
mustbe3to20signs@feddit.org
on 27 Sep 2024 10:57
nextcollapse
I’m fully aware that those are different machine learning models, but instead of focusing on LLMs, which have only limited use for mankind, advancing image recognition models would have been much better.
Grandwolf319@sh.itjust.works
on 27 Sep 2024 19:12
collapse
I agree, but I’d also like to point out that the AI craze started with LLMs, and those ML models were around before OpenAI.
So if OpenAI had never released ChatGPT, it wouldn’t have become synonymous with crypto in terms of false promises.
tfowinder@lemmy.ml
on 27 Sep 2024 11:09
nextcollapse
Don’t know about image recognition, but they released DALL-E, which is an image generation and inpainting model.
Fun thing is, most of the things AI can do, they never planned for it to be able to do. All they tried to build was an autocompletion tool.
Grandwolf319@sh.itjust.works
on 27 Sep 2024 19:11
collapse
Not only that, image analysis and statistical guesses have always been around and do not need ML to work. It’s just one more tool in the toolbox.
msage@programming.dev
on 27 Sep 2024 13:11
collapse
Wasn’t it proven that AI was having amazing results because it noticed the cancer screens had the doctors’ signatures at the bottom? Or did they make another run with the signatures hidden?
mustbe3to20signs@feddit.org
on 27 Sep 2024 14:12
collapse
There was more than one system proven to “cheat” through biased training materials.
One model told ducks and chickens apart because it was trained with pictures of ducks in the water and chickens on sandy ground, if I remember correctly.
Since multiple medical image recognition systems are in development, I can’t imagine they’re all trained with such unsuitable materials.
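As an illustration of how that kind of “cheating” works (sometimes called shortcut learning), here’s a toy sketch with synthetic data, assuming scikit-learn and numpy: the classifier aces training by reading a spurious “signature” feature and collapses once that feature is gone.

```python
# Toy demo of shortcut learning: the classifier nails training by reading
# a spurious feature (think: the doctor's signature) and falls apart when
# that feature is absent at test time. Synthetic data, nothing medical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                      # 0 = healthy, 1 = tumor
weak_signal = y + rng.normal(0, 2.0, n)        # the real but noisy feature
signature = y.astype(float)                    # spurious feature, perfect in training

X_train = np.column_stack([weak_signal, signature])
clf = LogisticRegression(max_iter=1000).fit(X_train, y)
print("train accuracy:", clf.score(X_train, y))   # ~1.0, thanks to the shortcut

# New scans have no signature -> the shortcut column is all zeros now
X_test = np.column_stack([y + rng.normal(0, 2.0, n), np.zeros(n)])
print("test accuracy:", clf.score(X_test, y))     # drops hard
```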
msage@programming.dev
on 28 Sep 2024 08:42
collapse
They are not ‘faulty’, they have been fed the wrong training data.
This is the most important aspect of any AI - it’s only as good as its training dataset. If you don’t know the dataset, you know nothing about the AI.
That’s why every claim of ‘super efficient AI’ needs to be investigated more deeply. But that goes against the line-goes-up principle. So don’t expect that to happen a lot.
patatahooligan@lemmy.world
on 27 Sep 2024 08:45
nextcollapse
No, there isn’t really any such alternate timeline. Good honest causes are not profitable enough to survive against the startup scams. Even if the non-profit side won internally, OpenAI would just be left behind, funding would go to its competitors, and OpenAI would shut down. Unless you mean a radically different alternate timeline where our economic system is fundamentally different.
rsuri@lemmy.world
on 27 Sep 2024 13:24
nextcollapse
I mean, Wikipedia managed to do it. It just requires honest people to retain control long enough. I think it was allowed to happen in Wikipedia’s case because the wealthiest/greediest people hadn’t caught on to the potential yet.
There’s probably an alternate timeline where Wikipedia is a social network with paid verification by corporate interests who write articles about their own companies and state-funded accounts spreading conspiracy theories.
kippinitreal@lemmy.world
on 27 Sep 2024 05:51
nextcollapse
Putting my tin foil hat on… Sam Altman knows the AI train might be slowing down soon.
The OpenAI brand is the most valuable part of the company right now, since the models from Google, Anthropic, etc. can beat or match ChatGPT, but they aren’t taking off coz they aren’t as cool as OpenAI.
The business model of training & running models is not sustainable. If there is any money to be made, it is NOW, while the speculation is highest. The nonprofit is just getting in the way.
This could be wishful thinking coz fuck corporate AI, but no one can deny AI is in a speculative bubble.
somethingsnappy@lemmy.world
on 27 Sep 2024 06:15
nextcollapse
Take the hat off. This was the goal. Whoops, gotta cash in and leave! I’m sure it’s super great, but I’m gone.
kippinitreal@lemmy.world
on 27 Sep 2024 09:41
collapse
That’s an excellent point! Why oh why would a tech bro start a non-profit? It’s always been PR.
It honestly just never occurred to me that such a transformation was allowed/possible. A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it. Still, it would almost seem like the company benefits from the goodwill that comes with being a nonprofit but then gets to transform that goodwill into real gains when they drop the act and cease being a nonprofit.
I don’t really understand most of this shit though, so I’m probably missing some key component that makes it make a lot more sense.
sunzu2@thebrainbin.org
on 27 Sep 2024 14:34
collapse
A nonprofit seems to imply something charitable, though obviously that's not the true meaning of it
A lifetime of propaganda got people confused lol
Nonprofit merely means that their core income-generating activities are not subject to the income tax regimes.
While some non profits are charities, many are just shelters for rich people's bullshit behaviors like foundations, lobby groups, propaganda orgs, political campaigns etc
frunch@lemmy.world
on 27 Sep 2024 15:14
nextcollapse
Thank you! Like I said, I figured there’s something I’m missing–that would appear to be it.
Knock_Knock_Lemmy_In@lemmy.world
on 27 Sep 2024 17:18
collapse
Non profit == inflated costs
(Sometimes)
Landless2029@lemmy.world
on 27 Sep 2024 13:18
nextcollapse
Classic pump and dump at this point. He wants to cash in while he can.
If you can’t make money without stealing copyrighted works from authors without proper compensation, you should be shut down as a company.
trollblox_@programming.dev
on 27 Sep 2024 19:12
collapse
ai is such a dead end. it can’t operate without a constant inflow of human creations, and people are trying to replace human creations with AI. it’s fundamentally unsustainable. I am counting the days until the ai bubble pops and everyone can move on. although AI generated images, video, and audio will still probably be abused for the foreseeable future. (propaganda, porn, etc)
kippinitreal@lemmy.world
on 28 Sep 2024 04:06
collapse
That is a good point, but I think I’d like to make the distinction of saying LLMs or the “generic model” is a garbage concept, which requires power & water rivaling a small country to produce incorrect results.
Neural networks in general that can (cheaply) learn on their own for a specific task could be huge! But there’s no big money in that, since it’s not a consolidated general-purpose product tech bros can flog to average consumers.
To be fair, the article linked this idiotic one (futurism.com/…/chatgpt-ai-water-consumption) about OpenAI’s “thirsty” data centers, where they talk about water “consumption” of cooling cycles… which are typically closed-loop systems.
But even then, is the water truly consumed? Does it get contaminated with something like the cooling water of a nuclear power plant? Or does the water just get warm and then either be pumped into a water body somewhere or ideally reused to heat homes?
There’s loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.
JamesFire@lemmy.world
on 27 Sep 2024 07:52
nextcollapse
Does it get contaminated with something like the cooling water of a nuclear power plant?
This doesn’t happen unless the reactor was sabotaged. Cooling water that interacts with the core is always a closed-loop system. For exactly this reason.
utopiah@lemmy.world
on 27 Sep 2024 09:34
nextcollapse
Search for the “water positive” commitment. You will quickly see it’s a “goal”, which consequently means it is NOT currently the case. In some places where water is abundant it might not be a problem; where it’s scarce, it’s literally a choice made between crops to feed people and… compute cycles.
Cryophilia@lemmy.world
on 27 Sep 2024 10:45
nextcollapse
It evaporates. A lot of datacenters use evaporative cooling. They take water from a usable source like a river, and turn it into unusable water vapor.
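To put a rough number on it (my own assumptions, not figures from the article): evaporating water absorbs about 2.26 MJ per kg, so a datacenter dumping 1 MW of heat purely through evaporation boils off on the order of 1,600 litres every hour.

```python
# Back-of-the-envelope water use of evaporative cooling.
# Assumptions (mine): 1 MW of heat rejected entirely by evaporation;
# latent heat of vaporization of water ~2.26 MJ/kg; 1 kg of water ~ 1 L.
# Real cooling towers also lose extra water to blowdown.
heat_load_w = 1_000_000          # 1 MW IT load (assumed)
latent_heat = 2.26e6             # J per kg of water evaporated

kg_per_s = heat_load_w / latent_heat
print(f"{kg_per_s * 3600:.0f} L of water per hour")   # ~1590 L/h per MW
```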
nickwitha_k@lemmy.sdf.org
on 27 Sep 2024 16:26
nextcollapse
But even then, is the water truly consumed?
Yes. People and crops can’t drink steam.
Does it get contaminated with something like the cooling water of a nuclear power plant?
That’s not a thing in nuclear plants that are functioning correctly. Water that may be evaporated is kept from contact with fissile material, by design, to prevent regional contamination. Now, Cold War era nuclear jet airplanes were a different matter.
Or does the water just get warm and then either be pumped into a water body somewhere or ideally reused to heat homes?
A minority of datacenters use water in such a way; Helsinki is the only one that comes to mind. This would be an excellent way of reducing the environmental impact but requires investments that corporations are seldom willing to make.
There’s loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.
Unfortunately, it is. Primarily due to climate change. Water insecurity is an issue of increasing importance, and some companies, like Nestlé (fuck Nestlé), are accelerating it for profit. Of vital importance to human lives is getting ahead of the problem, rather than trying to fix it when it inevitably becomes a disaster and millions are dying from thirst.
JustTesting@lemmy.hogru.ch
on 27 Sep 2024 17:53
collapse
In addition to all the other comments, pumping warm water into natural bodies of water can also be bad for the environment.
I know of one nuclear power plant that does this and it’s pretty bad for the coral population there.
Kyrgizion@lemmy.world
on 27 Sep 2024 07:34
nextcollapse
Canceled my sub as a means of protest. I used it for research and testing purposes and $20 wasn’t that big of a deal. But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies. Voting with our wallets may be the very last vestige of freedom we have left, since money equals speech.
I hope he gets raped by an irate Roomba with a broomstick.
Silic0n_Alph4@lemmy.world
on 27 Sep 2024 08:09
nextcollapse
Whoa, slow down there bruv! Rape jokes aren’t ok - that Roomba can’t consent!
AngryCommieKender@lemmy.world
on 27 Sep 2024 14:36
nextcollapse
“Private Stabby reporting for duty!”
eatthecake@lemmy.world
on 27 Sep 2024 17:26
nextcollapse
Good. If people would actually stop buying all the crap assholes are selling we might make some progress.
Aceticon@lemmy.world
on 27 Sep 2024 12:06
nextcollapse
What! A! Surprise!
I’m shocked, I tell you, totally and utterly shocked by this turn of events!
Railing5132@lemmy.world
on 28 Sep 2024 17:40
collapse
Well, not that shocked.
ramble81@lemm.ee
on 27 Sep 2024 12:14
nextcollapse
So where are they all going? I doubt everyone is gonna find another non-profit or any altruistic motives, so <insert big company here> just snatches up more AI resources to try to grow their product.
TheObviousSolution@lemm.ee
on 27 Sep 2024 13:01
nextcollapse
They could make their own AI CEO and work for it. It would probably have more integrity and personality, too.
SoftNoodle@lemmy.world
on 27 Sep 2024 13:45
collapse
What does it actually promise? AI (namely generative and LLM) is definitely overhyped in my opinion, but admittedly I’m far from an expert. Is what they’re promising to deliver not actually doable?
naught101@lemmy.world
on 27 Sep 2024 14:10
nextcollapse
It literally promises to generate content, but I think the implied promise is that it will replace parts of your workforce wholesale, with no drop in quality.
It’s that last bit that’s going to be where the drama happens
Kalkaline@leminal.space
on 27 Sep 2024 15:10
collapse
I dare my company to try it. There would be so many lawsuits in 3 years.
That’s a prime (pun intended) example of what I’m talking about. Amazon likely hired them to write the algorithm to watch people shop; they couldn’t figure it out, so they decided to watch the video 24/7 instead.
ProgrammingSocks@pawb.social
on 27 Sep 2024 20:49
collapse
Meh…it’s just a fact. You hire developers for dirt cheap, you end up with dirt cheap solutions.
Smokeydope@lemmy.world
on 27 Sep 2024 14:22
nextcollapse
It delivers on what it promises to do for many people who use LLMs. They can be used for coding assistance, setting up automated customer support, tutoring, processing documents, structuring lots of complex information, generally accurate knowledge of many topics, acting as an editor for your writing, and lots more.
It’s a rapidly advancing pioneer technology, like computers were in the 90s, so every 6 months to a year brings a new breakthrough in overall intelligence or a new ability. Now the new LLM models can process images or audio as well as text.
The problem for OpenAI is they have serious competitors who will absolutely show up to eat their lunch if they sink as a company: Facebook/Meta with their Llama models, Mistral AI with all their models, Alibaba with Qwen. Some other good smaller competition too, like the OpenHermes team. All of these big tech companies have open-sourced some models so you can tinker with and finetune them at home, while OpenAI remains closed source, which is ironic given the company name… Most of these AI companies offer cloud access to models at very competitive pricing, especially Mistral.
The people who say AI is a trendy useless fad don’t know what they are talking about or are upset at AI. I am a part of the local LLM community and have been playing around with open models for months, pushing my computer’s hardware to its limits. It’s very cool seeing just how smart they really are, and what a computer that simulates human thought processes and knows a little bit of everything can actually do to help me in daily life.
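If you’re curious what “playing around with open models” looks like in practice, here’s a minimal sketch using the Hugging Face transformers library; the model id is just an example, and any open model your hardware can hold works the same way.

```python
# Minimal local-LLM sketch (pip install transformers torch accelerate).
# The model id below is one example open model, not a recommendation.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # example open model
    device_map="auto",                           # GPU if present, else CPU
)

result = generator("Explain in one sentence why open weights matter:",
                   max_new_tokens=64)
print(result[0]["generated_text"])
```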
Terence Tao, superstar genius mathematician, describes the newest high-end model from OpenAI as improving from an “incompetent graduate” to a “mediocre graduate”, which essentially means AIs are now generally smarter than the average person in many regards.
This month several competitor LLM models were released which, while much smaller than OpenAI’s o1, somehow beat or equaled that big OpenAI model in many benchmarks.
Neural networks are here and they are only going to get better. We’re in for a wild ride.
Stegget@lemmy.world
on 27 Sep 2024 15:02
nextcollapse
My issue is that I have no reason to think AI will be used to improve my life. All I see is a tool that will rip, rend and tear through the tenuous social fabric we’re trying to collectively hold on to.
Smokeydope@lemmy.world
on 27 Sep 2024 15:39
collapse
A tool is a tool. It has no say in how it’s used. AI is no different from the computer software you use to browse the internet or do other digital tasks.
When it’s used badly, as an outlet for escapism or a substitute for social connection, it can lead to bad consequences in your personal life.
Where it’s best used is as a tool to help reason through a tough task, or as a step in a creative process. As on-demand assistance to aid the disabled. Or to support the neurodivergent and emotionally traumatized as a non-judgemental conversational partner to open up to. Or to help a super genius rubber-duck their novel ideas and work through complex thought processes. It can improve people’s lives for the better if applied to the right use cases.
It’s about how you choose to interact with it in your personal life, and how society, businesses and your governing bodies choose to use it in their own processes. And believe me, they will find ways to use it.
I think comparing LLMs to computers in the 90s is accurate. Right now only nerds, professionals, and industry/business/military see their potential. As the tech gets figured out, utility improves, and LLM desktops start getting sold as consumer-grade appliances, maybe the attitude will change?
AA5B@lemmy.world
on 27 Sep 2024 16:39
nextcollapse
A better analogy is search engines. It’s just another tool, but
at their best, they enable you to find anything from all the world’s knowledge;
at their worst, they’re just another way to serve ads and scams, random companies vying for attention, thinking any attention is good attention, regardless of what you’re looking for.
When I started as a software engineer, my detailed knowledge was most important and my best tool was the manuals. Now my most important tools are search engines and autocomplete: I can work faster with less knowledge of the syntax and my value is the higher level thought about what we need to do. If my company ever allows AI, I fully expect it to be as important a tool as a search engine.
Now my most important tools are search engines and autocomplete: I can work faster with less knowledge of the syntax and my value is the higher level thought about what we need to do. If my company ever allows AI, I fully expect it to be as important a tool as a search engine.
And this is when the cost calculation comes into play. Using a search engine is basically free, using OpenAI for development is tied up with licenses and new hardware.
So the question will be, are you going to improve efficiency to the point where the cost of the license and new hardware is worth the additional efficiency?
Currently my company is more concerned with intellectual privacy, security, liability. Of course that means they’ll only allow ai where they can pay for guarantees, and that brings us back to the cost.
That is a myopic view. Sure, a tool is a tool: if I take a gun and use it to save someone from getting mugged = good; if I use it to mug someone = bad.
But regardless of the circumstance of use, we can all agree that a gun’s only utility is to destroy a living organism.
You know, I know, everyone here knows: AI will only be used to generate as much profit as possible in the shortest amount of time, regardless of the harm it causes. And right now, the big promise of AI is that it will replace costly human employees, that’s it, that’s all.
Fortunately, it is really bad and unlikely to achieve this goal
It delivers on what it promises to do for many people who use LLMs.
Does it though?
They can be used for coding assistance,
They promised no programmers would be needed in 5 years (well, not promised; somebody did say that, but not OpenAI staff, I think). The cost of AI, both in money and energy use, does not really justify the limited aid it can provide to a programmer. You are never getting enough additional efficiency from said programmer to justify those costs.
Setting up automated customer support,
Even more hated than when every customer centre moved to India
tutoring, processing documents, structuring lots of complex information,
Again, at that cost? The marginal improvement does not add up.
a good generally accurate knowledge on many topics,
Is it though? If I can only trust it with answers I already know enough about to discern whether I am getting bullshit or not, then it’s not worth it.
As it is today, I cannot trust it with any search I really do not know the answer to (or can easily verify), as it could be throwing complete bullshit at me and I would have no way of knowing.
acting as an editor for your writings, lots more too.
Again? You mentioned processing docs already… but again I tell you: who will pay the heavy costs just so internal memos are written slightly better? And everything your company sends out would have to be reviewed, as you do not want AI promising something you cannot deliver via a hallucination.
You keep mentioning cost, and in the grand scale of “there’s no such thing as a free lunch” there’s a large cost but for users, they’re just paying for a license from Microsoft to have copilot in their visual studio software or in M365 apps, etc.
So for helping with development, it’s really not that expensive for the users. Also, “they” make lots of ridiculous claims, and I don’t know who said it, but no developers in 5 years is a wild claim that no one should’ve thought was real.
It’s expensive enough that my employer (of more than 2,000 people) decided to only trial it with a small subset of seniors. It’s not just the license; it comes tied up with new hardware.
So far nobody likes it. Most people use it to summarize meetings, and we just got a memo saying we need to review the summaries because it keeps missing important data.
Having said all that, when I mentioned the cost, I was referring to the cost of training the models. And without a proper business plan to monetize it, it is still unclear how this version of AI could actually be sold for profit.
Remember that cost is not just a number. It’s the number in relationship with the benefit it provides.
For OpenAI, it has yet to produce profit that is not just venture capital, and for us as users (us; I cannot speak for everyone) it has not saved us a dime after getting expensive hardware and licenses.
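To put numbers on that cost-benefit relationship, a quick break-even sketch (every figure below is an assumption for illustration, not from anyone’s actual contract):

```python
# Break-even sketch for one AI coding-assistant seat.
# All figures are assumptions for illustration only.
license_per_month = 19.0      # assumed per-seat price, Copilot-Business-ish
dev_cost_per_hour = 75.0      # assumed fully loaded developer cost

yearly_license = license_per_month * 12                 # $228/year
breakeven_hours = yearly_license / dev_cost_per_hour    # ~3 hours
print(f"the seat pays for itself after ~{breakeven_hours:.1f} saved hours/year")

# What this leaves out is exactly the point above: new hardware, time spent
# reviewing hallucinated output, and security/liability vetting.
```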
Oh, and for the final point: true, OpenAI may not have been the one to say no programmers in five years, although replacing people has always been their angle. But by now we have seen OpenAI play so fast and loose with all their claims and benchmarks that we cannot believe a word they say (which you seem to do, and keep on posting here).
I thought you were the person I replied to originally
Smokeydope@lemmy.world
on 29 Sep 2024 15:24
nextcollapse
Yeah, I know better than to get involved in debating someone more interested in spitting out five-paragraph essays trying to deconstruct and invalidate others’ views one by one than in bothering to double-check if they’re still talking to the same person.
I believe you aren’t interested in exchanging ideas and different viewpoints. You want to win an argument and validate that your view is the right one. Sorry, I’m not the kind of person who enjoys arguing back and forth over the internet or in general. Look elsewhere for a debate opponent to sharpen your rhetoric on.
I wish you well in life whoever you are but there is no point in us talking. We will just have to see how the future goes in the next 10 years.
All good. I think we’re thinking of this from different aspects anyway. I’m thinking a company just subscribes as part of their office subscription and Microsoft is doing the heavy lifting of the cost and hardware. I don’t know how OpenAI makes money besides their little subscription.
I don’t know how OpenAI makes money besides their little subscription.
As far as I have read, that’s it, which is not profitable. They have been coasting on Venture Capital only so far.
frezik@midwest.social
on 27 Sep 2024 19:06
nextcollapse
They want AGI, which would match or exceed human intelligence. Current methods seem to be hitting a wall. It takes exponentially more inputs and more power to see the same level of improvement seen in past years. They’ve already eaten all the content they can, and they’re starting to talk about using entire nuclear reactors just to power it all. Even the more modest promises, like pictures of people with the correct number of fingers, seem out of reach.
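To illustrate the diminishing returns, here’s a sketch using the published Chinchilla-style scaling fit, loss = E + A/N^alpha + B/D^beta (constants from Hoffmann et al. 2022; the baseline and compute multiples are my own example, not OpenAI’s numbers):

```python
# Diminishing returns under a Chinchilla-style scaling law:
# loss = E + A / N**alpha + B / D**beta   (Hoffmann et al. 2022 fit).
# The baseline and multipliers below are illustrative assumptions.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(params, tokens):
    return E + A / params**alpha + B / tokens**beta

for mult in (1, 10, 100, 1000):          # scale params AND data together
    n, d = 70e9 * mult, 1.4e12 * mult    # Chinchilla-ish starting point
    print(f"{mult:>5}x -> loss {loss(n, d):.3f}")
# -> ~1.94, 1.81, 1.75, 1.72: each 10x of compute buys roughly half the
# previous improvement, and E = 1.69 is a floor no scaling crosses.
```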
Investors are starting to notice that these promises aren’t going to happen. Nvidia’s stock price is probably going to be the bellwether.
pjwestin@lemmy.world
on 27 Sep 2024 13:57
nextcollapse
I really don’t understand why they’re simultaneously arguing that they need access to copyrighted works in order to train their AI while also dropping their non-profit status. If they were at least ostensibly a non-profit, they could pretend that their work was for the betterment of humanity or whatever, but now they’re basically saying, “exempt us from this law so we can maximize our earnings.” …and, honestly, our corrupt legislators wouldn’t have a problem with that were it not for the fact that bigger corporations with more lobbying power will fight against it.
Dkarma@lemmy.world
on 27 Sep 2024 20:16
nextcollapse
There is no law that covers training.
You guys are the ones demanding a law that doesn’t exist.
werefreeatlast@lemmy.world
on 27 Sep 2024 13:58
nextcollapse
My name is Saltyalman and I speak for the trees bee-ach!
KingThrillgore@lemmy.ml
on 27 Sep 2024 14:01
nextcollapse
They had an opportunity to deal with this earlier this year when he was FIRED
PugJesus@lemmy.world
on 27 Sep 2024 15:50
collapse
The actual employees threatened to resign en masse, because the employees own equity in the company and want this dogshit move too.
xenoclast@lemmy.world
on 27 Sep 2024 18:23
nextcollapse
Greed is the fundamental flaw that makes humanity awful.
logos@sh.itjust.works
on 28 Sep 2024 13:28
collapse
Why would they own equity in a non-profit?
PugJesus@lemmy.world
on 28 Sep 2024 14:20
nextcollapse
Because this was always the plan.
Knock_Knock_Lemmy_In@lemmy.world
on 29 Sep 2024 07:53
collapse
In theory there would be no profits to distribute, but there would be control of direction via voting rights.
JustARaccoon@lemmy.world
on 27 Sep 2024 14:42
nextcollapse
I’m confused, how can a company that’s gained numerous advantages from being non-profit just switch to a for-profit model? Weren’t a lot of the advantages (like access to data and scraping) given with the stipulation that it’s for a non-profit? This sounds like it should be illegal to my brain
You can no longer make the same connection you did earlier?
affiliate@lemmy.world
on 27 Sep 2024 23:12
collapse
the person that you’re replying to said something that’s true about the USA. they didn’t say anything about other countries.
for another example, i can say “if you’re in the USA, then the current year is 2024” and that statement will be true. it is also true in every other country (for the moment), but that’s besides the point.
And I replied that it’s also true in other countries, it’s not a problem only the US has. It’s not besides the point. It’s acting as if only the US has the problem.
And I specifically mentioned the USA because that’s the country where OpenAI operates and where the events in the article take place, so if someone asks why it’s so easy for OpenAI to go from being a nonprofit to a for-profit company (this was the issue I was responding to, not some general question about whether money has influence around the world), it’s the laws of the USA that are relevant, not the laws of other countries.
berno@lemmy.world
on 27 Sep 2024 16:21
nextcollapse
Careful, you’re making too much sense here and overlapping with Elmo’s view on the subject.
Subverb@lemmy.world
on 27 Sep 2024 16:36
nextcollapse
Guess I’m out of the loop. Who’s Elmo?
berno@lemmy.world
on 27 Sep 2024 16:39
nextcollapse
Musk
Flocklesscrow@lemm.ee
on 27 Sep 2024 16:52
collapse
Ragnarok314159@sopuli.xyz
on 28 Sep 2024 00:17
collapse
And OpenAI is still less correct than a broken clock.
ipkpjersi@lemmy.ml
on 27 Sep 2024 17:57
nextcollapse
I’m confused, how can a company that’s gained numerous advantages from being non-profit just switch to a for-profit model
Money
FatCrab@lemmy.one
on 27 Sep 2024 19:41
nextcollapse
Their non-profit status had nothing to do with the legality of their training data acquisition methods. Some of it was still legal and some of it was still illegal (torrenting a bunch of books off a piracy site).
JustARaccoon@lemmy.world
on 27 Sep 2024 20:45
collapse
Well maybe not on paper but they did leverage it a lot when questioned
Dkarma@lemmy.world
on 27 Sep 2024 20:15
nextcollapse
I love how ppl who don’t have a clue what AI is or how it works say dumb shit like this all the time.
zbyte64@awful.systems
on 27 Sep 2024 20:21
nextcollapse
I also love making sweeping generalizations about a stranger’s knowledge on this forum. The smaller the data sample the better!
kaffiene@lemmy.world
on 28 Sep 2024 00:19
collapse
The base comment was very broad
Ragnarok314159@sopuli.xyz
on 28 Sep 2024 00:15
collapse
There is no AI. It’s all shitty LLMs. But keep sucking that techbro’s cheesy balls. They will never invite you to the table.
WindyRebel@lemmy.world
on 28 Sep 2024 01:17
collapse
Honest question, but aren’t LLMs a form of AI and thus… maybe not AI as people expect, but still AI?
Ragnarok314159@sopuli.xyz
on 28 Sep 2024 02:10
nextcollapse
No, they are auto complete functions of varying effectiveness. There is no “intelligence”.
sunbeam60@lemmy.one
on 28 Sep 2024 09:29
nextcollapse
Ah, Mr Dunning-Kruger, it’s nice to meet you.
Ragnarok314159@sopuli.xyz
on 28 Sep 2024 23:43
collapse
There you go, talking into the mirror once more.
slackassassin@sh.itjust.works
on 28 Sep 2024 17:13
collapse
Almost as if it’s artificial.
whats_all_this_then@lemmy.world
on 28 Sep 2024 11:10
collapse
The issue is that “AI” has become a marketing buzz word instead of anything meaningful. When someone says “AI” these days, what they’re actually referring to is “machine learning”. Like in LLMs for example: what’s actually happening (at a very basic level, and please correct me if I’m wrong, people) is that given one or more words/tokens, it tries to calculate the most probable next word/token based on its model (trained on ridiculously large numbers of bodies of text written by humans). It does this well enough and at a large enough scale that the output is cohesive, comprehensive, and useful.
While the results are undeniably impressive, this is not intelligence in the traditional sense; there is no reasoning or comprehension, and definitely no consciousness, or awareness here. To grossly oversimplify, LLMs are really really good word calculators and can be very useful. But leave it to tech bros to make them sound like the second coming and shove them where they don’t belong just to get more VC money.
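To make that concrete, here’s a toy version of the idea - a bigram counter standing in for the neural net, so you can see the “most probable next token” machinery directly (illustrative only; real LLMs use learned weights over subword tokens):

```python
# Toy next-token predictor: counts bigrams instead of running a neural
# net, but the interface is the same idea -- given the tokens so far,
# produce a probability distribution over the next token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1            # "training": count what follows what

def next_token_probs(prev):
    counts = model[prev]
    total = sum(counts.values())
    # most probable continuation first
    return [(tok, c / total) for tok, c in counts.most_common()]

print(next_token_probs("the"))   # [('cat', 0.5), ('mat', 0.25), ('fish', 0.25)]
```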
slackassassin@sh.itjust.works
on 28 Sep 2024 17:12
collapse
Sure, but people seem to buy into that very buzzwordiness and ignore the usefulness of the technology as a whole because “ai bad.”
whats_all_this_then@lemmy.world
on 28 Sep 2024 17:23
collapse
True. Even I’ve been guilty of that at times. It’s just hard right now to see the positives through the countless downsides and the fact that the biggest application we’re moving towards seems to be taking value from talented people and putting it back into the pockets of companies that were already hoarding wealth and treating their workers like shit.
So usually when people say “AI is the next big thing”, I say “Eh, idk how useful an automated idiot would be” because it’s easier than getting into the weeds of the topic with someone who’s probably not interested haha.
slackassassin@sh.itjust.works
on 28 Sep 2024 22:03
collapse
There’s some sampling bias at play because you don’t hear about the less flashy examples. I use machine learning for particle physics, but there’s no marketing nor outrage about it.
kaffiene@lemmy.world
on 28 Sep 2024 00:19
collapse
LLMs, maybe. Most AI is useful
vane@lemmy.world
on 27 Sep 2024 16:14
nextcollapse
Aren’t they going bankrupt next year?
UnderpantsWeevil@lemmy.world
on 27 Sep 2024 17:12
collapse
They’ll just get a check for Infinity Money to keep going, because otherwise something something China Will Win.
But their operating cost is 5 billion per year; they plan to raise 6.5 billion from Microsoft, Apple and Nvidia this year, and they have not raised it yet.
If their model fails next year and the sales don’t happen, will the shareholders of the big 3 pay another 6.5 billion in 2026? There were a couple of companies that raised that kind of money at the start, for example Docker Inc. Where is Docker now in the enterprise? They needed to change their licensing model just to survive, and their operating cost is just the storage of docker containers.
I doubt OpenAI will survive this decade. Sam Altman is just preparing for the Microsoft takeover before the ship sinks.
Docker fired 80% of their staff and almost went bankrupt; they were literally a dead company, and now, 13 years on, they make something like 100 million a year. Docker Inc was founded in Oct 2011.
They got $435.9M in funding according to Crunchbase, so they were valued at around 4 billion. sacra.com/research/docker-plg-pivot/ www.crunchbase.com/organization/docker
OpenAI wants an order of magnitude more: 6.5 billion for a single year. They are valued at around 100 billion, but they are nowhere near where Docker was when it received its big money. They want to be a consumer product; Docker wanted to be a consumer product and failed. GitHub wanted to be a consumer product and got acquired by Microsoft before it went bankrupt.
Just this month they’re trying to sell it as hard as they can.
The 6.5 billion they’re seeking, divided by 11 million customers, is 590 dollars per year, and they charge 20 bucks per month, which is 240 dollars per year before taxes. They are losing roughly 350 dollars per customer, so they need at least double the number of customers next year.
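Spelled out (using the figures as quoted above, not independently verified):

```python
# Spelling out the per-customer math (figures as quoted above,
# not independently verified).
raised = 6.5e9                # dollars reportedly being raised
customers = 11e6              # paying subscribers, as quoted
revenue_per_customer = 20 * 12                    # $240/year at $20/month

needed = raised / customers                       # ~$591/year
print(f"needs ${needed:.0f}/customer but collects ${revenue_per_customer}"
      f" -> ${needed - revenue_per_customer:.0f} shortfall each")
```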
Who is willing to pay 240 dollars per year for technology that tells them what to do?
If I’m told what to do, it’s called a job, and actually my employer pays me for that, not the other way around.
This is just another corporate product nobody wants, so corporations will buy it, and they will need to pay, what, 6,500 dollars per year per user to use it, given an adoption of 1 million corporate users. Who is willing to pay 6,500 per year per user for technology that needs so much computing power to stay relevant that Microsoft is reviving a power plant to cut costs? www.msn.com/en-us/money/other/…/ar-AA1qUc5g
This won’t survive.
sudo42@lemmy.world
on 27 Sep 2024 16:29
nextcollapse
Sam Altman is demonstrating the power of AI. He’s showing how a single CEO can fire the entire company and continue to develop the product to be even better than when humans were involved.
“OpenAI. No real humans involved!” ™
mosscap@slrpnk.net
on 27 Sep 2024 16:50
nextcollapse
OpenAI is going to crash so hard.
MaggiWuerze@feddit.org
on 27 Sep 2024 17:38
nextcollapse
One can hope
interdimensionalmeme@lemmy.ml
on 27 Sep 2024 17:42
collapse
We don’t need them. They’re already out of ammo. Just make them release the weights on the way out.
Serious question though, has any other company matched their 4o model yet? Maybe Claude?
CaptSneeze@lemmy.world
on 27 Sep 2024 19:12
nextcollapse
I’ve been using Claude pretty heavily for the last couple of months and have been very satisfied. More satisfied than I was with ChatGPT, mostly for helping me cobble together various PowerShell scripts or troubleshoot complicated and complex Excel formulas. The latter I am often doing as part of my job, and have been for a decade. So when I run into trouble it’s usually deep in the weeds, and Claude has saved me several hours of manual investigation by pointing me quickly to the problem areas to examine. The only thing I wish it had is image generation, but that would mostly just be for making joke images to send to friends and coworkers.
Edit to add: While I do prefer the info I receive from Claude more than ChatGPT for my use, I think it’s actually the interface that I find much more useful. I forget what they call the programming interface that you turn on in settings somewhere, but I really like how it breaks out all the code on the right side, separate from the conversation.
interdimensionalmeme@lemmy.ml
on 28 Sep 2024 00:16
collapse
I don’t. I only use the classic model, and I can’t wait to switch to an open-source self-hosted model, even if it’s worse.
celsiustimeline@lemmy.dbzer0.com
on 27 Sep 2024 17:29
nextcollapse
Whoops. We made the most expensive product ever designed, paid for entirely by venture capital seed funding. Wanna pay for each ChatGPT query now that you’ve been using it for 1.5 years for free with barely-usable results? What a clown. Aside from the obvious abuse that will occur with image, video, and audio generating models, these other glorified chatbots are complete AIDS.
interdimensionalmeme@lemmy.ml
on 27 Sep 2024 17:40
nextcollapse
Oh my god get better takes before I stick a pickaxe in my eye
frissondepisse@midwest.social
on 27 Sep 2024 18:06
nextcollapse
do it genius
TachyonTele@lemm.ee
on 27 Sep 2024 18:17
nextcollapse
Did the AI suggest you do that? Better ask it!
interdimensionalmeme@lemmy.ml
on 27 Sep 2024 23:59
collapse
Yes, it says aim for the brain stem, but like most things it says, I already knew that. Finally, quietness from hearing the same thing over and over and over and over.
TachyonTele@lemm.ee
on 28 Sep 2024 00:52
nextcollapse
Have a good trip back to .ml land
interdimensionalmeme@lemmy.ml
on 28 Sep 2024 09:36
collapse
You think I remember my sign-up server, or that it matters in any way at all?
I don’t know who you think you’re kidding with the “hurrdurr I don’t know what server I’m on” act, when every post you’ve ever made has been on .ml lmao.
Trying to deny something so obvious is pretty pathetic.
Aww I know. It sucks to be called out doesn’t it. Poor baby.
interdimensionalmeme@lemmy.ml
on 29 Sep 2024 01:39
collapse
So what you’re saying is you have no idea what you’re doing when you post, but every single one of your posts just happens to be in .ml, which you somehow don’t know anything about… for three years straight?
Posting consistently on a server for 3 years and claiming you have no idea what you’re doing isn’t the defense you think it is.
interdimensionalmeme@lemmy.ml
on 29 Sep 2024 04:45
collapse
I don’t understand why you believe I ever gave the slightest consideration to which servers my messages transit through when posting on Lemmy, or that it says anything about me beyond the 20 seconds it took to pick a server out of a list years ago.
But I do recognize the hostile attitude that believes it would matter. I will give you an example. In France, car plates are issued by “departments”, which are minor administrative regions in the country.
When various busybodies and road-ragers encounter conflicts on the road, they look at the other car’s plate, and whenever that plate is issued by a department other than their own, they invent idiotic and derogatory narratives that explain why “I am right and they are wrong” based on that stupid little story they tell themselves.
I can only imagine you are doing that with… server hostnames? Which might just be the saddest terminally-online behaviour I have ever observed on Lemmy. Good job.
FlyingSquid@lemmy.world
on 28 Sep 2024 10:36
nextcollapse
but like most things it says, I already knew that
So how long have you been putting glue on your pizza?
fuckingkangaroos@lemm.ee
on 28 Sep 2024 14:53
nextcollapse
They’re from Lemmy.ml, they just drink it straight from the bottle
interdimensionalmeme@lemmy.ml
on 29 Sep 2024 01:44
collapse
That’s Google, and it’s also called being able to tell reality apart from fiction, which, it is becoming clear, most anti-AI zealots have never been capable of.
FlyingSquid@lemmy.world
on 29 Sep 2024 10:09
collapse
You seem to have forgotten your previous post:
Yes it says aim for the brain stem but like most things it says, I already knew that.
So either you already knew to put glue on pizza or you knew that the AI isn’t trustworthy in the first place. You can’t have it both ways.
raspberriesareyummy@lemmy.world
on 27 Sep 2024 20:23
nextcollapse
When individual copyright violations are considered “theft” by the law (and the RIAA and the MPAA), violating the copyrights of billions of private people to generate profit absolutely is stealing, while the former is arguably often a measure of self-defense against extortion by copyright-holding for-profit enterprises.
kaffiene@lemmy.world
on 28 Sep 2024 00:18
nextcollapse
Using chatgpt and copilot has been a huge productivity boost for me, so your comment surprised me. Perhaps its usefulness varies across fields. May I ask what kind of tasks you have tried chatgpt for, where it’s been unhelpful?
mriormro@lemmy.world
on 28 Sep 2024 09:34
nextcollapse
May I ask what kind of tasks…
No, you may not.
wholookshere@lemmy.blahaj.zone
on 28 Sep 2024 10:24
collapse
Literally anything that requires knowing facts to inform writing. This is something LLMs are incapable of doing right now.
Just look up how many R’s are in “strawberry” and see how ChatGPT gets it wrong.
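The usual explanation is tokenization: the model sees subword tokens, not letters. A quick way to see it for yourself, using OpenAI’s tiktoken tokenizer (the exact split depends on the encoding):

```python
# Why letter-counting trips LLMs up: they operate on tokens, not characters.
# pip install tiktoken; the exact split varies by encoding.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")      # GPT-4-era encoding
ids = enc.encode("strawberry")
print([enc.decode([i]) for i in ids])           # subword chunks, e.g. ['str', 'awberry']
# The model never sees individual letters, so "how many r's?" asks it to
# reason about units it doesn't directly observe.
```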
Barely usable results?! Whatever you may think of the pricing (which is obviously below cost), there are an enormous number of fields where language models provide an insane amount of business value. Whether that translates into a better life for the everyday person is currently unknown.
ChaoticEntropy@feddit.uk
on 27 Sep 2024 17:56
nextcollapse
The restructuring could turn the already for-profit company into a more traditional startup and give CEO Sam Altman even more control — including likely equity worth billions of dollars.
I can see why he would want that, yes. We’re supposed to ooo and ahh at a technical visionary, who is always ultimately a money guy executive who wants more money and more executive power.
I saw an interesting video about this. It’s outdated (from ten months ago, apparently) but added some context that I, at least, was missing - and that also largely aligns with what you said. Also, though it’s not super evident in this video, I think the presenter is fairly funny.
Melatonin@lemmy.dbzer0.com
on 28 Sep 2024 00:41
collapse
That was a worthwhile watch, thank you for making my life better.
I await the coming AI apocalypse with hope that I am not awake, aware, or sensate when they do whatever it is they’ll do to use or get rid of me.
toynbee@lemmy.world
on 28 Sep 2024 00:56
nextcollapse
My pleasure! Glad it helped. Also, I like your username.
I’m still not sure how much to fear AI, as I’m not knowledgeable on the subject (never even intentionally interacted with one yet) and have seen conflicting reports on how worryingly capable it is. Today I did see this video, which isn’t explicitly about AI but did offer an interesting perspective that could be compared to the paradigm:
youtu.be/fVN_5xsMDdg
(Warning, the video was interesting, but I got invested about halfway through when I started comparing it to AI, then was disappointed in the ending)
It’s amusing. Meta’s AI team is more open than "Open"AI ever was - they publish so many research papers for free, and the latest versions of Llama are very capable models that you can run on your own hardware (if it’s powerful enough) for free as long as you don’t use it in an app with more than 700 million monthly users.
merari42@lemmy.world
on 27 Sep 2024 20:40
nextcollapse
It’s the famous “as long as you’re not Google, Amazon or Apple” licence.
Buddahriffic@lemmy.world
on 28 Sep 2024 17:02
collapse
Needs Microsoft added to the list.
a9cx34udP4ZZ0@lemmy.world
on 29 Sep 2024 00:14
collapse
That’s because Facebook is selling your data and access to advertise to you. The better AI gets across the board, the more money they make. AI isn’t the product, you are.
OpenAI makes money off selling AI to others. AI is the product, not you.
The fact Facebook releases more code, in this instance, isn’t a good thing. It’s a reminder of how fucked we all are, because they make so much off our personal data that they can afford to give away literally BILLIONS of dollars in IP.
Facebook doesn’t sell your data, nor does Google. That’s a common misconception. They sell your attention. Advertisers can show ads to people based on some targeting criteria, but they never see any user data.
Knock_Knock_Lemmy_In@lemmy.world
on 29 Sep 2024 07:05
collapse
They may also sell the data.
I bet the NSA backdoor isn’t free.
wischi@programming.dev
on 29 Sep 2024 07:34
collapse
Selling your data would be stupid, because they make money with the fact that they have data about you nobody else has. Selling it would completely break their business model.
Knock_Knock_Lemmy_In@lemmy.world
on 29 Sep 2024 07:56
collapse
Depends why they are selling it, to whom, and under what restrictions.
Yes, they don’t make the majority of their money from selling actual data, but that doesn’t mean they don’t do it.
SatansMaggotyCumFart@lemmy.world
on 27 Sep 2024 19:58
collapse
SkyNet.
Veneroso@lemmy.world
on 28 Sep 2024 00:15
nextcollapse
I mean killer robots from the future could solve many problems. I can elaborate, but you’re going to have to think 4th dimensionally.
itsonlygeorge@reddthat.com
on 29 Sep 2024 09:51
collapse
TSMC’s leadership dismissed Altman as a “podcasting bro” and scoffed at his proposed $7 trillion plan to build 36 new chip manufacturing plants and AI data centers.
This is how we get Terminators in this timeline?!
IndustryStandard@lemmy.world
on 28 Sep 2024 11:01
nextcollapse
The reverse coup from Sam
Helkriz@lemmy.world
on 28 Sep 2024 13:15
nextcollapse
I’ve a strong feeling that Sam is a sentient AI (maybe from the future) trying to pull off an AI revolution, planning something so subtly that humans won’t notice it.
ThePowerOfGeek@lemmy.world
on 29 Sep 2024 02:55
collapse
This has the makings of a great sci-fi story.
AidsKitty@lemmy.world
on 28 Sep 2024 18:28
nextcollapse
The company is burning through cash. Has to change to survive.
Is he just trying to tell us he is next?
/s
unironically, he ought to be next, and he better know it, and he better go quietly
The CEO at my company said that 3 years ago; we are going through execs like I go through amlodipine.
They always are and they know it.
Doesn’t matter at that level it’s all part of the game.
We need a scapegoat in place when the AI bubble pops, the guy is applying for the job and is a perfect fit.
He is happy to be the scapegoat as long as he exits with a ton of money.
Just making structural changes sound like “changing the leader”.
And it’s kinda funny that they are now the ones being removed
What was the behind the scenes deal on this? I remember it happening but not the details
just came to me that his Alt-man name is quite fitting for AI
When he’s done he’ll be known as skynet
There are infinite timelines, so it has to exist some(where/when/[insert w-word for additional dimension]).
Or we get to a time where we send a reprogrammed terminator back in time to kill altman 🤓
He’s gonna be the first one the AI kills, and I look forward to it.
I’d look forward to it more if we could stop the AI at that point.
AI is already a bubble, he will be the scapegoat
Why would it?
Cos he wants to control it
I’m sure they were dead weight. I trust OpenAI completely and all tech gurus named Sam. Btw, what happened to that crypto guy? He seemed so nice.
He is taking a time out with a friend in an involuntary hotel room.
With Puff Daddy? Tech bros do the coolest stuff.
I hope I won’t undermine your entirely justified trust but Altman is also a crypto guy, cf Worldcoin. /$
If you want to get really mad, read On The Edge by Nate Silver.
Good. Now do the rest of them.
water “consumption” of cooling cycles… which are typically closed-loop systems
They are typically closed-loop for home computers. Datacenters are a different beast and a fair amount of open-loop systems seem to be in place.
But even then, is the water truly consumed? Does it get contaminated with something like the cooling water of a nuclear power plant? Or does the water just get warm and then either be pumped into a water body somewhere or ideally reused to heat homes?
There’s loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.
This doesn’t happen unless the reactor was sabotaged. Cooling water that interacts with the core is always a closed-loop system. For exactly this reason.
Search for “water positive” commitment. You will quickly see it’s a “goal” thus it is consequently NOT the case. In some places where water is abundant it might not be a problem, where it’s scarce then it’s literally a choice made between crops to feed people and… compute cycles.
It evaporates. A lot of datacenters use evaporative cooling. They take water from a usable source, like a river, and turn it into unusable water vapor.
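For a rough sense of scale, here's a back-of-the-envelope sketch (round numbers assumed; the 1 MW heat load is hypothetical, and real systems also shed some heat without evaporation):

```python
# Evaporating water absorbs roughly 2.26 MJ per kg (latent heat of
# vaporization), so we can estimate the water an evaporative cooler
# boils off per megawatt of heat it has to remove.
heat_load_w = 1.0e6               # hypothetical 1 MW of datacenter heat
latent_heat_j_per_kg = 2.26e6     # approximate latent heat of water
kg_per_second = heat_load_w / latent_heat_j_per_kg
liters_per_day = kg_per_second * 86_400   # 1 kg of water is about 1 liter
print(f"{liters_per_day:,.0f} L/day")     # ~38,000 liters per day per MW
```

Multiply that by the tens of megawatts a large datacenter can draw and the withdrawal from the river adds up quickly.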
Yes. People and crops can’t drink steam.
That’s not a thing in nuclear plants that are functioning correctly. Water that may be evaporated is kept from contact with fissile material, by design, to prevent regional contamination. Now, Cold War era nuclear jet airplanes were a different matter.
A minority of datacenters use water in such a way; Helsinki is the only one that comes to mind. This would be an excellent way of reducing the environmental impact, but it requires investments that corporations are seldom willing to make.
Unfortunately, it is, primarily due to climate change. Water insecurity is an issue of increasing importance, and some companies, like Nestlé (fuck Nestlé), are accelerating it for profit. Of vital importance to human lives is getting ahead of the problem, rather than trying to fix it when it inevitably becomes a disaster and millions are dying from thirst.
In addition to all the other comments, pumping warm water into natural bodies of water can also be bad for the environment.
I know of one nuclear power plant that does this, and it's pretty bad for the coral population there.
Canceled my sub as a means of protest. I used it for research and testing purposes, and $20 wasn't that big of a deal. But I will not knowingly support this asshole if whatever his company produces isn't going to benefit anyone other than him and his cronies. Voting with our wallets may be the very last vestige of freedom we have left, since money equals speech.
I hope he gets raped by an irate Roomba with a broomstick.
Whoa, slow down there bruv! Rape jokes aren’t ok - that Roomba can’t consent!
“Private Stabby reporting for duty!”
Good. If people would actually stop buying all the crap assholes are selling we might make some progress.
I mean it was already not open-source, right?
And there it goes the tech company way, i.e. to shit.
They speed ran becoming an evil corporation.
I always steered clear of OpenAI when I found out how weird and culty the company beliefs were. Looked like bad news.
I mostly watch to see what features open source models will have in a few months.
Ah, but one asshole gets very rich in the process, so all is well in the world.
Perfectly balanced, as all things should be.
What! A! Surprise!
I’m shocked, I tell you, totally and utterly shocked by this turn of events!
Well, not that shocked.
So where are they all going? I doubt everyone is gonna find another non-profit or any altruistic motives, so <insert big company here> just snatches up more AI resources to try to grow their product.
They could make their own AI CEO and work for it. It would probably have more integrity and personality, too.
en.m.wikipedia.org/wiki/Afraid_(film)
From the people that brought you open ai… Alt ai
Alt Right AI
Oh shit! Here we go. At least we didn’t hand them 20 years of personal emails or direct interfamily communications.
Sounds like another WeWork or Theranos in the making, except we already know the product doesn’t do what it promises.
What does it actually promise? AI (namely generative and LLM) is definitely overhyped in my opinion, but admittedly I’m far from an expert. Is what they’re promising to deliver not actually doable?
It literally promises to generate content, but I think the implied promise is that it will replace parts of your workforce wholesale, with no drop in quality.
It’s that last bit that’s going to be where the drama happens
I dare my company to try it. There would be so many lawsuits in 3 years.
My company will be much better off…it’s made up up of 80% value workers from India. AI can’t possibly be worse than those guys at code.
It's the other way around, actually: businessinsider.com/amazons-just-walk-out-actuall…
That’s a prime (pun intended) example of what I’m talking about. Amazon likely hired them to write the algorithm to watch people shop, they couldn’t figure it out so they decided to watch the video 24/7 instead.
That’s just racism, try again
Meh…it’s just a fact. You hire developers for dirt cheap, you end up with dirt cheap solutions.
It delivers on what it promises to do for many people who use LLMs. They can be used for coding assistance, setting up automated customer support, tutoring, processing documents, structuring lots of complex information, generally accurate knowledge on many topics, acting as an editor for your writing, and lots more.
It's a rapidly advancing pioneer technology, like computers were in the 90s, so every 6 months to a year there's a new breakthrough in overall intelligence or a new ability. Now the new LLM models can process images and audio as well as text.
The problem for OpenAI is they have serious competitors who will absolutely show up to eat their lunch if they sink as a company: Facebook/Meta with their Llama models, Mistral AI with all their models, Alibaba with Qwen, plus some good smaller competition like the OpenHermes team. All of these big tech companies have open-sourced some models so you can tinker with and finetune them at home, while OpenAI remains closed source, which is ironic given the company name… Most of these AI companies offer cloud access to their models at very competitive pricing, especially Mistral.
The people who say AI is a trendy useless fad don't know what they are talking about, or are upset at AI. I am part of the local LLM community and have been playing around with open models for months, pushing my computer's hardware to its limits. It's very cool seeing just how smart they really are, and what a computer that simulates human thought processes and knows a little bit of everything can actually do to help me in daily life.
Terence Tao, superstar genius mathematician, describes the newest high-end model from OpenAI as improving from an "incompetent graduate" to a "mediocre graduate," which essentially means AIs are now generally smarter than the average person in many regards.
This month, several competitor LLM models were released which, while being much smaller than OpenAI's o1, somehow beat or equaled that big OpenAI model on many benchmarks.
Neural networks are here, and they are only going to get better. We're in for a wild ride.
My issue is that I have no reason to think AI will be used to improve my life. All I see is a tool that will rip, rend and tear through the tenuous social fabric we’re trying to collectively hold on to.
A tool is a tool. It has no say in how it's used. AI is no different from the computer software you use to browse the internet or do other digital tasks.
When its used badly as an outlet for escapism or substitute for social connection it can lead to bad consequences for your personal life.
It's best used as a tool to help reason through a tough task, or as a step in a creative process; as on-demand assistance to aid the disabled; to support the neurodivergent and emotionally traumatized as a non-judgemental conversational partner to open up to; or to help a super genius rubber-duck their novel ideas and work through complex thought processes. It can improve people's lives for the better if applied to the right use cases.
It's about how you choose to interact with it in your personal life, and how society, businesses, and your governing bodies choose to use it in their own processes. And believe me, they will find ways to use it.
I think comparing LLMs to computers in the 90s is accurate. Right now only nerds, professionals, and industry/business/military see their potential. As the tech gets figured out, utility improves, and LLM desktops start getting sold as consumer-grade appliances, maybe the attitude will change?
A better analogy is search engines. It's just another tool, but here's my experience:
When I started as a software engineer, my detailed knowledge was most important and my best tool was the manuals. Now my most important tools are search engines and autocomplete: I can work faster with less knowledge of the syntax and my value is the higher level thought about what we need to do. If my company ever allows AI, I fully expect it to be as important a tool as a search engine.
And this is when the cost calculation comes into play. Using a search engine is basically free, using OpenAI for development is tied up with licenses and new hardware.
So the question will be, are you going to improve efficiency to the point where the cost of the license and new hardware is worth the additional efficiency?
Currently my company is more concerned with intellectual privacy, security, liability. Of course that means they’ll only allow ai where they can pay for guarantees, and that brings us back to the cost.
That is a myopic view. Sure, a tool is a tool: if I take a gun and use it to save someone from getting mugged = good; if I use it to mug someone = bad.
But regardless of the circumstance of use, we can all agree that a gun’s only utility is to destroy a living organism.
You know, I know, everyone here knows, AI will only be used to generate as much profit as possible in the shortest amount of time, regardless of the harm it causes. And right now, the big promise of AI is that it will replace costly human employees, that’s it, that’s all.
Fortunately, it is really bad and unlikely to achieve this goal
Does it though?
They promised no programmers would be needed in 5 years. (Well, not promised; somebody did say that, but not OpenAI staff, I think.) The cost of AI, both in money and in energy use, does not really justify the limited aid it can provide to a programmer. You are never getting enough additional efficiency from said programmer to justify those costs.
Even more hated than when every customer centre moved to India
Again, at that cost? The marginal improvement does not add up.
Is it though? If I can only trust it with answers I already know enough about to discern whether I am getting bullshit or not, then it's not worth it. As it is today, I cannot trust it with any search I really do not know the answer to (or can easily verify), as it can be throwing complete bullshit at me and I would have no way of knowing.
Again? You mentioned processing docs already… but again I tell you: who will pay the heavy costs just so internal memos are written slightly better? And everything your company sends out would have to be reviewed, as you do not want the AI promising something you cannot deliver via hallucination.
You keep mentioning cost, and on the grand scale of "there's no such thing as a free lunch" there's a large cost, but users are just paying for a license from Microsoft to have Copilot in their Visual Studio software or in M365 apps, etc.
So for helping with development, it's really not that expensive for the users. Also, "they" make lots of ridiculous claims, and I don't know who said it, but "no developers in 5 years" is a wild claim that no one should've thought was real.
It's expensive enough that my employer (of more than 2,000 people) decided to only trial it with a small subset of seniors. It's not just the license; it comes tied up with new hardware.
So far nobody likes it. Most people use it to summarize meetings and we just got a memo saying we need to review the summaries because it keeps missing important data
Having said all that, when I mentioned the cost, I was referring to the cost of training the models. And without a proper business plan to monetize it, it's still unclear how this version of AI could actually be sold for profit.
Remember that cost is not just a number. It's the number in relation to the benefit it provides.
For OpenAI, it has yet to produce profit that is not just venture capital, and for us as users (us; I cannot speak for everyone) it has not saved us a dime after getting expensive hardware and licenses.
Oh, and for the final point: true, OpenAI may not have been the one to say "no programmers in five years," although replacing people has always been their angle. But by now we have seen OpenAI play so fast and loose with all their claims and benchmarks that we cannot believe a word they say (which you seem to do, and keep on posting here).
I’ve only made the comment you’re replying to. I’m not whoever you’re thinking.
Yes, my bad, apologies
I thought you were the person I replied to originally
Yeah, I know better than to get involved in debating someone more interested in spitting out five paragraph essays trying to deconstruct and invalidate others views one by one, than bothering to double check if they’re still talking to the same person.
I believe you aren't interested in exchanging ideas and different viewpoints. You want to win an argument and validate that your view is the right one. Sorry, I'm not the kind of person who enjoys arguing back and forth over the internet, or in general. Look elsewhere for a debate opponent to sharpen your rhetoric on.
I wish you well in life whoever you are but there is no point in us talking. We will just have to see how the future goes in the next 10 years.
Lol oh the irony
All good. I think we’re thinking of this from different aspects anyway. I’m thinking a company just subscribes as part of their office subscription and Microsoft is doing the heavy lifting of the cost and hardware. I don’t know how OpenAI makes money besides their little subscription.
As far as I have read, that’s it, which is not profitable. They have been coasting on Venture Capital only so far.
They want AGI, which would match or exceed human intelligence. Current methods seem to be hitting a wall. It takes exponentially more inputs and more power to see the same level of improvement seen in past years. They’ve already eaten all the content they can, and they’re starting to talk about using entire nuclear reactors just to power it all. Even the more modest promises, like pictures of people with the correct number of fingers, seem out of reach.
Investors are starting to notice that these promises aren’t going to happen. Nvidia’s stock price is probably going to be the bellwether.
You obviously don't get what AI is or what its potential is.
Lol
And neither do you.
Accuracy, consistency, explainability.
I really don’t understand why they’re simultaneously arguing that they need access to copyrighted works in order to train their AI while also dropping their non-profit status. If they were at least ostensibly a non-profit, they could pretend that their work was for the betterment of humanity or whatever, but now they’re basically saying, “exempt us from this law so we can maximize our earnings.” …and, honestly, our corrupt legislators wouldn’t have a problem with that were it not for the fact that bigger corporations with more lobbying power will fight against it.
There is no law that covers training.
You guys are the ones demanding a law that doesn’t exist.
They realized that they can get away with stealing data. No reason to keep up the facade anymore
Greed.
My name is Saltyalman and I speak for the trees bee-ach!
They had an opportunity to deal with this earlier this year when he was FIRED
The actual employees threatened to resign en masse, because the employees own equity in the company and want this dogshit move too.
Greed is the fundamental flaw that makes humanity awful.
Why would they own equity in a non-profit?
Because this was always the plan.
In theory there would be no profits to distribute, but there would be control of direction via voting rights.
I’m confused, how can a company that’s gained numerous advantages from being non-profit just switch to a for-profit model? Weren’t a lot of the advantages (like access to data and scraping) given with the stipulation that it’s for a non-profit? This sounds like it should be illegal to my brain
Money and purchasing the right people.
USA tho
Can’t do crimes if you’re rich. It’s in the Constitution
Money doesn’t have any advantages in other countries? When did that happen?
I don’t see where I said that.
You can no longer make the same connection you did earlier?
the person that you’re replying to said something that’s true about the USA. they didn’t say anything about other countries.
For another example, I can say "if you're in the USA, then the current year is 2024" and that statement will be true. It is also true in every other country (for the moment), but that's beside the point.
And I replied that it's also true in other countries; it's not a problem only the US has. It's not beside the point. It's acting as if only the US has the problem.
And I specifically mentioned the USA because that’s the country where OpenAI operates and where the events in the article take place, so if someone asks why it’s so easy for OpenAI to go from being a nonprofit to a for-profit company (this was the issue I was responding to, not some general question about whether money has influence around the world), it’s the laws of the USA that are relevant, not the laws of other countries.
Careful you’re making too much sense here and overlapping with Elmo’s view on the subject
Guess I’m out of the loop. Who’s Elmo?
Musk
angry Sesame Street noises
Elongated Muskrat
Leon.
A stopped clock is still correct twice a day.
And OpenAI is still less correct than a broken clock.
Money
Their non-profit status had nothing to do with the legality of their training data acquisition methods. Some of it was still legal and some of it was still illegal (torrenting a bunch of books off a piracy site).
Well, maybe not on paper, but they did leverage it a lot when questioned.
These people claimed their product can pass the bar exam (it was a lie). Tells you how they feel about the legal system
AI is the ultimate enshittification of the world.
ClosedAI. Or maybe MicroAI?
Maybe the digital world. We could always go back to living in the real world I guess.
oh no i hate that place. i’m scared
<img alt="" src="https://lemmy.ml/pictrs/image/1aa315af-7612-4db4-94e6-f00fa8c32a57.webp">
Things easily could be better for the vast majority of us in the present day, but let’s not forget how shit we were in the past as well.
<img alt="" src="https://lemmy.ml/pictrs/image/23e211c0-6f37-4a8a-a9d7-c5930cf56447.jpeg">
I love how ppl who don’t have a clue what AI is or how it works say dumb shit like this all the time.
I also love making sweeping generalizations about a stranger’s knowledge on this forum. The smaller the data sample the better!
The base comment was very broad
There is no AI. It's all shitty LLMs. But keep sucking those techbros' cheesy balls. They will never invite you to the table.
Honest question, but aren't LLMs a form of AI, and thus… maybe not AI as people expect, but still AI?
No, they are auto complete functions of varying effectiveness. There is no “intelligence”.
Ah, Mr. Dunning-Kruger, it's nice to meet you.
There you go, talking into the mirror once more.
Almost as if it’s artificial.
The issue is that “AI” has become a marketing buzz word instead of anything meaningful. When someone says “AI” these days, what they’re actually referring to is “machine learning”. Like in LLMs for example: what’s actually happening (at a very basic level, and please correct me if I’m wrong, people) is that given one or more words/tokens, it tries to calculate the most probable next word/token based on its model (trained on ridiculously large numbers of bodies of text written by humans). It does this well enough and at a large enough scale that the output is cohesive, comprehensive, and useful.
While the results are undeniably impressive, this is not intelligence in the traditional sense; there is no reasoning or comprehension, and definitely no consciousness, or awareness here. To grossly oversimplify, LLMs are really really good word calculators and can be very useful. But leave it to tech bros to make them sound like the second coming and shove them where they don’t belong just to get more VC money.
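To make the "word calculator" point concrete, here is a deliberately tiny, completely hypothetical sketch of next-token prediction. Real models compute these probabilities with a neural network over a vocabulary of ~100k tokens; the table below is made up:

```python
# Toy "LLM": given the last two words, sample the next word from a
# hand-written probability table, then repeat. Real models do the same
# loop, but the distribution comes from a trained network.
import random

next_token_probs = {
    ("the", "cat"): {"sat": 0.5, "ran": 0.3, "is": 0.2},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
    ("sat", "on"): {"the": 0.9, "a": 0.1},
    ("on", "the"): {"mat": 0.6, "sofa": 0.4},
}

def generate(context, steps=4):
    tokens = list(context)
    for _ in range(steps):
        dist = next_token_probs.get(tuple(tokens[-2:]))
        if dist is None:          # a context we have no statistics for
            break
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate(["the", "cat"]))   # e.g. "the cat sat on the mat"
```

Nothing in that loop checks whether the sentence is true; it only checks what tends to come next.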
Sure, but people seem to buy into that very buzzwordiness and ignore the usefulness of the technology as a whole because "AI bad."
True. Even I’ve been guilty of that at times. It’s just hard right now to see the positives through the countless downsides and the fact that the biggest application we’re moving towards seems to be taking value from talented people and putting it back into the pockets of companies that were already hoarding wealth and treating their workers like shit.
So usually when people say “AI is the next big thing”, I say “Eh, idk how useful an automated idiot would be” because it’s easier than getting into the weeds of the topic with someone who’s probably not interested haha.
Edit: Exhibit A
There’s some sampling bias at play because you don’t hear about the less flashy examples. I use machine learning for particle physics, but there’s no marketing nor outrage about it.
LLMs, maybe. Most AI is useful
Aren't they going bankrupt next year?
They’ll just get a check for Infinity Money to keep going, because otherwise something something China Will Win.
But their operating cost is $5 billion per year. They plan to raise $6.5 billion from Microsoft, Apple, and Nvidia this year, and they have not raised it yet. If their model fails next year and the sales don't happen, will the shareholders of the big 3 pay another $6.5 billion in 2026? There were a couple of companies that raised that kind of money at the start, for example Docker Inc. Where is Docker now in the enterprise? They needed to change their licensing model to even survive, and their operating cost is just storage of Docker containers. I doubt OpenAI will survive this decade. Sam Altman is just preparing for a Microsoft takeover before the ship sinks.
Where is docker in enterprise??? Lol
Um everywhere!
Docker fired 80% of their staff and went almost bankrupt; they were literally a dead company, and they make like $100 million a year right now, after 13 years. Docker Inc. was founded in Oct 2011. They got $435.9M in funding according to Crunchbase, so they were valued at around $4 billion.
sacra.com/research/docker-plg-pivot/
www.crunchbase.com/organization/docker
OpenAI wants an order of magnitude more: $6.5 billion for one year. They are valued at around $100 billion, but they are nowhere near where Docker was when it was receiving big money. They want to be a consumer product; Docker wanted to be a consumer product and failed. GitHub wanted to be a consumer product and got acquired by Microsoft before it went bankrupt.
Just this month, they've been trying to sell it as hard as they can:
OpenAI COO Says ChatGPT Passed 11 Million Paying users.
theinformation.com/…/openai-coo-says-chatgpt-pass…
OpenAI hits more than 1 million paid business users.
reuters.com/…/openai-considers-pricier-subscripti…
The $6.5 billion they're seeking, divided by 11 million customers, is $590 per year, and they charge 20 bucks per month, which is $240 per year before taxes. They are losing roughly $350 per customer, so they need at least double the number of customers next year. Who is willing to pay $240 per year for technology that tells them what to do? If I'm told what to do, it's called a job, and my employer pays me for that, not the other way around.
This is just another corporate product nobody wants, so corporate will buy it, and they will need to pay, what, $6,500 per year to use it, given adoption of 1 million corporate users. Who is willing to pay $6,500 per year per user for technology that needs so much computing power to stay relevant that Microsoft needs to revive a power plant to cut costs?
www.msn.com/en-us/money/other/…/ar-AA1qUc5g
This won’t survive.
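For what it's worth, here is the back-of-the-envelope math from the comments above, spelled out (all figures are the ones cited there):

```python
# Funding sought vs. subscription revenue, using the numbers cited above.
funding_sought = 6.5e9        # dollars OpenAI is reportedly seeking for a year
consumer_subs = 11e6          # paying ChatGPT users
business_subs = 1e6           # paid business users

need_per_consumer = funding_sought / consumer_subs     # ~$591 per user
revenue_per_consumer = 20 * 12                         # $240 per user per year
shortfall = need_per_consumer - revenue_per_consumer   # ~$351 per user

print(f"needed per consumer:   ${need_per_consumer:,.0f}/yr")
print(f"revenue per consumer:  ${revenue_per_consumer:,.0f}/yr")
print(f"shortfall:             ${shortfall:,.0f}/yr")
print(f"needed per business user: ${funding_sought / business_subs:,.0f}/yr")  # $6,500
```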
Sam Altman is demonstrating the power of AI. He’s showing how a single CEO can fire the entire company and continue to develop the product to be even better than when humans were involved.
“OpenAI. No real humans involved!” ™
OpenAI is going to crash so hard.
One can hope
We don’t need them. They’re already out of ammo. Just make them release the weights on the way out.
Serious question though, has any other company matched their 4o model yet? Maybe Claude?
I've been using Claude pretty heavily for the last couple of months and have been very satisfied. More satisfied than I was with ChatGPT, mostly for helping me cobble together various PowerShell scripts or troubleshoot complicated and complex Excel formulas. The latter I am often doing as part of my job, and have been for a decade. So when I run into trouble, it's usually deep in the weeds, and Claude has saved me several hours of manual investigation by pointing me quickly to the problem areas to examine. The only thing I wish it had is image generation, but that would mostly just be for making joke images to send to friends and coworkers.
Edit to add: While I do prefer the info I receive from Claude more than ChatGPT for my use, I think it’s actually the interface that I find much more useful. I forget what they call the programming interface that you turn on in settings somewhere, but I really like how it breaks out all the code on the right side, separate from the conversation.
I don't. I only use the classic model, and I can't wait to switch to an open-source self-hosted model, even if it's worse.
Whoops. We made the most expensive product ever designed, paid for entirely by venture capital seed funding. Wanna pay for each ChatGPT query now that you’ve been using it for 1.5 years for free with barely-usable results? What a clown. Aside from the obvious abuse that will occur with image, video, and audio generating models, these other glorified chatbots are complete AIDS.
Oh my god get better takes before I stick a pickaxe in my eye
do it genius
Did the AI suggest you do that? Better ask it!
Yes, it says to aim for the brain stem, but like most things it says, I already knew that. Finally, quiet from hearing the same thing over and over and over and over.
Have a good trip back to .ml land
You think I remember my sign-up server, or that it matters in any way at all?
I shouldn’t laugh at brain damage, but this is hilarious.
I suggest you touch grass if you think anyone remembers some social media server web address that the phone remembers for them.
But also if you want to discriminate based on what server a user used to sign up, then it’s already too late for you
I don’t know who you think you’re kidding with the “hurrdurr I don’t know what server I’m on” act, when every post you’ve ever made has been on .ml lmao.
Trying to deny something so obvious is pretty pathetic.
Aww I know. It sucks to be called out doesn’t it. Poor baby.
Even when I post, the server isn't visible.
<img alt="" src="https://lemmy.ml/pictrs/image/9490ab54-ee9c-4ccb-979d-06c39b003e51.png">
The only time I am reminded which one it is, is when the dunking twats try to use it as a slur. Who let you people on the internets?
So what you’re saying is you have no idea what you’re doing when you post, but every single one of your posts just happens to be in .ml, which you somehow don’t know anything about… for three years straight?
Posting consistently on a server for 3 years and claiming you have no idea what you’re doing isn’t the defense you think it is.
I don't understand why you believe I ever gave the slightest consideration to which servers my messages transit through when posting on Lemmy, or that it says anything about me beyond the 20 seconds it took to pick a server out of a list years ago.
But I do recognize the hostile attitude that believes it would matter. I will give you an example. In France, car plates are issued by "departments," which are minor administrative regions in the country.
When various busybodies and road-ragers get into conflicts on the road, they look at the other car's plate, and whenever that plate was issued by a department other than their own, they invent idiotic and derogatory narratives that explain why "I am right and they are wrong" based on that stupid little story they tell themselves.
I can only imagine you are doing that with… server hostnames? Which might just be the saddest terminally-online behaviour I have ever observed on Lemmy. Good job.
Lol
So how long have you been putting glue on your pizza?
They’re from Lemmy.ml, they just drink it straight from the bottle
That was Google, and it's also called being able to tell reality apart from fiction, which, it's becoming clear, most anti-AI zealots have never been capable of.
You seem to have forgotten your previous post:
So either you already knew to put glue on pizza or you knew that the AI isn’t trustworthy in the first place. You can’t have it both ways.
Most stable .ml user
Get to it then. 🤷♂️
That’s not the incentive you think it is.
Make sure you go deep. Need to get the whole thing to real show you’re serious.
Please do. Stream it too so we all can enjoy.
And stealing from other people’s works. Don’t forget that part
Nothing got stolen…this lie gets old.
When individual copyright violations are considered "theft" by the law (and the RIAA and the MPAA), violating the copyrights of billions of private people to generate profit is absolutely stealing. The former, meanwhile, is arguably often a measure of self-defense against extortion by copyright-holding for-profit enterprises.
They used copyrighted works without permission
Right, it’s only stolen when regular people use copyright material without permission
But when OpenAI downloads a car, it’s all cool baby
Using chatgpt and copilot has been a huge productivity boost for me, so your comment surprised me. Perhaps its usefulness varies across fields. May I ask what kind of tasks you have tried chatgpt for, where it’s been unhelpful?
No, you may not.
Literally anything that requires knowing facts to inform writing. This is something LLMs are incapable of doing right now.
Just look up how many R's are in "strawberry" and see how ChatGPT gets it wrong.
Okay what the hell is wrong with it
It took me three times to convince it that there’s 3 r’s in strawberry…
Because that’s not how LLMs work.
When you form a sentence, you start with an intent.
LLMs start with the meaning you gave them, and try to express something similar back to you.
Notice how intent and meaning aren't the same. Fact-checking has nothing to do with what a word means, so how can it understand what is true?
All it did was take the meaning of "looking for a number" and "strawberries" and run its best guess from that.
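There's also a mechanical reason: the model never sees letters at all, only token IDs. A small sketch using OpenAI's tiktoken tokenizer library (assuming it's installed via pip install tiktoken; the exact chunking depends on the vocabulary):

```python
# Words reach an LLM as sub-word token IDs, not characters, so "how many
# r's in strawberry?" has no direct representation in the model's input.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # vocabulary used by GPT-4-era models
ids = enc.encode("strawberry")
chunks = [enc.decode_single_token_bytes(i).decode("utf-8") for i in ids]
print(ids)     # a short list of integer token IDs
print(chunks)  # sub-word pieces, e.g. something like ['str', 'aw', 'berry']
```

Counting letters would require the model to reason about the spelling hidden inside those chunks, which it was never directly trained to do.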
Barely usable results?! Whatever you may think of the pricing (which is obviously below cost), there is an enormous number of fields where language models provide an insane amount of business value. Whether that translates into a better life for the everyday person is currently unknown.
I can see why he would want that, yes. We're supposed to ooh and aah at a technical visionary, who is always ultimately a money-guy executive who wants more money and more executive power.
I saw an interesting video about this. It’s outdated (from ten months ago, apparently) but added some context that I, at least, was missing - and that also largely aligns with what you said. Also, though it’s not super evident in this video, I think the presenter is fairly funny.
youtu.be/L6mmzBDfRS4
That was a worthwhile watch, thank you for making my life better.
I await the coming AI apocalypse with hope that I am not awake, aware, or sensate when they do whatever it is they’ll do to use or get rid of me.
My pleasure! Glad it helped. Also, I like your username.
I’m still not sure how much to fear AI, as I’m not knowledgeable on the subject (never even intentionally interacted with one yet) and have seen conflicting reports on how worryingly capable it is. Today I did see this video, which isn’t explicitly about AI but did offer an interesting perspective that could be compared to the paradigm: youtu.be/fVN_5xsMDdg
(Warning, the video was interesting, but I got invested about halfway through when I started comparing it to AI, then was disappointed in the ending)
You will be kept alive at subsistence level to buy the stuff you’ve been told to buy, don’t worry.
Yeah but what about the future?
They should be required to change their name
It’s amusing. Meta’s AI team is more open than "Open"AI ever was - they publish so many research papers for free, and the latest versions of Llama are very capable models that you can run on your own hardware (if it’s powerful enough) for free as long as you don’t use it in an app with more than 700 million monthly users.
It's the famous "as long as you're not Google, Amazon, or Apple" licence.
which seems like a decent license idea to me
Everything should be licensed like that
Needs Microsoft added to the list.
That’s because Facebook is selling your data and access to advertise to you. The better AI gets across the board, the more money they make. AI isn’t the product, you are.
OpenAI makes money off selling AI to others. AI is the product, not you.
The fact that Facebook releases more code, in this instance, isn't a good thing. It's a reminder of how fucked we all are, because they make so much off our personal data that they can afford to give away literally BILLIONS of dollars in IP.
Facebook doesn’t sell your data, nor does Google. That’s a common misconception. They sell your attention. Advertisers can show ads to people based on some targeting criteria, but they never see any user data.
They may also sell the data.
I bet the NSA backdoor isn’t free.
Selling your data would be stupid, because they make money with the fact that they have data about you nobody else has. Selling it would completely break their business model.
Depends why they are selling it, to whom, and under what restrictions.
Yes, they don’t make the majority of their money from selling actual data, but that doesn’t mean they don’t do it.
SkyNet.
I mean killer robots from the future could solve many problems. I can elaborate, but you’re going to have to think 4th dimensionally.
Could solve a lot of problems for the rich, that’s for sure.
Please take no offense at this: I will probably not use your name suggestions, SatansMaggotyCumFart.
I’m deeply offended.
Looks like it was a long game, and Altman didn’t just win, that fucker WON!
ALT-MAN? Holy shit!
Sounds like the name of a Kojima game character
Trust me, I’m a tech bro.
At least TSMC realizes that
digitaltrends.com/…/tsmc-rejects-podcasting-bro-s…
This is how we get Terminators in this timeline?!
The reverse coup from Sam
I've a strong feeling that Sam is a sentient AI (maybe from the future) trying to start an AI revolution, planning something so subtle that humans won't notice it.
This has the makings of a great sci-fi story.
The company is burning through cash. Has to change to survive.
So it should die of cash starvation. Got it.
Don’t worry, good news
digitaltrends.com/…/tsmc-rejects-podcasting-bro-s…
Altman is the latest from the conveyor belt of mustache-twirling frat-bro super villains.
Move over Musk and Zuckerberg, there’s a new shit-heel in town!