sentient_loom@sh.itjust.works
on 04 Oct 2024 17:36
nextcollapse
I never took him at his word.
homesweethomeMrL@lemmy.world
on 04 Oct 2024 20:18
collapse
Seriously, what dipshit is going, “hmm well now he’s gone too far!”
very_well_lost@lemmy.world
on 04 Oct 2024 17:55
nextcollapse
It’s time to stop taking any CEO at their word.
Edit: scratch that, the time to stop taking any CEO at their word was 100 years ago.
GrabtharsHammer@lemmy.world
on 04 Oct 2024 18:11
nextcollapse
The best time was 100 years ago. The second-best time is now.
very_well_lost@lemmy.world
on 04 Oct 2024 18:17
collapse
Hear hear!
sunzu2@thebrainbin.org
on 04 Oct 2024 18:19
nextcollapse
We should be taking them with the rope...
tinyVoltron@lemmy.world
on 04 Oct 2024 21:59
collapse
We should take CEOs with fava beans and a nice bottle of chianti.
sunzu2@thebrainbin.org
on 04 Oct 2024 22:02
collapse
ehh as much as everybody loves this sentiment... at the end of the day, those days are over. going that route, you get Syria-type shit.
violence at this point is a red herring. there are ways to engage tho, but it requires people to take personal responsibility, improve their lives, and show solidarity with like-minded people and the underclass. if critical mass ever hits this, things can change.
nilloc@discuss.tchncs.de
on 06 Oct 2024 16:42
collapse
Eat the rich doesn’t have to mean literally.
Yeet the rich does though.
sunzu2@thebrainbin.org
on 06 Oct 2024 16:46
collapse
Yea yea but we can't even have the state tax them properly and stop giving their "legal persons" endless amounts of money
spankmonkey@lemmy.world
on 04 Oct 2024 18:21
nextcollapse
We should never have taken them at their word.
oconnordaniel@infosec.pub
on 04 Oct 2024 19:15
collapse
Not just CEOs… basically everyone.
PlasticExistence@lemmy.world
on 05 Oct 2024 01:16
collapse
<img alt="" src="https://lemmy.world/pictrs/image/a8336fd2-766e-47fc-bef7-47f47645b476.gif">
multi_regime_enjoyer@lemmy.ml
on 04 Oct 2024 19:10
collapse
I bet in 10 years my insurance plan will no longer cover imaging being interpreted by a radiologist.
That’s a very sharp prediction, thanks. I will run that by some people.
peopleproblems@lemmy.world
on 04 Oct 2024 20:55
nextcollapse
Yeah this might actually not be that far from reality.
Computer vision already did a large amount of the lifting; with the massive push towards AI, AI will take over the rest of us plebeians’ healthcare.
Considering how fractured medical billing is these days, often the techs contracted by your in-network doctor’s office are actually out-of-network.
Isn’t medical billing fun?
chuckleslord@lemmy.world
on 04 Oct 2024 22:19
nextcollapse
Surprise Medical Billing has mostly been nerfed after the No Surprises Act here in the US. After 2022, so long as you went to an in-network (INN) provider, you can’t be charged out-of-network (OON) pricing for any OON services that you may have encountered during that visit.
Source: www.health.state.mn.us/…/nosurprisesact.html
Also, I work in insurance as a software engineer
It’s been a while since I’ve had supplementary procedures, so that’s good to know.
Now I just have to wait for all nine (and a half) bills after emergency services.
multi_regime_enjoyer@lemmy.ml
on 04 Oct 2024 22:44
collapse
The way claims get sent back during billing, I became suspicious that a lot of them are getting read by machine (and very poorly) during the first round of mail, so don’t worry, medical billing will get even more fun thanks to AI
MyOpinion@lemm.ee
on 04 Oct 2024 18:00
nextcollapse
He is a tech bro. Almost everything he is saying is a lie.
sunzu2@thebrainbin.org
on 04 Oct 2024 18:18
nextcollapse
*parasite....
Nothing bro about this shit stain..
Valmond@lemmy.world
on 04 Oct 2024 18:43
nextcollapse
*“Podcast-bro”
SlopppyEngineer@lemmy.world
on 04 Oct 2024 21:35
collapse
Almost everything tech bros say is to boost short term share prices. Any resemblance to the truth is coincidental.
EleventhHour@lemmy.world
on 04 Oct 2024 18:08
nextcollapse
People did that? Lol
droopy4096@lemmy.ca
on 04 Oct 2024 18:13
nextcollapse
not only does he burn through cash, he burns through resources, making life worse now for everybody: AI rivals crypto in resource wasting while not contributing at all to any improvements. I fail to see a “brighter future” for us through AI, as it is an energy-intensive, unsustainable endeavor for which we are woefully unprepared both materially (energy efficiency, semiconductor manufacturing/recycling, etc.) and psychologically (ethics etc.). Yeah, grand on paper, terrible in reality
tiny@midwest.social
on 04 Oct 2024 18:52
nextcollapse
AI is worse than crypto. Most crypto projects use proof of stake, which is way more resource-efficient than mining. Also, the mining that does happen usually happens where there is excess generation instead of in Azure datacenters
some crypto learned to be efficient, others did not. We still do have crypto-mining botnets. Crypto remains useless to humanity and very profitable for a few. Same with AI. Same with the stock market. Instead of producing something of value we keep on burning through resources while a select few enjoy a bonfire others have to fight to stay alive…
What is really annoying is that there are a lot of really good data modeling applications, they are just in research areas. Generative AI is absolutely a waste of resources, but a ton of money and energy is spent on that instead of on the applications that are actually bearing fruit.
theneverfox@pawb.social
on 05 Oct 2024 03:48
collapse
Generative AI is definitely useful - it’s mighty putty. It fills in gaps and sticks things together wonderfully. It lets you easily do things that were near impossible before
It’s also best used sparingly
sunzu2@thebrainbin.org
on 04 Oct 2024 18:18
nextcollapse
I wonder what this clown’s daily PR budget is?
Each one of these fake news stories is generally 15k a pop
Do you remember when crypto scammer Sam Bankman was running thousands daily for years...
Similar vibes here
kinsnik@lemmy.world
on 04 Oct 2024 18:24
nextcollapse
the techbros that think that with sufficiently advanced AI we could solve climate change are so stupid. like, we might not have a perfect solution, but we have ideas on how to start to make things better (less car-centric cities, less meat and animal products, more investment in public transport and solar), and it gets absolutely ignored. why would it be different when an AI gives the solution? unless they want the “eat fat-free food and you will be thin” solution to climate change, in which case we change absolutely nothing of our current situation but it is magically ecological
ArbitraryValue@sh.itjust.works
on 04 Oct 2024 18:48
nextcollapse
I don’t think you’re imagining the same thing they are when you hear the word “AI”. They’re not imagining a computer that prints out a new idea that is about as good as the ideas that humans have come up with. Even that would be amazing (it would mean that a computer could do science and engineering about as well as a human) but they’re imagining a computer that’s better than any human. Better at everything. It would be the end of the world as we know it, and perhaps the start of something much better. In any case, climate change wouldn’t be our problem anymore.
That’s the thing, there could be a human 10’000x smarter than Einstein telling us what to do… And it would still not happen.
ArbitraryValue@sh.itjust.works
on 04 Oct 2024 19:09
collapse
I disagree with you, because a modern human could offer the people of the distant past (with their far less advanced technology) solutions to their problems which would seem miraculous to them. Things that they thought were impossible would be easy for the modern human. The computer may do the same for us, with a solution to climate change that would be, as you put it, magically ecological.
With that said, the computer wouldn’t be giving humans suggestions. It would be the one in charge. Imagine a group of chimpanzees that somehow create a modern human. (Not a naked guy with nothing, but rather someone with all the knowledge we have now.) That human isn’t going to limit himself to answering questions for very long. This isn’t a perfect analogy because chimpanzees don’t comprehend language, but if a human with a brain just 3.5 times the size of a chimpanzee’s can do so much more than a chimpanzee, a computer with calculational capability orders of magnitude greater than a human’s could be a god compared to us. (The critical thing is to make it a loving god; humans haven’t been good to chimpanzees.)
OpenStars@discuss.online
on 04 Oct 2024 19:30
collapse
Imagine Jesus Christ as a time traveler, going back from a dying planet to just about the dawn of both roads and also safer sea travel than previously, those two connecting what would become the entire modern world.
Jesus: like, forget all this “religion” crap about what foods to eat & where & when & with who, and like, just be excellent to one another dudes & dudettes
Everyone since then, especially those who borrow His actual fucking name to label themselves: um… how about “no”?
anarchrist@lemmy.dbzer0.com
on 04 Oct 2024 18:51
nextcollapse
$ GPT how do we solve climate change?
GPT: command not found
$ cd /home/chatgpt
cd: command not found
jjjalljs@ttrpg.network
on 04 Oct 2024 21:12
collapse
There was a (fiction) book I read called “All the Birds in the Sky”. I really liked it. Highly recommend.
One of the plot threads is a rich tech bro character that’s like “the world is doomed we need to abandon it for somewhere else. Better pour tons of resources into this sci-fi sounding project”. And I’m just screaming at the book “use that money for housing and transport and clean energy you absolute donkey”.
There are a lot of well understood things we could be doing to make the world better, but they’re difficult for idiotic political reasons. Racism, nimbyism, emotional immaturity, etc.
Dindonmasker@sh.itjust.works
on 04 Oct 2024 18:27
nextcollapse
I would like to be able to listen to someone who is at the top of their game, since it’s most likely them who know the most. If they are the CEO of a company about the thing, that would make sense. I’ve seen plenty of enthusiasts start their own company because they had a lot of knowledge on the subject. Who do I look to for the best information about the tech of tomorrow if not the people who are making it?
brucethemoose@lemmy.world
on 04 Oct 2024 19:36
nextcollapse
Sam is actually a liar though.
Everyone in open source AI has been calling him a snake ever since llama1 came out. If you want a more authoritative source, look to the CEO of huggingface, oldschool AI researchers and such.
LainTrain@lemmy.dbzer0.com
on 04 Oct 2024 23:17
nextcollapse
Now this is a sh.itjust.works comment if I ever saw one. Gold medal, keep it up.
31337@sh.itjust.works
on 05 Oct 2024 05:52
collapse
Yann LeCun would probably be a better source. He does actual research (unlike Altman), and I’ve never seen him over-hype or fear monger (unlike Altman).
LEDZeppelin@lemmy.world
on 04 Oct 2024 18:30
nextcollapse
He’s the Musk in the making
ininewcrow@lemmy.ca
on 04 Oct 2024 18:55
nextcollapse
How do we know that this isn’t an AI-generated Sam Altman?
wizardbeard@lemmy.dbzer0.com
on 04 Oct 2024 19:06
nextcollapse
Was this not obvious at the very least when his own board kicked him to the curb due to an inability to trust him?
Yeah. It sucks I had to be downvoted into irrelevance way back when this clown was first becoming worshipped by the tech bros.
I don’t take pride in patting myself on the back, but I was fucking right all along about this douche.
FlorianSimon@sh.itjust.works
on 04 Oct 2024 23:22
collapse
The day of reckoning is approaching fast. May this teach a lesson to my fellow techies that tech billionaires aren’t any better than the other billionaires. I hope there won’t be another cryptoscam after LLMs 🤷♀️
Or, if there’s another one, I hope that it won’t consume massive amounts of energy. If techbros only hurt themselves, I suppose it’s fine.
ignism@lemmy.world
on 05 Oct 2024 08:14
nextcollapse
Crypto and blockchain are tech coming up with a solution that no one asked for. Blockchain is just a database that is (at best!) extremely energy-inefficient. Trust comes from the same sources (brand, marketing, advertising, social cues); it being on a blockchain does not magically generate trust.
And crypto’s biggest strength as an uncontrollable and decentralised store of wealth ignores the fact that you can only buy and sell it on marketplaces, which control and centralise it, so for nearly everyone involved it’s a pyramid scheme, those at the beginning persuading new people to join to prop up their assets’ profits
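To make the “blockchain is just a database” point above concrete, here is a toy sketch (plain Python, no consensus, mining, or staking) of a hash-linked ledger; the names and data are made up, and the tamper check is essentially all that the chain structure itself provides.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, data: str) -> None:
    # Each new record stores the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def verify(chain: list) -> bool:
    # History is valid only if every link still matches.
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

ledger: list = []
append_block(ledger, "alice pays bob 5")
append_block(ledger, "bob pays carol 2")
print(verify(ledger))   # True
ledger[0]["data"] = "alice pays bob 500"
print(verify(ledger))   # False: tampering breaks the hash links
```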
Do you mind if I explain a little more about decentralized ledger technology to help you understand the tech, and correct some of your mistaken understanding?
UnderpantsWeevil@lemmy.world
on 04 Oct 2024 20:22
nextcollapse
The best time to stop taking Altman seriously was ten years ago.
The second best time is now.
sartalon@lemmy.world
on 04 Oct 2024 23:02
nextcollapse
When that major drama unfolded with him getting booted then re-hired, it was super fucking obvious that it was all about the money, the data, and the salesmanship.
He is nothing but a fucking tech-bro. Part Theranos, part Musk, part SBF, part (whatever that pharma asshat was), and all fucking douchebag.
AI is fucking snake oil and an excuse to scrape every bit of data like it’s collecting every skin cell dropping off of you.
stringere@sh.itjust.works
on 05 Oct 2024 00:53
nextcollapse
Martin Shkreli is the scumbag’s name you’re looking for.
From wikipedia: He was convicted of financial crimes for which he was sentenced to seven years in federal prison, being released on parole after roughly six and a half years in 2022, and was fined over 70 million dollars
sartalon@lemmy.world
on 05 Oct 2024 00:55
collapse
I’d agree with the first part, but to say all AI is snake oil is just untrue and out of touch.
There are a lot of companies that throw “AI” on literally anything, and I can see how that is snake oil.
But real innovative AI, everything from protein folding to robotics, is here to stay, good or bad. It’s already too valuable for governments to ignore. And AI is improving at a rate that I think most are underestimating (faster than Moore’s law).
kaffiene@lemmy.world
on 05 Oct 2024 20:35
collapse
I think part of the difficulty with these discussions is that people mean all sorts of different things by “AI”. Much of the current usage is that AI = LLMs, which changes the debate quite a lot
No doubt LLMs are not the end-all be-all. That said, especially after seeing what the next gen ‘thinking models’ like o1 from ClosedAI OpenAI can do, even LLMs are going to get absurdly good. And they are getting faster and cheaper at a rate faster than my best optimistic guess 2 years ago; hell, even 6 months ago.
Even if all progress stopped tomorrow on the software side, the benefits from purpose-built silicon would make them even cheaper and faster. And that purpose-built hardware is coming very soon.
Open models are about 4-6 months behind in quality but probably a lot closer (if not ahead) for small ~7b models that can be run on low/med end consumer hardware locally.
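For anyone curious what running one of those small ~7b open models locally actually looks like, here is a minimal sketch assuming the Hugging Face transformers (plus accelerate) libraries; the checkpoint name is only an example, so substitute whatever instruction-tuned ~7B model and hardware you actually have.

```python
from transformers import pipeline

# Example ~7B open-weight checkpoint; any similar instruction-tuned model works.
generate = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
    device_map="auto",    # place layers on a GPU if one is available, else CPU
    torch_dtype="auto",   # use half precision where the hardware supports it
)

prompt = "In two sentences, why are small local models attractive?"
result = generate(prompt, max_new_tokens=128, do_sample=False)
print(result[0]["generated_text"])
```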
kaffiene@lemmy.world
on 06 Oct 2024 04:03
collapse
I don’t doubt they’ll get faster. What I wonder is whether they’ll ever stop being so inaccurate. I feel like that’s a structural feature of the model.
keegomatic@lemmy.world
on 06 Oct 2024 04:48
collapse
May I ask how you’ve used LLMs so far? Because I hear that type of complaint from a lot of people who have tried to use them mainly to get answers to things, or maybe more broadly to replace their search engine, which is not what they’re best suited for, in my opinion.
keegomatic@lemmy.world
on 07 Oct 2024 05:19
collapse
Personally, I’ve found that LLMs are best as discussion partners, to put it in the broadest terms possible. They do well for things you would use a human discussion partner for IRL.
“I’ve written this thing. Criticize it as if you were the recipient/judge of that thing. How could it be improved?” (Then address its criticisms in your thing… it’s surprisingly good at revealing ways to make your “thing” better, in my experience)
“I have this personal problem.” (Tell it to keep responses short. Have a natural conversation with it. This is best done spoken out loud if you are using ChatGPT; prevents you from overthinking responses, and forces you to keep the conversation moving. Takes fifteen minutes or more but you will end up with some good advice related to your situation nearly every time. I’ve used this to work out several things internally much better than just thinking on my own. A therapist would be better, but this is surprisingly good.)
I’ve also had it be useful for various reasons to tell it to play a character as I describe, and then speak to the character in a pretend scenario to work out something related. Use your imagination for how this might be helpful to you. In this case, tell it to not ask you so many questions, and to only ask questions when the character would truly want to ask a question. Helps keep it more normal; otherwise (in the case of ChatGPT which I’m most familiar with) it will always end every response with a question. Often that’s useful, like in the previous example, but in this case it is not.
etc.
For anything but criticism of something written, I find that the “spoken conversation” features are most useful. I use it a lot in the car during my commute.
For what it’s worth, in case this makes it sound like I’m a writer and my examples are only writing-related, I’m actually not a writer. I’m a software engineer. The first example can apply to writing an application or a proposal or whatever. Second is basically just therapy. Third is more abstract, and often about indirect self-improvement. There are plenty more things that are good for discussion partners, though. I’m sure anyone reading can come up with a few themselves.
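As one concrete version of the “criticize my draft” pattern described above, here is a rough sketch using the OpenAI Python SDK; the model name and draft are purely illustrative, the API key is read from the environment, and any chat-style API would do just as well.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "We should migrate the billing service to event sourcing because ..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model; use whatever you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are the skeptical reviewer who will receive this proposal. "
                "Point out weaknesses, missing details, and likely objections. "
                "Keep responses short."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(response.choices[0].message.content)
```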
rottingleaf@lemmy.world
on 05 Oct 2024 07:23
collapse
It’s not snake oil. It is a way to brute force some problems which it wasn’t possible to brute force before.
And also it’s very useful for mass surveillance and war.
CountVon@sh.itjust.works
on 04 Oct 2024 23:17
nextcollapse
Oh I’m streets ahead, I never took him at his word in the first place.
db0@lemmy.dbzer0.com
on 05 Oct 2024 00:06
collapse
Stop trying to make “streets ahead” happen!
stringere@sh.itjust.works
on 05 Oct 2024 00:54
nextcollapse
Your criticism is so fetch.
Annoyed_Crabby@monyet.cc
on 05 Oct 2024 00:56
collapse
Ohh i’m stroads ahead, i never heard of that term.
LainTrain@lemmy.dbzer0.com
on 04 Oct 2024 23:19
nextcollapse
I’ll keep my open source generative models and will be happy to watch this bozo and his cultists and the artbros all eat shit all year-round.
800XL@lemmy.world
on 04 Oct 2024 23:21
nextcollapse
Name a CEO tech bro that isn’t a raving douche.
FlyingSquid@lemmy.world
on 04 Oct 2024 23:53
nextcollapse
Does Woz count? He’s CEO of the Silicon Valley Comic Con (or he used to be anyway).
vaultdweller013@sh.itjust.works
on 05 Oct 2024 08:28
collapse
Also just gonna go with an old guard and say maybe Tom, once he sold Myspace he fucked right off. I think he has a travel blog or some shit.
Though I wouldn’t consider him a tech bro.
PeriodicallyPedantic@lemmy.ca
on 05 Oct 2024 22:02
collapse
It’s hard to find a CEO of any variety that isn’t a raving douche
Annoyed_Crabby@monyet.cc
on 05 Oct 2024 00:59
nextcollapse
Have people started making AI images of Sam Alternator doing incendiary stuff? Maybe we should start doing that.
shortwavesurfer@lemmy.zip
on 05 Oct 2024 01:43
nextcollapse
News at 10! It was time to stop taking his word for everything. Quite a while back.
u_u@lemmy.dbzer0.com
on 05 Oct 2024 05:26
collapse
Applicable to everyone really, especially those that want to sell you something that sounds too good to be true.
aesthelete@lemmy.world
on 05 Oct 2024 04:14
nextcollapse
It’s beyond time to stop believing and parroting that whatever would make your source the most money is literally true without verifying any of it.
sketelon@eviltoast.org
on 05 Oct 2024 06:10
nextcollapse
Really? The guy behind the company called “Open” AI that has contributed the least to the open source AI communities, while constantly making grand claims and telling us we’re not ready to see what he’s got. We’re supposed to stop taking that guy’s word?
Wow, thanks journalists, what would we do without you.
Should your disappointment here really be pointed at the journalists?
Jtotheb@lemmy.world
on 05 Oct 2024 14:15
nextcollapse
Which group of people uncritically magnified his voice and others like it for years? Tech journalism builds the legacies of people like Musk, Bankman-Fried and Altman.
linearchaos@lemmy.world
on 05 Oct 2024 22:11
collapse
Oh, don’t worry we have enough to go around.
MouseKeyboard@ttrpg.network
on 06 Oct 2024 13:01
collapse
People talk a lot about the genericisation of brand names, but the branding of generic terms like this really annoys me.
I’ll use the example I first noticed. A few years ago, the Conservative government was under criticism for the minimum wage being well under a living wage. In response, they brought in the National Living Wage, which was an increase to the minimum wage, but still under the actual living wage. However, because of the branding, it makes criticising it for not meeting the actual living wage more difficult, as you have to explain the difference between the two, and as the saying goes, “if you’re explaining, you’re losing”.
OutrageousUmpire@lemmy.world
on 05 Oct 2024 06:12
nextcollapse
but for now, his approach is textbook Silicon Valley mythmaking
The difference is that in this case it is not hype—it is reality. It’s not a myth, it is happening right now. We are chugging inevitably down the track to the most dramatic discovery in human history. And Altman’s views on solving the climate crisis, disease, nuclear fusion… they are all within reach. If anything we need to increase our speed to get us there ASAP.
rottingleaf@lemmy.world
on 05 Oct 2024 07:20
collapse
Tell me honestly, are you a bot or do you sincerely believe this shit and based on which qualification and experience?
Gunpowder, electricity, combustion engines, universal electronic computers, rocketry, lasers, plastics - none of these made any dramatic overnight changes. It was all a slow, iterative process of fuzzy transitions and evolution.
And while those made pretty fundamental impacts, Sam Altman’s company is using fuckloads of data to calculate some predictive coefficients, and the rest of its product could be done by students.
It’s just real-life power controllers trying their muscles at bending the tech industry with the usual means - capturing resources and using them to assert control. There were no such resources in the beginning, and then datasets turned into something like oil.
Generally in computing (when a computer is a universal machine) everyone able to program can do a lot of things. This makes the equality there kinda inconvenient for real life bosses who can call airstrikes and deal in oil tankers.
There was the smart and slow way of killing that via slow oligopolization, but everyone can see how that doesn’t work well. Some people slowly move to better things, and some were fine with TV telling them how to live, they don’t even need Internet. All these technologies are still kinda modular and even transparent. And despite what many people think, both idealistic left and idealistic right build technologies for the same ultimate goal, so Fediverse is good and Nostr is good and everything that functions is good.
So - that works, but human societies are actually developing some kind of immunity to centralized bot-poisoned platforms.
To keep the stability of today’s elites (I’d say these are by now pretty international), you need something qualitatively different. A machine that is almost universal in solving tasks, but doesn’t give the user transparency. That’s their “AI”. And those enormous datasets and computing power are the biggest advantage of that kind of people over us. So they are using that advantage. That’s the kind of solution that they can do and we can’t.
Simultaneously to that there’s a lot of AI hype being raised to try and replace normal computing with something reliant on those centralized supply chains. Hardware production was more distributed before the last couple of decades. Now there are a few well-controllable centers. They simply want to do the same with consumer software. Because if the consumers don’t need something, they won’t have that something when they see a need.
All these aside, today’s kinds of mass surveillance can’t be done without something like that “AI”. There simply won’t be enough people to have sufficient control.
So - there are a few notable traits of this approach converging on the same interest.
It’s basically a project to conserve elites. The new generation of thieves and bureaucrats wants to become the new aristocracy.
You’re right. This is just the “SaaS”, “cloud APIs” approach turned up to 11 - making something unavailable to everyone unless they agree in advance to any conditions you come up with in the future.
For example, if Github Copilot becomes genuinely and uniquely very useful, that’s bad for the software development industry over the entire world: it means that every single software dev company will have to pay “tax” to Microsoft.
phoenixz@lemmy.ca
on 05 Oct 2024 17:57
nextcollapse
The guy that was lying since day one? Why?
mindaika@lemmy.dbzer0.com
on 05 Oct 2024 18:38
nextcollapse
I’m honestly stunned. If you can’t trust rich capitalists, who can you trust‽
ivanafterall@lemmy.world
on 05 Oct 2024 20:01
nextcollapse
You shouldn’t judge people on appearances.
… but, I mean, come OOON… he looks like a reanimated Madame Tussaud’s sculpture. Like someone said, “Give me a Wish.com Mark Zuckerberg… but not so vivacious this time.” And he’s the CEO of an AI-related company.
Johnburger78@lemmy.world
on 06 Oct 2024 04:26
nextcollapse
AI skeptics don’t live in the real world
HeIsHarsh@lemmy.world
on 06 Oct 2024 12:05
nextcollapse
The time was last year, but better late than sorry.
KingBoo@lemmy.world
on 06 Oct 2024 12:36
nextcollapse
Anyone have a non-paywall link?
whydudothatdrcrane@lemmy.ml
on 06 Oct 2024 13:52
collapse
This trick should come in handy pal
12ft.io/www.theatlantic.com/technology/archive/…/680152/
PM_Your_Nudes_Please@lemmy.world
on 06 Oct 2024 16:53
collapse
Ironically, your link is broken on Voyager because it doesn’t treat anything before the https as a link. It just leads straight to the normal paywalled site.
<img alt="" src="https://lemmy.world/pictrs/image/7dbb054b-5678-4ab2-a18a-7fdef46c83f3.png">
You need to embed the link for it to actually work. And even then, it may not work on this comment because it’ll try to route to my home instance due to having a Lemmy.world link for my image.
I don’t trust any of these types. If you haven’t noticed by now, morally decent people are never in charge of any large organization. The type of personality suited to claw their way to the top usually lacks any real moral compass that doesn’t advance their pursuit of power.