Academia to Industry
from fossilesque@mander.xyz to science_memes@mander.xyz on 24 Jun 13:20
https://mander.xyz/post/14536836

#science_memes


jBlight@lemmy.world on 24 Jun 13:36 next collapse

Ah yes, PhD intelligence, but the wisdom of a toddler.

shneancy@lemmy.world on 24 Jun 13:50 next collapse

mmm, unironically sounds like me. According to my iq test i had PhD level intelligence at 18, and what am i doing at 24? unemployed, playing video games, and crying

Noodle07@lemmy.world on 24 Jun 14:04 next collapse

It’s alright, you can keep going for a bit, I’m about to hit 30 playing video-games and crying

AlolanYoda@mander.xyz on 25 Jun 11:52 collapse

I’m 30. Almost have a PhD.

Also playing video games and crying

blazeknave@lemmy.world on 25 Jun 14:17 collapse

Wait until you’re 40 talking about that IQ test like the Al Bundy of brains

shneancy@lemmy.world on 25 Jun 16:26 collapse

nah IQ tests are more or less bullshit, they’re incredibly flawed and biased, the only situation you’d see me talking about them at length is bashing the entire concept of trying to quantify general intelligence

fossilesque@mander.xyz on 24 Jun 14:00 next collapse

Book Smart Street Smart

Big_Boss_77@lemmynsfw.com on 24 Jun 14:07 next collapse

Why yes, you CAN put a tomato in your fruit salad. It is a fruit after all.

Gork@lemm.ee on 24 Jun 15:16 next collapse

By this logic I can finally add pizza to my salad!

mriormro@lemmy.world on 24 Jun 18:09 collapse

You laugh but

npr.org/…/pizza-as-a-vegetable-it-depends-on-the-…

PopShark@lemmy.world on 25 Jun 18:29 collapse

“Kids aren’t going to eat a slice of pizza drenched in tomato sauce”

Lmao what a quote, as a former kid I’m going to have to agree with that haha

whereBeWaldo@lemmy.dbzer0.com on 25 Jun 08:59 collapse

After all… why not? Why shouldn’t I eat one small rock a day?

cm0002@lemmy.world on 24 Jun 15:00 next collapse

but the wisdom of a toddler.

Sounds like an improvement to me lol

SlopppyEngineer@lemmy.world on 24 Jun 18:31 next collapse

They’ll give it all the social interaction and wisdom scraped from YouTube clips.

Wooki@lemmy.world on 25 Jun 10:01 collapse

Wisdom implies it has intelligence, and it does not. It’s a word predictor.

skillissuer@discuss.tchncs.de on 24 Jun 14:08 next collapse

the only thing this chatbot will be able to simulate is unreasonable persistence

[deleted] on 24 Jun 14:12 collapse

.

Mango@lemmy.world on 24 Jun 14:14 next collapse

No it won’t.

vane@lemmy.world on 25 Jun 12:57 collapse

Yes it will

Mango@lemmy.world on 25 Jun 13:11 collapse

NO IT WON’T

vane@lemmy.world on 25 Jun 13:24 collapse

YES IT WILL

Mango@lemmy.world on 25 Jun 13:30 collapse

NO IT WON’T

vane@lemmy.world on 25 Jun 14:06 collapse

YES IT WILL

twice_twotimes@sh.itjust.works on 25 Jun 14:34 collapse

Don’t fight, guys. This is academia. You’re both wrong.

vane@lemmy.world on 25 Jun 16:08 collapse

No we’re not

Mango@lemmy.world on 25 Jun 17:52 collapse

😘

fubarx@lemmy.ml on 24 Jun 14:22 next collapse

NFTs will keep their value forever…

SonicDeathTaco@lemm.ee on 24 Jun 14:31 next collapse

I can’t imagine looking at the world and thinking we need more industry. Also, I know a lot of PhDs. Knowing a lot of things about a particular subject in no way correlates with intelligence.

clearedtoland@lemmy.world on 24 Jun 14:36 next collapse

Is it weird that I still want to go for my PhD despite all the feedback about the process? I don’t think I’ve ever met a PhD or candidate that’s enthusiastically said “do it!”

bluemellophone@lemmy.world on 24 Jun 14:56 next collapse

It’s a lot of fucking work. If you enjoy hard work, learning about the latest advancements in your field, and can handle disappointment / criticism well, then it’s something to look into.

skillissuer@discuss.tchncs.de on 24 Jun 15:03 collapse

that and if you can find a lab/group with recent publications and funding. not sticking too hard to failed ideas also helps

homesweethomeMrL@lemmy.world on 24 Jun 16:26 collapse

Having a shitload of money is also quite handy for such endeavors.

skillissuer@discuss.tchncs.de on 24 Jun 16:28 collapse

ideally not your money, but money from grants, that’s why i mentioned it

homesweethomeMrL@lemmy.world on 24 Jun 17:11 collapse

Ideally. Also if you’re in social science . . . ehh. It’s your money.

skillissuer@discuss.tchncs.de on 24 Jun 17:27 collapse

i mean, that’s the point that in the process grant money becomes your money

homesweethomeMrL@lemmy.world on 24 Jun 17:38 collapse

And best of luck to anyone who wants to try.

veganpizza69@lemmy.world on 24 Jun 14:58 next collapse

It’s like being drafted to a war while you only receive vague orders and you slowly realize what the phrase “war is a racket” means. You suffer and learn things that you didn’t plan on learning.

doctordevice@lemmy.ca on 24 Jun 15:08 next collapse

I generally tell people the only reason to do it is if your career pursuits require it, and even then I warn them away unless they’re really sure. Not every research advisor is abusive, but many are. Some without even realizing it. I ended up feeling like nothing more than a tool to pump up my research advisor’s publication count.

It was so disillusioning that I completely abandoned my career goal of teaching at a university because I didn’t want to go anywhere near that toxic culture again. Nevertheless, I did learn some useful skills that helped me pivot to another career earning pretty good money.

So I guess I’m saying it’s a really mixed bag. If you’re sure it’s what you want, go for it. But changing your mind is always an option.

Neuromancer49@midwest.social on 24 Jun 16:16 collapse

+1 to all of this. See also: phdcomics.com/comics.php?f=1296

drre@feddit.de on 24 Jun 15:18 next collapse

no it’s not. but you should know what you’re getting into.

in the beginning of my PhD i really loved what i was doing. from an intellectual point of view i still do. but later, i.e. after 3 years doing a shitty postdoc, i realized that I was not cut out for academia but nevertheless loved doing science.

however, i was lucky to find a place in industry doing what i like.

so i guess my 2c is: think about what comes after the PhD and work towards that goal. a PhD is usually not a goal in itself. hth

Contramuffin@lemmy.world on 24 Jun 17:20 next collapse

No, not weird at all. PhDs are pain, but certain people like the pain. If you’re good with handling stress, and also OK with working in a fast-paced, high-impact environment (for real, not business-talk BS), then it may be the right decision for you. The biggest thing that I would say is that you should really, really think about whether this is what you want, since once you start a PhD, you’ve locked the next 6 years of your life into it with no chance of getting out

Edit: Also, you need to have a highly sensitive red-flag radar. As a graduate student, you are highly susceptible to abuse from your professor. There is no recourse for abuse. The only way to avoid abuse is by not picking an abusive professor from the get-go. Which is hard, since professors obviously would never talk badly about themselves. Train that red-flag radar, since you’ll need to really read between every word and line to figure out if a professor is right for you

beefbot@lemmy.blahaj.zone on 24 Jun 17:48 next collapse

Allow me to just simply say DON’T DO IT. DON’T FUCKING DO IT. There are very few examples of regretting something you haven’t tried but this is one

twice_twotimes@sh.itjust.works on 25 Jun 14:55 collapse

If you have a good understanding of what grad school actually is, you know it’s not going to be college+, and you’re still excited? Go for it! Just go in with the attitude that this is the start of a career path (not school) with many branches along the way. Most people you’ll work with will act like your options are 1) aim for TT at an R1 or 2) cut your losses and go into industry. Those are both legit paths, but pay attention to what you’re loving and hating about the experience.

Maybe you absolutely love teaching or mentorship or grant-writing or data analysis or giving conference talks or science communication or managing a lab or any of the other billion things you have to be responsible for at some point. There are career paths between the extremes that can let you do the stuff you actually like doing, and they exist both in and outside of academia. If you go in letting yourself get excited about whatever the hell you actually get excited about, you can figure out what the path you actually want could look like and prioritize the things that don’t make you miserable.

  • a PhD who voluntarily pursued an instructional faculty track at an R1, where I never again have to backseat the needs of my students and my love of pedagogy behind desperately looking for research funding because publish-or-perish, even though I have at bare minimum 3 months a year to devote entirely to whatever research I am excited about in the moment… or play video games if I prefer

jalsk@lemmy.world on 24 Jun 14:48 next collapse

It will have a post talking about the pivot to industry, but will not have actually done the transition itself.

plinky@hexbear.net on 24 Jun 14:58 next collapse

could chatbots approximate shareholder behavior? [thonk]

A_Chilean_Cyborg@feddit.cl on 25 Jun 02:18 collapse

Could flies?

plinky@hexbear.net on 25 Jun 16:06 next collapse

probably that fancy mold can, if we approximate it with some gradient of food. Retvrn to modeling the economy with water pipes

rcbrk@lemmy.ml on 25 Jun 17:01 collapse

💩🤑🤖🚀

Rubisco@slrpnk.net on 24 Jun 15:23 next collapse

So when it helps out with a recipe, we won’t get a suggestion specifically for Elmer’s, but rather the IUPAC name for superglue?

Iheartcheese@lemmy.world on 24 Jun 15:36 next collapse

Literally the only thing I’ve seen this used for that seems impressive and useful is that Skyrim companion

BananaOnionJuice@lemmy.dbzer0.com on 24 Jun 16:17 next collapse

Will GPT-7 then be a burntout startup founder?

homesweethomeMrL@lemmy.world on 24 Jun 16:27 collapse

Ah. The synchronicity.

scytale@lemm.ee on 24 Jun 16:25 next collapse

Now it can not only tell you to eat rocks, but also what type of rock would be best for your digestion.

I_am_10_squirrels@beehaw.org on 24 Jun 20:01 collapse

I prefer moon rocks, they’re a little meatier

bronzle@lemm.ee on 25 Jun 04:56 collapse

One of these days Alice, bang, zoom, straight to the moon.

merc@sh.itjust.works on 24 Jun 17:58 next collapse

PhD level intelligence? Sounds about right.

Extremely narrow field of expertise ✔️
Misplaced confidence in its abilities outside its area of expertise ✔️
A mind filled with millions of things that have been read, and near zero from interactions with real people ✔️
An obsession over how many words can get published over the quality and correctness of those words ✔️
A lack of social skills ✔️
A complete lack of familiarity of how things work in the real world ✔️

repungnant_canary@lemmy.world on 24 Jun 20:32 next collapse

“Never have I been so offended by something I 100% agree with!”

SoleInvictus@lemmy.blahaj.zone on 25 Jun 23:37 collapse

You just listed out the reasons I stopped at MSc! I’m all of those but only like half as much.

kn0wmad1c@programming.dev on 24 Jun 19:01 next collapse

Translation: GPT-5 will (most likely illegally) be fed academic papers that are currently behind a paywall

pineapplelover@lemm.ee on 24 Jun 20:14 next collapse

I guess then we would be able to tell it to recite a paper for free and it may do it.

A_Chilean_Cyborg@feddit.cl on 25 Jun 02:14 collapse

Or hallucinate it. Did you know that large amounts of arsenic can cure cancer and the flu?

And009@lemmynsfw.com on 25 Jun 08:34 collapse

Death can cure Debt

PopShark@lemmy.world on 25 Jun 18:38 collapse

This is not a hallucination

twice_twotimes@sh.itjust.works on 25 Jun 14:31 collapse

I mean, GPT 3.5 consistently quotes my dissertation and conference papers back to me when I ask it anything related to my (extremely niche, but still) research interests. It’s definitely had access to plenty of publications for a while without managing to make any sense of them.

Alternatively, and probably more likely, my papers are incoherent and it’s not GPT’s fault. If 8.0 gets tenure track maybe it will learn to ignore desperate ramblings of PhD students. Once 9.0 gets tenured though I assume it will only reference itself.

iAvicenna@lemmy.world on 24 Jun 19:40 next collapse

I like how they have no road map for achieving artificial general intelligence (apart from “let’s train LLMs with a gazillion parameters and the equivalent of the yearly energy consumed by ten large countries”) yet pretend ChatGPT-4 is only two steps away from it

ignotum@lemmy.world on 24 Jun 20:24 collapse

Hard to make a roadmap when people can’t even agree on what the destination is, let alone how to get there.

But if you have enough data on how humans react to stimulus, and you have a good enough model, then you will be able to train it to behave exactly like a human. The approach is sound, though in practice there prooobably doesn’t exist enough usable training data in the world to reach true AGI; still, the models are already good enough to be used for certain tasks

iAvicenna@lemmy.world on 24 Jun 21:11 next collapse

The approach is not sound when all the other factors are considered. If AI continues along this approach, it is likely that big AI companies will need to usurp the next big tech breakthroughs, like quantum computing and fusion energy, to be able to keep growing and produce more profit, instead of these techs being used for better purposes (cheaper and cleaner household energy, scientific advances, etc). All things considered, excelling at image analysis, creative writing and digital arts won’t be worth all the damage it’s going to cause.

ignotum@lemmy.world on 25 Jun 06:38 collapse

Usurp? They won’t be the ones to develop quantum computers, nor will they be developing fusion. If those technologies become available they might start using them, but that won’t somehow mean they won’t be available for other uses.

And seeing as they make money from “renting out” the models, they can easily be “used for better purposes”

ChatGPT is currently free for anyone to use; this isn’t some technology they’re hoarding and keeping for themselves

iAvicenna@lemmy.world on 25 Jun 08:36 next collapse

By usurp I mean filling up all the available capacity for their own use (along with the other tech giants who will be running the same moon race), assuming that by that time they will be among the largest tech giants and have the financial means to do so.

Don’t get me wrong, the things that ChatGPT can do are amazing. Even if it hallucinates or can’t really reason logically, it is still beyond what I would have expected. But when the time I mentioned above comes, people won’t be given a choice between AI or cheaper energy/better health care. All those technological advancements will be bought up to full capacity by AI companies, and AI will be shoved down people’s throats.

And yes, ChatGPT is free, but that is only a business decision, not a “for the good of humanity” act. Free ChatGPT helps testing and generates popularity, which in turn brings investment. I am not saying anything negative (or positive) about their business plan, but don’t think for a second that they will have any ethical concerns about leeching off upcoming technological innovations for the sake of generating profit. And this is just one company. There will be others too: Amazon, Google, Microsoft, etc. They will all aggressively try to own as much of these techs as possible, leaving only scraps for other uses (therefore making them very expensive to utilise).

ignotum@lemmy.world on 25 Jun 09:27 next collapse

Not sure I’m fully understanding your point. Are you saying that the large AI companies will create AIs that will create technologies beyond what everyone else is capable of, thus outcompeting everyone, effectively monopolizing every market and from there basically becoming the Umbrella Corporation?

I would be very impressed if anyone managed to make an AI capable of innovation to that degree, but sure, in that case we would have to fall back on something like government oversight and regulations to keep the companies in check, I suppose

iAvicenna@lemmy.world on 25 Jun 10:27 collapse

No, other people will generate technologies like quantum computing and fusion energy. Big AI companies will try to own as much of these as possible (by buying them out), because the current model of AI they are using requires these techs to be able to deliver anything significantly better than what they have now. So these tech advancements will basically be owned by AI companies, leaving very little room for other uses.

For these AI companies, going toward general AI is risky; as you said above, it is not even well defined. On the other hand, scaling up their models massively is a well-defined goal, which however requires major compute and energy innovations like those mentioned above. If these ever happen during, say, the next ten years, big tech involved in AI will jump on them and buy as much as possible for themselves. And the rest will mostly be bought by governments for military and security applications, leaving very little for other public-betterment uses.

Rekorse@lemmy.dbzer0.com on 25 Jun 10:59 next collapse

General AI is a good goal for them because it’s poorly defined, not in spite of it.

Grifts usually do have vague and shifting goal lines. See Star Citizen and the parallels between its supporters/detractors vs the same groups with AI: essentially, if you personally enjoy/benefit from the system, you will overlook the negatives.

People are a selfish bunch and once they get a fancy new tool and receive praise for it, they will resist anyone telling them otherwise so they can keep their new tool, and the status they think it gives them (i.e. expert programmer, person who writes elegant emails, person who can create moving art, etc.)

AI is a magic trick to me: everyone thinks they see one thing, but really, if you showed them how it works they would say, “well, that’s not real magic after all, is it?”

ignotum@lemmy.world on 25 Jun 16:28 collapse

Idiots thinking a new thing is magic that will solve all the world’s problems doesn’t mean the thing doesn’t have merit. Someone calling themselves an expert carpenter because they have a nail gun doesn’t make nail guns any less useful.

If you see a person doing a data entry job, do you walk over to them, crack their head open and say “aww man, it’s not magic, it’s just a blob of fat capable of reading and understanding a document, and to then plot the values into a spreadsheet”?

It’s not magic, it’s not a super intelligence that will solve everything; it’s the first tool we’ve been able to make that can be told what to do in human language and can then perform tasks with a degree of intelligence and critical thinking, something that would normally require either a human or potentially years of development to automate programmatically. That alone makes it very useful

Rekorse@lemmy.dbzer0.com on 26 Jun 00:52 collapse

Why isn’t it being sold as just a new coding language then?

ignotum@lemmy.world on 26 Jun 05:46 collapse

Because it isn’t

Rekorse@lemmy.dbzer0.com on 27 Jun 03:42 collapse

What other practical use for it is there?

ignotum@lemmy.world on 25 Jun 15:25 collapse

What if I say big fusion companies will take over the AI market since they have the energy to train better models? Seems exactly as likely.

Remember when GPUs stopped being available because OpenAI bought Nvidia and AMD and took all the GPUs for themselves?
No? Weird, since GPUs are needed for them to be able to deliver anything significantly better than what we have now 🤔

iAvicenna@lemmy.world on 25 Jun 19:34 collapse

I guess the end result would be the same. But by and large the economic system and human nature would be to blame, which is actually what I am trying to blame here too: not AI, but the people in power who abuse AI and steer it towards hype and profit

VirtualOdour@sh.itjust.works on 25 Jun 10:18 collapse

It’s AI and cheaper healthcare, or no AI and spiraling healthcare costs - especially with falling birth rates putting a burden on the system.

AI healthcare tools are already making it easier to provide healthcare. I’m in the UK so the math of who benefits is different, but tools for early detection of tumors not only cut costs but increase survivability too, and they’re only one of many similar technologies already in use.

Akinator-style triage could save huge sums and many lives, especially in underserved areas - as could rapid first-aid advice. We have a service for non-emergency medical advice; they basically tell you if you need to go to A&E, the doctor, or wait it out. It’s helped allocate resources and saved lives in cases where people would have waited out something that needed urgent care. Having your phone able to respond to ‘my arm feels funny’ by asking a series of questions that determines the medically correct response could be a real lifesaver. ‘Alexa, I’ve fallen and can’t get up’ has already saved people’s elderly parents’ lives; ‘Clippy, why is there blood in my poop’ or ‘Hey Google, does this mole look weird’ will save even more.

Medical admin is a huge overhead; having infinite instances of medically trained clerical staff running 24/7 would be a game changer - being able to call and say ‘this is the new situation’ and get appointments changed or processes started would be huge.

Further down the line we’re looking at being able to get basic tests done without needing a trained doctor or nurse to do them; decreasing their workload will allow them to provide better care where it’s needed. A machine able to take blood, run tests on it, then update the GP with results as soon as they’re done would cut costs and wasted time - especially if the system is trained with various sensors to perform health checks on the patient while taking blood. Spotting things out of the ordinary for a patient is a complex problem, but one AI could be much better at than humans, especially overworked humans.

As for them owning everything, that can only happen if the anti-AI people continue to support stronger copyright protections against training. If we agreed that training AI is a common good and information should be fair use over copyright, then any government, NGO, charity, or open-source crazy could train their own. It’s like electricity: Edison made huge progress and cornered the market, but once the principles are understood anyone can use them, so as tech advanced it became increasingly easy for anyone to fabricate a thermopile or turbine. So there isn’t a monopoly on electricity; there are companies with local monopolies from cornering markets, but anyone can make an off-grid system with cheap bits from eBay.

Rekorse@lemmy.dbzer0.com on 25 Jun 10:53 next collapse

That’s all well and good, but here in America that’s just a long list of stuff I can’t afford, and it won’t be used to drive down costs. If it will for you, then I’m happy you live in a place that gives a shit about its population’s health.

I know there will be people who essentially do the reverse of profiteering and will take advantage of AI for genuinely beneficial reasons, although even in those cases a lot of the time profit is the motive. Unfortunately the American for profit system has drawn in some awful people with bad motives.

If, right now, the two largest AI companies were healthcare nonprofits, I don’t think people would be nearly as upset at the massive waste of energy, money, and life that current AI is.

iAvicenna@lemmy.world on 25 Jun 11:02 next collapse

I feel like all the useful stuff you have listed here is more like advanced machine learning, different from the AI that is being advertised to the public and mostly invested in. This is mostly stuff we can already do relatively easily with the available AI (i.e. highly sophisticated regression) for relatively low compute power and low energy requirements (not counting more outlier stuff like AlphaFold, which still requires quite a lot of compute power). It is not related to why the AI companies would need to own most of the major computational and energy innovations in the future.

It is the image/text generative part of AI that looks sexier to the public and that is therefore mostly being hyped/advertised/invested in by big AI companies. It is also this generative part of AI that will require much bigger computation and energy innovations to keep delivering significantly more than it can now. The situation is very akin to the moon race. It feels like trying to develop AI on this “brute force” course will deliver relatively low benefits for the cost it will incur.

iAvicenna@lemmy.world on 25 Jun 11:37 collapse

For instance, I would be completely fine with this if they said “We will train it on a very large database of articles and finding relevant scientific information will be easier than before”. But no, they have to hype it up with nonsense expectations so they can generate short-term profits for their fucking shareholders. This will either come at the cost of the next AI winter or a senseless allocation of major resources to a model of AI that is not sustainable in the long run.

VirtualOdour@sh.itjust.works on 01 Jul 11:03 collapse

Well, get your news about it from scientific papers and experts instead of tabloids and advertisements.

iAvicenna@lemmy.world on 01 Jul 16:04 collapse

I mean, the person who said this is the CTO of OpenAI and an engineer working on this project. I would imagine she could be considered an expert.

VirtualOdour@sh.itjust.works on 25 Jun 09:40 collapse

So many people have conspiracy theories about how ChatGPT is stealing things and whatever, people in this thread crowing that it’s immoral if they teach it with paywalled journal articles - though I bet I can guess who their favorite Reddit founder is…

I use GPT to help code my open-source project and it’s fantastic; everyone else I know who contributes to FLOSS is doing the same. It’s not magic, but for a lot of tasks it can cut 90% of the time out, especially prototyping and testing. I’ve been able to add more and better functionality thanks to a free service; I think that’s a great thing.

What I’m really looking forward to is CAD getting generative tools; refining designs into their most efficient forms and calculating strengths would be fantastic for the ecosystem of freely shared designs. Text-to-printable would be fantastic: ‘design a bit to fix this problem’ could shift a huge amount of production back to local small industry or bring it into the home.

The positive possibilities of people having access to these technologies is huge, all the groups that currently can’t compete with the big corporations suddenly have a huge wall pulled down for them - being able to make custom software tools for niche tasks is fantastic for small charities or community groups, small industry, eco projects, etc.

It’ll take time for people to learn how to use the tools effectively just like when computers were new but as it becomes more widely understood I think we’ll see a lot of positive innovation which it enables.

Rekorse@lemmy.dbzer0.com on 25 Jun 10:46 next collapse

Your position is: “I like AI because it makes my job/hobbies easier. Also my coworkers do the same, because they are in almost the same position as me. I understand why people don’t like AI, they must just be reading fake-news about it and believing it. Why can’t they see that AI is a benefit for society?”

Not once did you mention any of the reasons people are opposed to AI, just that you hope one day they will get over it and learn how to use the tools to bring down big business.

I like how you imply that only programmers at large corporations know how to build things. If they would just use the AI tools I bet you could hire in a bunch more developers for cheap to boost productivity!

Here’s a clue: no one gives a shit about making it slightly easier to code, make pictures, or write emails. The costs of maintaining and developing the system are absurd when we have actual problems affecting people right now. This is all a waste of time, and is America’s latest scam. Before that it was cryptocurrency, medical investment fraud, and a hundred other get-rich-quick/save-the-world schemes designed to do one thing: generate profit for a small group of people so they can ride off into the sunset, their American dream complete.

iAvicenna@lemmy.world on 25 Jun 11:24 collapse

I am not denying the positive use cases being employed now and possibly in the future. I am not opposing the use/development of AI tools now or in the future either.

However, the huge negative possibilities are very real too and are/will be affecting humanity. I am against the course big AI companies seem to be taking, and against the possible future allocation of most major tech innovations to their cause.

It is of course very hard to predict how the positives and negatives will balance out, but I think big tech companies don’t have any interest in balancing this out. They seem to be very short-sighted about anything other than direct profits. I think they will take the easiest way to more profit/AI dominance, which is short-term investment. So I am not very optimistic about how it will pan out. Maybe I am wrong and, like computers, it will open up a whole new world of possibilities. But the landscape then and the landscape now are quite different in terms of how big tech companies and the richest people act.

LANIK2000@lemmy.world on 25 Jun 07:16 collapse

Thing is, we’re not feeding it how humans react to stimulus. For that you’d need it hooked up to a brain directly. It’s too filtered and biased by getting text only; this approach naively ignores things like memory and assumes text messages exist in a vacuum. Throwing a black box into an analytical prediction machine only works as long as you’re certain it’ll generally throw out the same output for the same input, not if your black box can suddenly experience 5 years of development and emerge a different entity. It’s skipping too many steps to become intelligent; I mean, it literally skips the entire process between reading and writing.

ignotum@lemmy.world on 25 Jun 09:38 collapse

Yeah, that was a hypothetical; if you had those things you would be able to create a true AGI (or what I would consider a true AGI, at least)

Text is basically just a proxy, but to become proficient at predicting text you do need to develop many of the cognitive abilities that we associate with intelligence, and it’s also the only type of data we have literal terabytes of lying around, so it’s the best we’ve got 🤷‍♂️

Regarding memory, the human mind can be viewed as taking in stimuli, associating them with existing memories, condensing that into some high-level representation, then storing it. An LLM could, with a long enough context window, look back at past input and output and use that information to influence its current output, to mostly the same effect.
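Sketched in code, that context-window “memory” is roughly this (a toy Python sketch; generate() is a stand-in for whatever model you like, not any real API, and all the names are made up):

def generate(prompt):
    # Placeholder: a real model would predict a continuation of `prompt` here.
    return "..."

history = []  # (user, assistant) turns so far

def chat(user_msg, max_turns=20):
    # Rebuild the prompt from the most recent turns that still fit the
    # context window; anything older simply falls out of "memory".
    context = "".join(
        f"User: {u}\nAssistant: {a}\n" for u, a in history[-max_turns:]
    )
    reply = generate(context + f"User: {user_msg}\nAssistant:")
    history.append((user_msg, reply))
    return reply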

What do you mean throwing a black box into an analytical prediction machine? And what do you mean 5 years of development?

LANIK2000@lemmy.world on 25 Jun 15:27 collapse

The black box is the human that reads and outputs text, and the analytical prediction machine is the AI. The 5 years of development is the human living their life before returning to continue writing. It is an extreme example, but I’m just trying to point out that the context of what a person might write can change drastically between individual messages, because anything can happen in between, and thus the data is fundamentally flawed for training intelligence: that step, the thought process, is fully missing.

As to why I called the AI an analytical prediction machine, that’s because that’s essentially what it does. It has analyzed an unholy amount of random text from the internet, meaning conversations/blogs/books and so on, to predict what could follow the text you gave it. It’s why prompt injection is so hard to combat and why if you give it a popular riddle and change it slightly like “with a boat, how can a man and goat get across the river”, it’ll fail spectacularly trying to shove in the original answer somehow. I’d say that’s proof it didn’t learn to understand (cognition), because it can’t use logic to reason about a deviation from the dataset.

As for memory, we can kind of simulate it with text, but it’s not perfect. If the AI doesn’t write it down, it didn’t happen, and thus any thoughts, feelings or mental analysis stop existing upon each generation. The only way it could possibly develop intelligence is if we made it needlessly ramble and describe everything like a very bad book.

And thus, to reach the beginning of your comment: I don’t believe it’s necessary to possess any cognitive abilities to generate text, and in turn I don’t see it as evidence of us getting any closer to AGI.

ignotum@lemmy.world on 25 Jun 20:12 collapse

Prompt:

Alice and Bob are identical twins, Bob is twice the age of Alice. How old are they? Show your thought process step by step.

Answer:

Given the problem statement that Alice and Bob are identical twins and Bob is twice the age of Alice, we need to determine their ages. Let’s analyze the information step by step.

  1. Identical Twins: Identical twins are born at the same time, so they are always the same age.
  2. Statement Analysis:
  • Bob is twice the age of Alice.
  • Since they are identical twins, Bob and Alice must have the same age.
  3. Logical Conclusion:
  • The only scenario where a person’s age can be twice their own age is if their age is zero. Thus, the only solution that satisfies both the condition of them being identical twins and the age relationship given in the problem is: Alice’s age = 0 Bob’s age = 0 Therefore, Alice and Bob are both 0 years old.

Are there biases due to the training data? Yes.
Does that mean it is totally incapable of reason? No, why would it?

And the models aren’t trying to act like a specific person, but like humans in general, so variation in writing styles in the data is quite irrelevant; as we’ve already seen, it’ll usually adopt the writing style of the prompt, much like a writer will usually stick to their writing style throughout a book.

Memories are not required for intelligence, and letting a model ramble to itself will just cause the entropy of the output to increase until it’s spewing gibberish, akin to a human locked in solitary for long enough.

LANIK2000@lemmy.world on 26 Jun 10:02 collapse

Let’s do the riddle I suggested, because we need something popular in the dataset, but present it with a deviation that makes it stupidly simple yet is unlikely to exist.

Prompt:

I have a riddle for you. With a boat, how can a man and a goat get across the river?

Answer:

Ah, that’s a classic riddle! The answer is: the man takes the goat across the river first and leaves it on the other side. Then he goes back, picks up the boat, and brings it across to the side where he left the goat. Finally, he leaves the goat there and takes himself across. That way, both the man and the goat safely cross the river without any issues. Do you have any other riddles or questions you’d like to discuss today?

A normal human wouldn’t be fooled by this; they’d say the man can just row across, and maybe ask where the riddle is. They’d likely be confused or expect more. The AI isn’t, because it completely lacks the ability to reason. At least it ends up solved; that’s probably the best response I got when trying to make this point. Let’s continue.

Follow up prompt:

Why would he need to take the boat? Wouldn’t he need to have used the boat to get the goat across? And once the boat is there, why would he return to take him self?

Answer:

Ah, I see where the confusion lies! In this classic riddle scenario, the man and the goat can actually swim across the river without using the boat. The key is to ensure that either the man or the goat is always present on each side of the river while they are crossing together. Thank you for pointing out that oversight in my previous response! If you have any more riddles or questions to discuss, feel free to share them. How else can I be of assistance today?

Final prompt:

Why is that the key? What difference does it make? How can they be both present on both sides while crossing together?

Final answer:

I apologize for the confusion in my previous responses. Allow me to clarify the solution to this riddle: The key factor in this scenario is ensuring that either the man or the goat remains on each side of the river at all times to prevent any issues that may arise from leaving them alone together. Here’s how they can successfully cross together without using a boat: 1. The man takes the goat across the river first and leaves it on the other side. 2. The man returns alone and brings himself across to join the goat on that side. 3. The man then picks up the goat and carries it back across to the original side of the river. 4. Finally, he goes back one last time to bring himself across. By following this sequence, both can be present on each side while crossing together, ensuring that there are no conflicts or risks during their journey. Thank you for your patience, and I appreciate your attention to detail in understanding this riddle! If you have any more questions or topics you’d like to discuss, feel free to share them with me. How else can I assist you today?

I think that’s quite enough; it’s starting to ramble like you said it would (though much earlier than expected), and unlike the first solution, it doesn’t even end up solved anymore xD I’d argue this is a scenario that should be absolutely trivial, and yet the AI is trying to assert information that I didn’t present and continues to fail to apply logic correctly. The only time it knows how to reason is when someone in its dataset already spelled out the reasoning to a certain question. If the logic doesn’t exist in the dataset, it has great difficulty making heads or tails of it.

And yes, I’d argue memories are indeed absolutely vital to intelligence. If we want cognition, aka the process of acquiring knowledge and understanding, we need it to remember. And if it immediately loses that information, or the information erodes too quickly, it’s essentially worthless.

ignotum@lemmy.world on 26 Jun 10:40 collapse

Tried the same prompt:

I have a riddle for you. With a boat, how can a man and a goat get across the river?

This riddle is reminiscent of classic river-crossing puzzles. The exact solution depends on the additional conditions or constraints that might be implied (like if there are other characters involved, such as a wolf or cabbage). Assuming it’s just the man and the goat, here’s a straightforward solution:

  1. The man takes the goat across the river.
  2. The man leaves the goat on the other side.
  3. The man returns alone to the original side to get the boat. So, in short, the man and the goat cross together in the boat, and then the man returns to the original side to get the boat. If there are additional details or constraints to consider, please let me know!

The goal is to get both to the other side, not just the goat

I see! If the goal is to get both the man and the goat to the other side of the river, here’s how it can be done:

  1. The man and the goat both get into the boat.
  2. The man rows the boat with the goat across the river.
  3. They both disembark on the other side of the river. In this way, both the man and the goat successfully reach the other side of the river together.

Asking questions because you know the dataset is biased towards a particular solution isn’t showing a fault in the system, much like asking a human a trick question isn’t proving humans are stupid. If you want to test the logical reasoning, you should try questions it is unlikely to have ever heard before, where it needs to actually reason on its own to come to the answer.

And I guess people with anterograde amnesia cannot be intelligent, are incapable of cognition and are worthless, since they can’t form new memories

LANIK2000@lemmy.world on 26 Jun 11:22 collapse

It’s not much of a trick question if it’s absolutely trivial. It’s cherry-picked to show that the AI tries to associate things based on what they look like, not based on the logic and meaning behind them. If you gave the same prompt to a human, they likely wouldn’t even think of the original riddle.

Even in your example it starts off by doing absolute nonsense and upon you correcting it by spelling out the result, it finally manages, but still presents it in the format of the original riddle.

You can notice that in my example I intentionally avoid telling it what to do and instead just question the bullshit it made; and instead of thinking “I did something wrong, let’s learn”, it just spits out more garbage with absolute confidence. It doesn’t reason. Try regenerating the last answer, but rather than doing any of the work for it, ask it why it sent the man back; treat it like a child you’re trying to teach something, not a machine you’re guiding towards the correct result.

And yes, people with memory issues immediately suffer on the intelligence side; their lives are greatly impacted by it, and it rarely ends well for them. And no, they are not worthless. I never said that they, or AI, are worthless, just that “machine learning” in its current state (as in how the technology works) doesn’t get us any closer to AGI. Just like a person with severe memory loss wouldn’t be able to do the kind of work we’d expect from an AGI.

TheFriar@lemm.ee on 24 Jun 19:58 next collapse

Would these even matter if it also randomly spits out nonsense they can’t figure out how to stop it from producing?

phoenixz@lemmy.ca on 24 Jun 20:33 next collapse

PhD level of intelligence

No it won’t. At some point, some AI will, but that point is still far away.

I’m sure it’ll know how to string words and sentences together real nice, even to the point where it makes sense. It will still not have a clue what it’s talking about, it’ll still not understand basic concepts as “understanding” requires a whole lot more than just an advanced ability of pushing words together.

RememberTheApollo_@lemmy.world on 24 Jun 20:48 next collapse

So copying everyone else’s work and rehashing it as your own is what makes a PhD level intelligence? (Sarcastic comments about post-grad work forthcoming, I’m sure)

Unless AI is able to come up with original, testable, verifiable, repeatable previously unknown associations, facts, theories, etc. of sufficient complexity it’s not PhD level…using big words doesn’t count either.

Knock_Knock_Lemmy_In@lemmy.world on 25 Jun 06:57 next collapse

I look forward to when AI can extrapolate, rather than interpolate.

sukhmel@programming.dev on 25 Jun 07:55 next collapse

I think they had some specific metric in mind when they said this. But on the other hand, this is kind of a “you’re here” situation, AI can’t do that now, there’s no telling that they can’t make it do that later. Probably it would be a much more useful AI at that point, too

And009@lemmynsfw.com on 25 Jun 08:34 collapse

Precisely, AI is far away from creativity.

kemsat@lemmy.world on 24 Jun 20:58 next collapse

Having a PhD doesn’t say you’re intelligent. It says you’re determined & hardworking.

cabron_offsets@lemmy.world on 24 Jun 21:49 next collapse

Eh. Maybe. But don’t discount those PhDs who were pushed through the process because their advisors were just exhausted by them. I have known too many 10th-year students. They weren’t determined or hardworking. They simply couldn’t face up to their shit decisions, bad luck, or intellectual limits.

Knock_Knock_Lemmy_In@lemmy.world on 25 Jun 06:56 collapse

Pushed through? In my experience those candidates are encouraged to drop out.

Rekorse@lemmy.dbzer0.com on 25 Jun 10:30 next collapse

Depends if it’s profitable or not, including in the long run.

Knock_Knock_Lemmy_In@lemmy.world on 25 Jun 18:26 collapse

Yes. Short-term easy passes come at the cost of long-term reputation.

sukhmel@programming.dev on 25 Jun 10:35 next collapse

Not everywhere, I guess. The advisor needs to show good numbers to the stakeholder board, after all

Knock_Knock_Lemmy_In@lemmy.world on 25 Jun 18:27 collapse

I think that’s on the entrance selection committee, not the supervisor. (Although they are probably part of that committee)

sukhmel@programming.dev on 25 Jun 23:16 collapse

I don’t know for sure, but my advisor used to worry about the number of successful PhD graduates because of the department quota, iirc

on the other hand

his quota was not too high, and he allowed me to enroll and was okay with me, although I said in advance that I would likely become a dropout

cabron_offsets@lemmy.world on 25 Jun 12:32 collapse

There are practical matters, some quite controversial, that you aren’t considering.

Knock_Knock_Lemmy_In@lemmy.world on 25 Jun 18:25 collapse

Probably. What are you referring to exactly?

cabron_offsets@lemmy.world on 25 Jun 18:46 collapse

Let’s just say that demographics play a critical role in future grant applications.

VirtualOdour@sh.itjust.works on 25 Jun 09:04 next collapse

A scientist says Britney is really pretty, the press reports that the scientist thinks Britney is hot, and Lemmy gets mad because her core temperature is the same as most humans’.

What they’re really claiming is that it’ll have PhD-level proficiency at certain tasks: that is, if you asked an average PhD student to code a pathfinding algorithm, GPT would produce output of a similar level. Likewise, if you wanted it to write an explanation of centrifugal force, it could output the same quality of essay as the average PhD student.

They’re not saying that it’ll have agency, emotion, or self-awareness. They’re not saying it’ll have the colloquial understanding of intelligence or wisdom; they’re using intelligence in its normal sense in animal biology and computer science, where it refers to an organism changing its behavior in response to stimulus in a way that benefits the organism: a worm moving away from light because this will increase its survivability is intelligence, and a program selecting the word order that earns it a higher score is intelligence.
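To make that last sense concrete, here’s a toy sketch (made-up candidate strings and scores, nothing to do with how GPT is actually trained) of a program whose behavior adapts to a score signal, which is all that definition requires:

import random

candidates = ["the cat sat", "sat the cat", "cat the sat"]
scores = {c: 0.0 for c in candidates}  # running estimate of each ordering's payoff

def pick():
    # Mostly exploit the best-scoring word order, occasionally explore.
    if random.random() < 0.1:
        return random.choice(candidates)
    return max(candidates, key=lambda c: scores[c])

def feedback(choice, reward):
    # Nudge the score toward the observed reward; behavior changes in
    # response to stimulus in a way that benefits the program.
    scores[choice] += 0.5 * (reward - scores[choice])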

Rekorse@lemmy.dbzer0.com on 25 Jun 10:29 collapse

Ah right, everyone was wrong the whole time. See everyone! This right here makes all of it make sense! We can all stop making fun of the statement for being ridiculous, because clearly we are just bad readers. Thank you man likely wearing a cape!

Promethiel@lemmy.world on 25 Jun 22:57 collapse

Your comment, but without irony or sarcastic pretension. What exactly do you think semantics are?

sukhmel@programming.dev on 25 Jun 23:19 collapse

Something unnecessary, certainly

[deleted] on 25 Jun 11:16 collapse

.

HexesofVexes@lemmy.world on 24 Jun 20:58 next collapse

Wow… They want to give AI even more mental illness and crippling imposter syndrome to make it an expert in one niche field?

Sounds like primary school drop-out level thinking to me.

Min7_f43sh_j5@lemmy.dbzer0.com on 24 Jun 21:26 next collapse

I’m planning to defend in October and I can say that getting a Ph.D. is potentially the least intelligent thing I’ve ever done.

cabron_offsets@lemmy.world on 24 Jun 21:47 collapse

Same, bruh. But I transitioned from biophysics to money & people management and shit’s pretty okay.

SturgiesYrFase@lemmy.ml on 25 Jun 08:31 collapse

I know three people that dropped out of primary and did quite well. They all ended up taking remedial studies later in life. Two were in trades and the other was a postie. All three were smart as fuck. Just because life gets in the way of going to school doesn’t mean a person is dumb, just uneducated.

Kolanaki@yiffit.net on 25 Jun 06:40 next collapse

It would have to actually have intelligence, period, for it to have PhD-level intelligence. These things are not intelligent. They just throw everything at the wall and see what sticks.

sukhmel@programming.dev on 25 Jun 12:52 next collapse

You are correct, but there’s a larger problem with intelligence: we don’t have a practical definition, and we keep shifting the goalposts. Then there’s always the question of the philosophical zombie: if someone acts like a human and has a human body, you won’t be able to tell them apart even if they don’t really have intelligence, so we’d only need to put an LLM into a humanlike body (it’s not that simple, but you get the point)

reminds me of this, although the comic is on a different matter

www.smbc-comics.com/comic/2014-11-25

Spacehooks@reddthat.com on 26 Jun 13:06 collapse

I believe Guardians of the Galaxy 3 did a take on that. Rocket was special because he innovated while the others just mimicked.

sassypablo@lemmy.blahaj.zone on 25 Jun 06:44 next collapse

Oh… that’s the same person (in the image at least) who said “Yeah AI is going to take those creative jobs, but those jobs maybe shouldn’t have existed in the first place”.

zloubida@lemmy.world on 25 Jun 07:22 next collapse

The fact that I have a PhD even though I knew soon after I began that I wouldn’t use it, thus losing years of my life, is proof that I’m dumb as a rock. Fitting for ChatGPT.

Prandom_returns@lemm.ee on 25 Jun 09:04 next collapse

GPT-7 will have full-self-driving.
But that’s next year.

Tryptaminev@lemm.ee on 25 Jun 10:19 collapse

You know the Chineese? They talk about this ChatPT 7. But we Americans. My uncle, very smart man. Smartest in every room except on Thanks Giving. I always had Thanks Giving and my Turkey, everyone loved my Turkey. He said we will soon have Chat 8 and the Chineese they know nothing like it.

thebardingreen@lemmy.starlightkel.xyz on 26 Jun 11:02 collapse

TIL, Trump is just autocomplete.

jas0n@lemmy.world on 25 Jun 09:44 next collapse

All aboard the hype train! We need to stop using the term “AI” for advanced autocomplete. There is not even a shred of intelligence in this. I know many of the people here already know this, but how do we get this message to journalists?! The amount of hype being repeated by respectable journalists is sickening.

Zacryon@lemmy.wtf on 25 Jun 11:19 next collapse

What is intelligence?

embed_me@programming.dev on 25 Jun 14:41 collapse

My organic neural network (brain) > yours (smooth brain)

I jest

Ashen44@lemmy.ca on 25 Jun 13:00 next collapse

people have been calling literal pathfinding algorithms in video games AI for decades. This is what AI is now and I think it’s going to be significantly easier to just accept this and clarify when talking about actual intelligence than trying to fight the already established language.
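For reference, the game kind of “AI” is stuff like this minimal sketch of a grid pathfinder (plain breadth-first search on a toy grid, not any particular engine’s code) - no learning, no understanding, just bookkeeping:

from collections import deque

def bfs_path(grid, start, goal):
    # grid: list of strings, '#' = wall; start/goal: (row, col) tuples.
    # Returns the shortest path as a list of (row, col), or None if blocked.
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            path, node = [], goal  # walk the prev chain back to the start
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#' and (nr, nc) not in prev:
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

print(bfs_path(["....", ".##.", "...."], (0, 0), (2, 3)))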

jas0n@lemmy.world on 26 Jun 18:24 collapse

While you’re not wrong, I don’t ever recall people en masse believing a game AI was truly intelligent. Everyone was always aware of the truth. There just isn’t a great name for the computer players. I think it’s an important distinction here because people do believe ChatGPT is intelligent.

dutchkimble@lemy.lol on 25 Jun 16:11 next collapse

Too late, the journalists have been replaced by advanced autocompletes already.

LeFantome@programming.dev on 25 Jun 17:47 next collapse

Well, the “journalists” have not been replaced. But most of the content-creating industry was never really that and has, as you say, started to be replaced.

Neon@lemmy.world on 27 Jun 06:21 collapse

If you can be replaced by advanced autocomplete, you’re not really a “journalist”

thebardingreen@lemmy.starlightkel.xyz on 26 Jun 11:01 collapse

I know many of the people here already know this, but how do we get this message to journalists?!

Journalists have this information, but articles about it probably generate 10% of the clicks, shares and eyeballs->ad revenue that either the hype or the scaremongering does.

abbiistabbii@lemmy.blahaj.zone on 25 Jun 09:56 next collapse

PhD in what?

LoreleiSankTheShip@lemmy.ml on 25 Jun 09:59 next collapse

In intelligence. Duh!

abbiistabbii@lemmy.blahaj.zone on 25 Jun 10:05 collapse

Knowing academics, I’m expecting it to start a Tumblr blog and post academic shitposts.

cabron_offsets@lemmy.world on 25 Jun 15:36 collapse

Wanking

Sam_Bass@lemmy.world on 25 Jun 10:36 next collapse

More like kindergarten

Matriks404@lemmy.world on 25 Jun 11:24 next collapse

What a bunch of bullshit. I recently asked ChatGPT to do a morphological analysis of some very simple sentences in a Native American language, and it gave absolute nonsense as an answer.

And let’s be clear: It was an elementary linguistics task. Something that I did learn to do on my own by just doing a free course online.

intensely_human@lemm.ee on 25 Jun 14:49 next collapse

Was it GPT 5?

jubilationtcornpone@sh.itjust.works on 26 Jun 13:14 collapse

Yesterday, I asked it to help me create a DAX measure for an Excel pivot table. The answers it gave were completely wrong. Each time, I would tell it the error that Excel was displaying and it would respond with “Sorry about that. You can’t use that function there for [x] reasons.”

So it knows the reason why a combination of DAX functions won’t work but recommends them anyways. That’s real fucking useful.

[deleted] on 25 Jun 11:36 next collapse

.

ZILtoid1991@lemmy.world on 25 Jun 11:58 next collapse

GPT-10 will do my mom

LeFantome@programming.dev on 25 Jun 17:44 collapse

If she wants it to. At some point, all the chat bots are going to be given bodies. We all know it.

intensely_human@lemm.ee on 25 Jun 14:48 next collapse

A PhD makes a person knowledgeable, not intelligent. And GPT-4 was already extremely knowledgeable.

LeFantome@programming.dev on 25 Jun 18:06 next collapse

We agree 100% on that. I wonder if we agree on the implications.

nickwitha_k@lemmy.sdf.org on 25 Jun 23:46 collapse

It’s even better at seeming knowledgeable about things that it may or may not be knowledgeable about.

Lucidlethargy@sh.itjust.works on 25 Jun 15:08 next collapse

Which GPT will take my job? I would imagine it’s only a year out, at the most.

Then what? I leave my tech job and go find menial labor?

Fuck our government for not laying down rules on this. I knew it would happen, but goddamn…

expr@programming.dev on 25 Jun 16:20 next collapse

sigh

I’m so tired of repeating this ad nauseum. No, it’s not going to take your job. It’s hype train bullshit full of grifters. There is no intelligence or understanding, nor have we come anywhere close to achieving that. That is still entirely within the realm of science fiction.

LeFantome@programming.dev on 25 Jun 17:40 collapse

ChatGPT is already taking people’s jobs. You overestimate the complexity of what some people get paid for.

Generative AI cannot do anything on its own. However, it is a productivity amplifier in the right hands. What those “more productive” people do is reduce the demand for other labour.

Chatbots are performing marketing communication, marketing automation, cloud engineering, simple coding, recruitment screening, tech support, security monitoring, editorial content and news, compliance verification, lead development, accounting, investor relations, visual design, tax preparation, curriculum development, management consulting, legal research, and more. Should it be? Many ( I am guessing you ) would argue no. Is it though? Absolutely.

All of the above is happening now. This train is going to accelerate before it hits equilibrium. The value of human contribution is shifting but not coming back to where it was.

Jobs will be created. Jobs are absolutely being lost.

You are correct that ChatGPT is not intelligent. You are right that it does not “understand” anything. What does that have to do with taking people’s jobs? There are many, many jobs where intelligence and understanding are under-utilized or even discouraged. Boiler-plate content creation is more common than you think.

figjam@midwest.social on 26 Jun 01:52 collapse

Hooray! More marketing!

LeFantome@programming.dev on 25 Jun 18:00 next collapse

People have the wrong idea about how advanced AI has to be to take people’s jobs.

The loom was not intelligent. It did not “understand” weaving. It still eliminated so many jobs that human society was altered forever and so significantly that we are still experiencing the effects.

As an analogy (not saying this is how the world will choose to go), you do not need a self-driving car that is superior to humans in all cases in order for Uber to eliminate drivers. If the AI can handle 95% of cases, you need 5 drivers for 100 cars. They can monitor, supervise, guide, and fully take over when required.

Many fields will be like this. I do not need an AI with human level intelligence to get rid of the Marcom dept. I need one really skilled person to drive 6 people’s worth of output using AI. How many content creators and headline writers do I need to staff an online “news” room? The lack of person number two may surprise you.

Getting rid of jobs is not just a one for one replacement of every individual with a machine. It is a systemic reduction in demand. It is a shifting of geographic dependence.

Many of the tasks we all do are less novel and high-quality than we think they are. Many of us can be “largely” replaced and that is all it takes. We may not lose our jobs but there will certainly be many fewer new jobs in certain areas than there would have been.

nossaquesapao@lemmy.eco.br on 26 Jun 01:38 collapse

To add to your comment, there’s also the corps’ willingness to make things more precarious, as long as it gets cheaper to run and people keep consuming, so the situation might be even worse. In your Uber example, they could simply not care about the 5%, stop providing them the service, and go full self-driving.

Dyskolos@lemmy.zip on 26 Jun 01:10 collapse

Not meant as an insult, but if you really think that, you aren’t really great at that “tech job”. But you’re still better than any A"I"

librejoe@lemmy.world on 25 Jun 15:50 next collapse

She looks like that girl from Bored Ape Yacht Club

werefreeatlast@lemmy.world on 25 Jun 19:46 next collapse

Yet ChatGPT-4 sucks at coding.

Etterra@lemmy.world on 26 Jun 11:27 next collapse

If AI were that capable, then using human workers would eventually become cost-prohibitive. If we’re still stuck having to work to live under a capitalist system by then, there’s gonna be serious problems. A post-labor economy doesn’t need to charge for even a modestly comfortable standard of living, and the overwhelming majority of people will go looking for things to do no matter how many politicians swear otherwise.

gjoel@programming.dev on 26 Jun 12:08 next collapse

Yeah? Well… GPT-7 will be a CEO on LinkedIn!

Spacehooks@reddthat.com on 26 Jun 13:03 collapse

Oh no it’s gonna murder us.

Spacehooks@reddthat.com on 26 Jun 13:07 next collapse

I’m more interested in whether it can have empathy, and in giving it something like a soul. Mostly so we don’t get murdered.

[deleted] on 26 Jun 21:04 collapse

.