Researchers claim GPT-4 passed the Turing test (bgr.com)
from vegeta@lemmy.world to technology@lemmy.world on 15 Jun 2024 04:02
https://lemmy.world/post/16543472

#technology

threaded - newest

vegeta@lemmy.world on 15 Jun 2024 04:03 next collapse

The Study

arxiv.org/html/2405.08007v1

massive_bereavement@fedia.io on 15 Jun 2024 07:37 collapse

The interrogators seem completely lost and clearly haven't talk with an NLP chatbot before.

That said, this gives me the feeling that eventually they could use it to run scams (or more effective robocalls).

treefrog@lemm.ee on 15 Jun 2024 11:45 collapse

I imagine some people already are.

Kolanaki@yiffit.net on 15 Jun 2024 04:04 next collapse

It took them this long?

E: There are way too many people ITT that think a Turing test is hard to pass, and don’t seem to understand what it means for something to pass one. It’s such a low fucking bar, it might as well be meaningless.

TheBigBrother@lemmy.world on 15 Jun 2024 04:07 next collapse

Oh no!! the AImageddon it’s closer everyday… Skynet it’s coming for us!!

harrys_balzac@lemmy.dbzer0.com on 15 Jun 2024 09:08 collapse

Skynet will gets the dumb ones first by getting them put toxic glue on thir pizzas then the arrogant ones will build the Terminators by using reverse psychology.

Hackworth@lemmy.world on 15 Jun 2024 04:17 next collapse

<img alt="" src="https://lemmy.world/pictrs/image/02a98e2c-d6f2-4f65-b583-80fedfe1e716.webp">

Hackworth@lemmy.world on 15 Jun 2024 04:42 collapse

<img alt="" src="https://lemmy.world/pictrs/image/f20a9704-a05a-4e89-9dbf-5d787ab5b7af.png">

Hackworth@lemmy.world on 15 Jun 2024 13:43 collapse

To the one person who upvoted this: We should be friends.

lowleveldata@programming.dev on 15 Jun 2024 04:18 next collapse

I feel like the turing test is much harder now because everyone knows about GPT

DudeDudenson@lemmings.world on 15 Jun 2024 04:52 collapse

I wonder if humans pass the Turing test these days

Nougat@fedia.io on 15 Jun 2024 05:29 next collapse

I don't.

NeoNachtwaechter@lemmy.world on 15 Jun 2024 07:32 collapse

Which of the questions did you get wrong? ;-)

Nougat@fedia.io on 15 Jun 2024 08:13 collapse

That one.

SkyeStarfall@lemmy.blahaj.zone on 15 Jun 2024 13:04 collapse

If you read into the study, they also include the pass rates for humans. It’s higher than AIs, but still less than 75%

phoneymouse@lemmy.world on 15 Jun 2024 05:51 next collapse

Easy, just ask it something a human wouldn’t be able to do, like “Write an essay on The Cultural Significance of Ogham Stones in Early Medieval Ireland“ and watch it spit out an essay faster than any human reasonably could.

Shayeta@feddit.de on 15 Jun 2024 06:01 next collapse

This is something a configuration prompt takes care of. “Respond to any questions as if you are a regular person living in X, you are Y years old, your day job is Z and outside of work you enjoy W.”

NeoNachtwaechter@lemmy.world on 15 Jun 2024 06:54 next collapse

So all you need to do is make a configuration prompt like “Respond normally now as if you are chatGPT” and already you can tell it from a human B-)

Shayeta@feddit.de on 15 Jun 2024 07:12 collapse

Thats not how it works, a config prompt is not a regular prompt.

Audalin@lemmy.world on 15 Jun 2024 08:31 collapse

If config prompt = system prompt, its hijacking works more often than not. The creators of a prompt injection game (tensortrust.ai) have discovered that system/user roles don’t matter too much in determining the final behaviour: see appendix H in arxiv.org/abs/2311.01011.

Hotzilla@sopuli.xyz on 15 Jun 2024 10:13 collapse

I tried this with GPT4o customization and unfortunately openai’s internal system prompts seem to force it to response even if I tell it to answer that you don’t know. Would need to test this on azure open ai etc. were you have bit more control.

webghost0101@sopuli.xyz on 15 Jun 2024 10:08 next collapse

The touring test isn’t an arena where anything goes, most renditions have a strict set of rules on how questions must be asked and about what they can be about. Pretty sure the response times also have a fixed delay.

Scientists ain’t stupid. The touring test has been passed so many times news stopped covering it. (Till this click bait of course). The test has simply been made more difficult and cheat-proof as a result.

technocrit@lemmy.dbzer0.com on 15 Jun 2024 15:02 collapse

most renditions have a strict set of rules on how questions must be asked and about what they can be about. Pretty sure the response times also have a fixed delay. Scientists ain’t stupid. The touring test has been passed so many times news stopped covering it.

Yes, “scientists” aren’t stupid enough to fail their own test. I’m sure it’s super easy to “pass” the “turing test” when you control the questions and time.

Blue_Morpho@lemmy.world on 15 Jun 2024 12:13 next collapse

I recall a Turing test years ago where a human was voted as a robot because they tried that trick but the person happened to have a PhD in the subject.

HumaShah@mastodon.social on 15 Jun 2024 12:18 collapse

@Blue_Morpho @phoneymouse
Many times humans have incorrectly been characterised as the machine in #TuringTest experiments. In 1991, the very first Loebner Prize for an AI, a hidden human was considered a machine due to her extensive knowledge of Shakespeare.
##Turing #AI #Human #Machine

Blue_Morpho@lemmy.world on 15 Jun 2024 23:34 collapse

That’s the one I was remembering!

JohnEdwa@sopuli.xyz on 15 Jun 2024 13:26 collapse

Turing tests aren’t done in real time exactly to counter that issue, so the only thing you could judge would be “no human would bother to write all that”.

However, the correct answer to seem human, and one which probably would have been prompted to the AI anyway, is “lol no.”
It’s not about what the AI could do, it’s what it thinks is the correct answer to appear like a human.

technocrit@lemmy.dbzer0.com on 15 Jun 2024 14:59 collapse

Turing tests aren’t done in real time exactly to counter that issue

To counter the issue of a completely easy and obvious fail? I could see how that would be an issue for AI hucksters.

CabbageRelish@midwest.social on 15 Jun 2024 06:34 next collapse

Chatbots passed the Turing test ages ago, it’s not a good test.

NeoNachtwaechter@lemmy.world on 15 Jun 2024 07:00 collapse

it’s not a good test.

Of course you can’t use an old set of questions. It’s useless.

The turing test is an abstract concept. The actual questions need to be adapted with every new technology. Maybe even with every execution of a test.

NeoNachtwaechter@lemmy.world on 15 Jun 2024 06:43 next collapse

Turing test? LMAO.

I asked it simply to recommend me a supermarket in our next bigger city here.

It came up with a name and it told a few of it’s qualities. Easy, I thought. Then I found out that the name does not exist. It was all made up.

You could argue that humans lie, too. But only when they have a reason to lie.

Chozo@fedia.io on 15 Jun 2024 07:01 next collapse

The Turing test doesn't factor for accuracy.

Lmaydev@programming.dev on 15 Jun 2024 08:22 collapse

That’s not what LLMs are for. That’s like hammering a screw and being irritated it didn’t twist in nicely.

The turing test is designed to see if an AI can pass for human in a conversation.

NeoNachtwaechter@lemmy.world on 15 Jun 2024 08:54 collapse

turing test is designed to see if an AI can pass for human in a conversation.

I’m pretty sure that I could ask a human that question in a normal conversation.

The idea of the Turing test was to have a way of telling humans and computers apart. It is NOT meant for putting some kind of ‘certified’ badge on that computer, and …

That’s not what LLMs are for.

…and you can’t cry ‘foul’ if I decide to use a question for which your computer was not programmed :-)

webghost0101@sopuli.xyz on 15 Jun 2024 10:22 next collapse

In a normal conversation sure.

In this kind Turing tests you may be disqualified as a jury for asking that question.

Good science demands controlled areas and defined goals. Everyone can organize a homebrew touring tests but there also real proper ones with fixed response times, lengths.

Some touring tests may even have a human pick the best of 5 to provide to the jury. There are so many possible variations depending on test criteria.

NeoNachtwaechter@lemmy.world on 15 Jun 2024 11:21 collapse

you may be disqualified as a jury for asking that question.

You want to read again about the scientific basics of the Turing test (hint: it is not a tennis match)

webghost0101@sopuli.xyz on 15 Jun 2024 13:34 collapse

There is no competition in science (or at least there shouldn’t be). You are subjectively disqualified from judging llm’s if you draw your conclusions on an obvious trap which you yourself have stated is beyond the scope of what it was programmed to do.

Lmaydev@programming.dev on 15 Jun 2024 11:05 collapse

It wasn’t programmed for any questions. It was trained hehe

HerzogVonWiesel@sh.itjust.works on 15 Jun 2024 07:05 next collapse

ITT: nobody understands what the Turing Test really is

webghost0101@sopuli.xyz on 15 Jun 2024 10:28 collapse

To clarify:

People seem to legit think the jury talks to the bot in real time and can ask about literally whatever they want.

Its rather insulting to the scientist that put a lot of thought into organizing a controlled environment to properly test defined criteria.

technocrit@lemmy.dbzer0.com on 15 Jun 2024 14:57 collapse

Its rather insulting to the scientist that put a lot of thought into organizing a controlled environment to properly test defined criteria.

lmao. These “scientists” are frauds. 500 people is not a legit sample site. 5 minutes is a pathetic amount of time. 54% is basically the same as guessing. And most importantly the “Turing Test” is not a scientific test that can be “passed” with one weak study.

Instead of bootlicking “scientists”, we should be harshly criticizing the overwhelming tide of bad science and pseudo-science.

webghost0101@sopuli.xyz on 15 Jun 2024 17:04 next collapse

The reporting are big clickbait but that doesn’t mean there is nothing left to learn from the old touring tests.

I dont know what the goal was they had in mind. It could just as well be “testing how overhyped the touring tests is when manipulated tests are shared with the media”

I sincerely doubt it but i do give them benefits of the doubt.

kogasa@programming.dev on 17 Jun 2024 01:00 collapse

I don’t think the methodology is the issue with this one. 500 people can absolutely be a legitimate sample size. Under basic assumptions about the sample being representative and the effect size being sufficiently large you do not need more than a couple hundred participants to make statistically significant observations. 54% being close to 50% doesn’t mean the result is inconclusive. With an ideal sample it means people couldn’t reliably differentiate the human from the bot, which is presumably what the researchers believed is of interest.

doodle967@lemdro.id on 15 Jun 2024 07:49 next collapse

The Turing test is about tricking people into believing that LLMs are humans, and given that the public still doesn’t use LLMs, it’s much easier to fool them. Over time, this deception will decrease as people interact with LLMs.

tourist@lemmy.world on 15 Jun 2024 08:10 next collapse

The participants judged GPT-4 to be human a shocking 54 percent of the time.

ELIZA, which was pre-programmed with responses and didn’t have an LLM to power it, was judged to be human just 22 percent of the time

Okay, 22% is ridiculously high for ELIZA. I feel like any half sober adult could clock it as a bot by the third response, if not immediately.

Try talking to the thing: web.njit.edu/~ronkowit/eliza.html

I refuse to believe that 22% didn’t misunderstand the task or something.

Downcount@lemmy.world on 15 Jun 2024 08:46 next collapse

Okay, 22% is ridiculously high for ELIZA. I feel like any half sober adult could clock it as a bot by the third response, if not immediately.

I did some stuff with Eliza back then. One time I set up an Eliza database full of insults and hooked it up to my AIM account.

It went so well, I had to apologize to a lot of people who thought I was drunken or went crazy.

Eliza wasn’t thaaaaat bad.

KISSmyOSFeddit@lemmy.world on 15 Jun 2024 09:00 next collapse

14% of people can’t do anything more complicated than deleting an email on a computer.
26% can’t use a computer at all.

www.nngroup.com/articles/computer-skill-levels/

So right off the bat, 40% probably don’t even know what a chatbot is.

webghost0101@sopuli.xyz on 15 Jun 2024 10:02 next collapse

The public versions of the ais used in Turing tests usually have less computing power. The test itself is often also highly specific in what and how questions can be asked.

This hardly news because models have passed the test before and as a result the test is made more difficult. It says nothing about intelligence and only about the ability to convincingly simulate a human conversation.

CaptainBasculin@lemmy.ml on 15 Jun 2024 11:58 next collapse

This is the same bot. There’s no way this passed the test.

<img alt="" src="https://lemmy.ml/pictrs/image/ce2fa109-b52a-48ea-8c46-f2f1d92f3b96.jpeg">.

catastrophicblues@lemmy.ca on 20 Jun 2024 22:33 collapse

Forget psychotherapy, it can barely understand anything:

> Hello, I am Eliza. I'll be your therapist today. 
* Hello! How are you?
> Would you prefer if I were not ? 
* Not what?
> Have you asked such questions before? 
* No
> You are being a bit negative and that won't help us move forward. 
* What are you smoking?
> Perhaps in your fantasies I am smoking?
HumaShah@mastodon.social on 15 Jun 2024 12:14 next collapse

@tourist @vegeta

In the 1970s, psychiatrists couldn't distinguish between PARRY, a chatbot simulating paranoia, and a human patient suffering from schizophrenia. So this 'bot convinces judges that it's a human' phenomenon is not new and tells us more about how humans think.
#AI #DeceptionDetection #Chatbots #TuringTest #LLMs #GPT4

dustyData@lemmy.world on 15 Jun 2024 13:54 next collapse

You underestimate how dumb some people can be.

technocrit@lemmy.dbzer0.com on 15 Jun 2024 14:48 next collapse

It was a 5 minute test. People probably spent 4 of those minutes typing their questions.

This is pure pseudo-science.

catastrophicblues@lemmy.ca on 20 Jun 2024 22:31 collapse

> Hello, I am Eliza. I'll be your therapist today. 
* Hello! How are you?
> Would you prefer if I were not ? 
* Not what?
> Have you asked such questions before? 
* No
> You are being a bit negative and that won't help us move forward. 
* What are you smoking?
> Perhaps in your fantasies I am smoking?

Yeah, it took me one message lol

[deleted] on 15 Jun 2024 08:45 next collapse

.

MonkderDritte@feddit.de on 15 Jun 2024 09:04 next collapse

So it’s good enough to fool people into thinking it’s a human?

Cool. Anyway…

harrys_balzac@lemmy.dbzer0.com on 15 Jun 2024 09:09 next collapse

So…GPT-4 is gay? Or are we talking about a different kind of test?

bandwidthcrisis@lemmy.world on 15 Jun 2024 09:32 next collapse

Did they try asking how to stop cheese falling off pizza?

Edit: Although since that idea came from a human, maybe I’ve failed.

foggy@lemmy.world on 15 Jun 2024 10:46 next collapse

Meanwhile, me:

(Begin)

[Prints error statement showing how I navigated to a dir, checked to see a files permissions, ran whoami, triggered the error]

Chatgpt4: First, make sure you’ve navigated to the correct directory.

cd /path/to/file

Next, check the permissions of the file

ls -la

Finally, run the command

[exact command I ran to trigger the error]>

Me: stop telling me to do stuff that I have evidently done. My prompt included evidence of me having do e all of that already. How do I handle this error?

(return (begin))

NutWrench@lemmy.world on 15 Jun 2024 12:02 next collapse

Each conversation lasted a total of five minutes. According to the paper, which was published in May, the participants judged GPT-4 to be human a shocking 54 percent of the time. Because of this, the researchers claim that the large language model has indeed passed the Turing test.

That’s no better than flipping a coin and we have no idea what the questions were. This is clickbait.

SkyeStarfall@lemmy.blahaj.zone on 15 Jun 2024 12:56 next collapse

While I agree it’s a relatively low percentage, not being sure and having people pick effectively randomly is still an interesting result.

The alternative would be for them to never say that gpt-4 is a human, not 50% of the time.

Hackworth@lemmy.world on 15 Jun 2024 13:02 collapse

Participants only said other humans were human 67% of the time.

SkyeStarfall@lemmy.blahaj.zone on 15 Jun 2024 13:05 collapse

Which makes the difference between the AIs and humans lower, likely increasing the significance of the result.

Hackworth@lemmy.world on 15 Jun 2024 13:14 collapse

Aye, I’d wager Claude would be closer to 58-60. And with the model probing Anthropic’s publishing, we could get to like ~63% on average in the next couple years? Those last few % will be difficult for an indeterminate amount of time, I imagine. But who knows. We’ve already blown by a ton of “limitations” that I thought I might not live long enough to see.

dustyData@lemmy.world on 15 Jun 2024 17:24 collapse

The problem with that is that you can change the percentage of people who identify correctly other humans as humans. Simply by changing the way you setup the test. If you tell people they will be, for certain, talking to x amount of bots, they will make their answers conform to that expectation and the correctness of their answers drop to 50%. Humans are really bad at determining whether a chat is with a human or a bot, and AI is no better either. These kind of tests mean nothing.

Hackworth@lemmy.world on 15 Jun 2024 17:37 collapse

Humans are really bad at determining whether a chat is with a human or a bot

Eliza is not indistinguishable from a human at 22%.

Passing the Turing test stood largely out of reach for 70 years precisely because Humans are pretty good at spotting counterfeit humans.

This is a monumental achievement.

dustyData@lemmy.world on 15 Jun 2024 22:22 collapse

First, that is not how that statistic works, like you are reading it entirely wrong.

Second, this test is intentionally designed to be misleading. Comparing ChatGPT to Eliza is the equivalent of me claiming that the Chevy Bolt is the fastest car to ever enter a highway by comparing it to a 1908 Ford Model T. It completely ignores a huge history of technological developments. There have been just as successful chatbots before ChatGPT, just they weren’t LLM and they were measured by other methods and systematic trials. Because the Turing test is not actually a scientific test of anything, so it isn’t standardized in any way. Anyone is free to claim to do a Turing Test whenever and however without too much control. It is meaningless and proves nothing.

[deleted] on 15 Jun 2024 22:47 collapse

.

Hackworth@lemmy.world on 15 Jun 2024 13:01 next collapse

On the other hand, the human participant scored 67 percent, while GPT-3.5 scored 50 percent, and ELIZA, which was pre-programmed with responses and didn’t have an LLM to power it, was judged to be human just 22 percent of the time.

54% - 67% is the current gap, not 54 to 100.

Cybermonk_Taiji@r.nf on 15 Jun 2024 13:12 next collapse

No matter how many times I flip a coin, I don’t think the coin will convince me it is sentient.

NutWrench@lemmy.world on 15 Jun 2024 14:17 next collapse

The whole point of the Turing test, is that you should be unable to tell if you’re interacting with a human or a machine. Not 54% of the time. Not 60% of the time. 100% of the time. Consistently.

They’re changing the conditions of the Turing test to promote an AI model that would get an “F” on any school test.

bob_omb_battlefield@sh.itjust.works on 15 Jun 2024 16:37 collapse

But you have to select if it was human or not, right? So if you can’t tell, then you’d expect 50%. That’s different than “I can tell, and I know this is a human” but you are wrong… Now that we know the bots are so good, I’m not sure how people will decide how to answer these tests. They’re going to encounter something that seems human-like and then essentially try to guess based on minor clues… So there will be inherent randomness. If something was a really crappy bot then it wouldn’t ever fool anyone and the result would be 0%.

dustyData@lemmy.world on 15 Jun 2024 23:09 collapse

No, the real Turing test has a robot trying to convince an interrogator that they are a female human, and a real female human trying to help the interrogator to make the right choice. This is manipulative rubbish. The experiment was designed from the start to manufacture these results.

BrianTheeBiscuiteer@lemmy.world on 15 Jun 2024 15:34 collapse

It was either questioned by morons or they used a modified version of the tool. Ask it how it feels today and it will tell you it’s just a program!

KairuByte@lemmy.dbzer0.com on 17 Jun 2024 05:29 collapse

The version you interact with on their site is explicitly instructed to respond like that. They intentionally put those roadblocks in place to prevent answers they deem “improper”.

If you take the roadblocks out, and instruct it to respond as human like as possible, you’d no longer get a response that acknowledges it’s an LLM.

werefreeatlast@lemmy.world on 15 Jun 2024 13:37 next collapse

It does great at Python programming… everything it tries is wrong until I try and I tell tell it to do it again.

A_A@lemmy.world on 15 Jun 2024 14:46 collapse

Edit :
oops : you were saying it is like a human since it does errors ? maybe i “wooshed”.


Hi @werefreeatlast,
i had successes asking LLaMA 3 70B with simple specific questions …
Context : i am bad at programming and it help me at least to see how i could use a few function calls in C from Python … or simply drop Python and do it directly in C.
Like you said, i have to re-write & test … but i have a possible path forward. Clearly you know what you do on a computer but i’m not really there yet.

werefreeatlast@lemmy.world on 16 Jun 2024 01:23 collapse

But people don’t just know code when you ask them. The llms fo because they got trained on that code. It’s robotic in nature, not a natural reaction yet.

dhork@lemmy.world on 15 Jun 2024 13:56 next collapse

In order for an AI to pass the Turing test, it must be able to talk to someone and fool them into thinking that they are talking to a human.

So, passing the Turing Test either means the AI are getting smarter, or that humans are getting dumber.

zbyte64@awful.systems on 15 Jun 2024 14:35 next collapse

Detecting an LLM is a skill.

Kolrami@lemmy.world on 15 Jun 2024 14:57 collapse

Humans are as smart as they ever were. Tech is getting better. I know someone who was tricked by those deepfake Kelly Clarkson weight loss gummy ads. It looks super fake to me, but it’s good enough to trick some people.

dustyData@lemmy.world on 15 Jun 2024 14:08 next collapse

Turing test isn’t actually meant to be a scientific or accurate test. It was proposed as a mental exercise to demonstrate a philosophical argument. Mainly the support for machine input-output paradigm and the blackbox construct. It wasn’t meant to say anything about humans either. To make this kind of experiments without any sort of self-awareness is just proof that epistemology is a weak topic in computer science academy.

Specially when, from psychology, we know that there’s so much more complexity riding on such tests. Just to name one example, we know expectations alter perception. A Turing test suffers from a loaded question problem. If you prompt a person telling them they’ll talk with a human, with a computer program or announce before hand they’ll have to decide whether they’re talking with a human or not, and all possible combinations, you’ll get different results each time.

Also, this is not the first chatbot to pass the Turing test. Technically speaking, if only one human is fooled by a chatbot to think they’re talking with a person, then they passed the Turing test. That is the extend to which the argument was originally elaborated. Anything beyond is alterations added to the central argument by the author’s self interests. But this is OpenAI, they’re all about marketing aeh fuck all about the science.

EDIT: Just finished reading the paper, Holy shit! They wrote this “Turing originally envisioned the imitation game as a measure of intelligence” (p. 6, Jones & Bergen), and that is factually wrong. That is a lie. “A variety of objections have been raised to this idea”, yeah no shit Sherlock, maybe because he never said such a thing and there’s absolutely no one and nothing you can quote to support such outrageous affirmation. This shit shouldn’t ever see publication, it should not pass peer review. Turing never, said such a thing.

kogasa@programming.dev on 17 Jun 2024 02:00 collapse

Your first two paragraphs seem to rail against a philosophical conclusion made by the authors by virtue of carrying out the Turing test. Something like “this is evidence of machine consciousness” for example. I don’t really get the impression that any such claim was made, or that more education in epistemology would have changed anything.

In a world where GPT4 exists, the question of whether one person can be fooled by one chatbot in one conversation is long since uninteresting. The question of whether specific models can achieve statistically significant success is maybe a bit more compelling, not because it’s some kind of breakthrough but because it makes a generalized claim.

Re: your edit, Turing explicitly puts forth the imitation game scenario as a practicable proxy for the question of machine intelligence, “can machines think?”. He directly argues that this scenario is indeed a reasonable proxy for that question. His argument, as he admits, is not a strongly held conviction or rigorous argument, but “recitations tending to produce belief,” insofar as they are hard to rebut, or their rebuttals tend to be flawed. The whole paper was to poke at the apparent differences between (a futuristic) machine intelligence and human intelligence. In this way, the Turing test is indeed a measure of intelligence. It’s not to say that a machine passing the test is somehow in possession of a human-like mind or has reached a significant milestone of intelligence.

academic.oup.com/mind/article/LIX/236/433/986238

dustyData@lemmy.world on 17 Jun 2024 10:44 collapse

Turing never said anything of the sort, “this is a test for intelligence”. Intelligence and thinking are not the same. Humans have plenty of unintelligent behaviors, that has no bearing on their ability to think. And plenty of animals display intelligent behavior but that is not evidence of their ability to think. Really, if you know nothing about epistemology, just shut up, nobody likes your stupid LLMs and the marketing is tiring already, and the copyright infringement and rampant privacy violations and property theft and insatiable power hunger are not worthy.

kogasa@programming.dev on 17 Jun 2024 13:35 collapse

U good?

technocrit@lemmy.dbzer0.com on 15 Jun 2024 14:47 collapse

  • 500 people - meaningless sample
  • 5 minutes - meaningless amount of time
  • The people bootlicking “scientists” obviously don’t understand science.
yetAnotherUser@lemmy.ca on 15 Jun 2024 21:19 collapse

Add in a test that wasn’t made to be accurate and was only used to make a point, like what other comments mention