We're all on the spectrum
from fossilesque@mander.xyz to science_memes@mander.xyz on 19 Oct 15:06
https://mander.xyz/post/19552956

#science_memes

OrnateLuna@lemmy.blahaj.zone on 19 Oct 15:54

The fun part is that we don’t

marcos@lemmy.world on 19 Oct 16:06

We don’t. We keep just doing things and good things keep happening afterwards.

We don’t even know if those two facts are linked in any way.

degen@midwest.social on 19 Oct 16:39

Nearly irrelevant xkcd

![xkcd: Dependency](https://imgs.xkcd.com/comics/dependency.png)

At least in software we know where the linchpins are on some level.

Azuth@lemmy.today on 20 Oct 04:51

Descartes said it best. The only thing I can know for sure is that I do, in fact, exist.

taiyang@lemmy.world on 19 Oct 18:43

Frequentist statistics are really… silly in a way. And this coming from someone who has to teach it. Sure, p is less than 5%, but you sampled 100,000 people -- an effect size of 0.05 would be significant with a sample that big. “bUt ItS sIgNiFiCaNt”… Oy.
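
To put a number on it (an illustrative sketch, not tied to any real study): with about 100,000 people split into two groups, an effect of 0.05 standard deviations easily clears p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 50_000                      # per group, ~100,000 people total
effect = 0.05                   # a tiny effect: 0.05 standard deviations

control = rng.normal(0.0, 1.0, n)
treated = rng.normal(effect, 1.0, n)

t, p = stats.ttest_ind(treated, control)
print(f"t = {t:.2f}, p = {p:.2g}")                         # p lands far below 0.05
print(f"difference ≈ {treated.mean() - control.mean():.3f} SD")
```

The test says “significant”; it says nothing about whether 0.05 SD matters in practice.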

Contramuffin@lemmy.world on 19 Oct 19:44

I get very suspicious if a paper samples multiple groups and still uses p. You would use q in that case, and the fact that they didn’t suggests that nothing came up positive.

Still, in my opinion it’s generally OK if they only use the screen as a starting point and do follow-up experiments afterwards
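
For context, the q here typically means false-discovery-rate-adjusted p-values. A rough sketch with made-up numbers (Benjamini-Hochberg adjustment via statsmodels) shows how raw p-values just under 0.05 can stop being hits after adjustment:

```python
from statsmodels.stats.multitest import multipletests

raw_p = [0.001, 0.012, 0.030, 0.048, 0.200, 0.700]   # one raw p per tested group (made up)
reject, q, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")

for p, qv, hit in zip(raw_p, q, reject):
    print(f"p = {p:.3f} -> q = {qv:.3f}  {'significant' if hit else 'ns'}")
```

In this sketch, 0.030 and 0.048 survive as raw p-values but not as q-values, which is exactly the kind of result that goes quiet when a paper only reports p.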

taiyang@lemmy.world on 19 Oct 20:33

Yeah, I used to work in a field with huge samples, so significance wasn’t really all that useful. I usually just report the significant coefficients and try to make clear what changes by model. For instance, if a type of curriculum showed improvements on test scores, you simply say by how much and, possibly, illustrate it by saying a person went from the 50th percentile to the 55th.
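
That percentile framing is easy to compute if you’re willing to assume roughly normal test scores (the coefficient below is hypothetical):

```python
from scipy.stats import norm

effect_sd = 0.13                  # hypothetical coefficient: +0.13 SD on test scores
new_pct = norm.cdf(norm.ppf(0.50) + effect_sd)
print(f"+{effect_sd} SD moves the median student to about "
      f"the {100 * new_pct:.0f}th percentile")     # ~55th
```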

Every field varies, though. I find it crazy how much the psychologists I’ve worked with cared about r-squared. To each their own, I guess.