I was right, it is
from Lukstru@fedia.io to science_memes@mander.xyz on 28 Sep 10:04
https://fedia.io/m/science_memes@mander.xyz/t/2778665

Watched Primer’s new video (www.youtube.com/watch?v=GkiITbgu0V0 watch it!) and stumbled over this gem

#science_memes


abbadon420@sh.itjust.works on 28 Sep 11:11 next collapse

Took me 17 minutes to get the joke. Good video though.

UnRelatedBurner@sh.itjust.works on 29 Sep 11:58 collapse

can I get my instant gratification please?

abbadon420@sh.itjust.works on 29 Sep 12:09 collapse

Sure, here’s a TLDW.
Imagine you’re training an AI model. You feed it data and test if it comes up with a good answer. Of course it doesn’t do that right away, thats why you.re training it. You have to correct it.

If you correct the model directly on the raw errors, you get overcompensation problems. If you instead correct it on a measure derived from the errors, you get a much better correction.

The term for that is LOSS. You correct on LOSS instead of on pure ERROR.
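
For the curious, here’s a minimal sketch of that idea in Python. The toy data, the single-parameter linear model, the squared-error loss, and the learning rate are all illustrative assumptions for this comment, not anything taken from the video:

```python
# Sketch: "correcting on loss" means updating the weight using the gradient
# of a loss function (squared error here), not the raw signed error itself.

# Toy data: we want the model to learn y = 2 * x
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # single trainable parameter
lr = 0.05  # learning rate (illustrative choice)

for step in range(100):
    grad = 0.0
    for x, y in data:
        pred = w * x
        error = pred - y       # raw error (signed difference)
        # loss = error ** 2    # squared-error loss for this sample
        grad += 2 * error * x  # d(loss)/dw: we correct on the loss gradient
    w -= lr * grad / len(data) # gradient-descent update

print(f"learned w = {w:.3f}")  # approaches 2.0
```

Squaring the error gives a smooth measure of "how wrong" the model is, and following its gradient scales the correction sensibly instead of overshooting on every raw mistake.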

UnRelatedBurner@sh.itjust.works on 29 Sep 12:51 collapse

haha, loss

sniggleboots@europe.pub on 28 Sep 16:18 next collapse

I watched that video mere hours ago!

Septimaeus@infosec.pub on 28 Sep 20:36 next collapse

Gradient descent? It’s loss.

AtariDump@lemmy.world on 29 Sep 15:22 collapse

[image: https://lemmy.world/pictrs/image/db669661-f77f-489e-9297-56c1105a1c4b.png]