Google releases VaultGemma, its first privacy-preserving LLM (arstechnica.com)
from sabreW4K3@lazysoci.al to technology@beehaw.org on 16 Sep 07:05
https://lazysoci.al/post/34099222

#technology


ryannathans@aussie.zone on 16 Sep 08:02 next collapse

Privacy preserving? More like avoiding lawsuits due to copyrighted information

7eter@feddit.org on 16 Sep 08:37 collapse

This! Plus opening up the possibility for Google to use private user data with even less concern. So not a privacy win at all.

tal@olio.cafe on 16 Sep 08:04 next collapse

LLMs have non-deterministic outputs, meaning you can't exactly predict what they'll say.

I mean...they can have non-deterministic outputs. There's no requirement for that to be the case.

It might be desirable in some situations; randomness can be a tactic to help provide variety in a conversation. But it might be very undesirable in others: no matter how many times I ask "What is 1+1?", I usually want the same answer.
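To make that concrete, here is a rough sketch (Python with NumPy, made-up logits over a three-token vocabulary, not any real model's decoding API): greedy decoding always returns the same token for the same scores, while temperature sampling draws from a distribution and can vary between runs.

import numpy as np

# Hypothetical next-token scores after the prompt "What is 1+1?"
tokens = ["2", "3", "11"]
logits = np.array([2.0, 1.0, 0.5])

# Greedy decoding: always pick the highest-scoring token -> same answer every time.
greedy = tokens[int(np.argmax(logits))]

# Temperature sampling: draw from the softmax distribution -> answers can differ per run.
def sample(logits, temperature=1.0, rng=np.random.default_rng()):
    p = np.exp(logits / temperature)
    p /= p.sum()
    return tokens[rng.choice(len(tokens), p=p)]

print("greedy:", greedy)                               # "2" on every run
print("sampled:", [sample(logits) for _ in range(5)])  # may vary between runs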

kassiopaea@lemmy.blahaj.zone on 16 Sep 21:42 collapse

In theory, it’s just an algorithm that will always produce the same output given the exact same inputs. In practice it’s nearly impossible to have fully deterministic outputs because of the limited precision and repeatability we get with floating point numbers on GPUs.
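A tiny illustration of why (plain Python, illustrative values only): floating-point addition isn't associative, so the order in which partial sums are combined, which parallel GPU reductions generally don't fix, can change the result.

a, b, c = 1e16, -1e16, 1.0
print((a + b) + c)   # 1.0
print(a + (b + c))   # 0.0 -- same numbers, different grouping, different answer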

AmanitaCaesarea@slrpnk.net on 16 Sep 09:13 next collapse

Google and privacy in the same sentence… Lol

HappyFrog@lemmy.blahaj.zone on 16 Sep 19:54 next collapse

What do people use a 1B model for?

Fyrnyx@kbin.melroy.org on 16 Sep 23:31 collapse

Google and privacy can't exist in the same sentence.