Open source AI models favor men for hiring, study finds (www.theregister.com)
from sabreW4K3@lazysoci.al to technology@beehaw.org on 02 May 15:35
https://lazysoci.al/post/25674453

cross-posted from: lazysoci.al/post/25674400

#technology

threaded - newest

30p87@feddit.org on 02 May 15:45 next collapse

Thing echoing the internet’s average opinion echoes the internet’s average opinion, completely obsolete study finds

MagicShel@lemmy.zip on 02 May 15:51 next collapse

I think researchers are trying to make AI models more aware, but they are trained on a whole lot of human history, and that is going to be predominantly told from white male perspectives. Which means AI is going to act like that.

Women and people of color, you should probably treat AI like it’s that white guy who means well and thinks he’s woke but lacks the self-awareness to see he is 100% part of the problem. (I say this as a white guy who is 100% part of the problem, just hopefully with more self-awareness.)

nesc@lemmy.cafe on 02 May 16:10 next collapse

Everyone should treat ‘ai’ like a program that it is. Your guilt compex is irrelevant here.

MagicShel@lemmy.zip on 02 May 16:21 next collapse

Has nothing to do with guilt-complex. Why would I feel guilty for being privileged? I feel fortunate, and obliged to remain aware of that.

Treating AI like a “program,” however, is a pretty useless lead in to what you really posted to say.

nesc@lemmy.cafe on 02 May 16:26 collapse

Right, only you can dictate how people should treat chat bots, I will siphon your knowledge into my brain.

GammaGames@beehaw.org on 02 May 16:47 next collapse

The program is statistically an average white guy that knows about a lot of things but doesn’t understand any of it soooooo I’m not even sure what point you thought you had

nesc@lemmy.cafe on 02 May 17:37 next collapse

Chat bot will impersonate whoever you’ll tell them to impersonate (as stated in the article), my point is pretty simple, people don’t need a guide when using a chat bot that tells them how they should treat and interact with it.

I get it, that was just perfunctory self depreciation with intended audience being other first worlders.

SaltSong@startrek.website on 02 May 17:43 next collapse

people don’t need a guide when using a chat bot that tells them how they should treat and interact with it.

Then why are people always surprised to find out that chat bots will make shit up to answer their questions?

People absolutely need a guide for using a chat bot, because people are idiots.

chicken@lemmy.dbzer0.com on 02 May 22:54 collapse

Not even just because people are idiots, but also because a LLM is going to have quirks you will need to work around or exploit to get the best results out of it. Like how it’s better to edit your question to clarify a misunderstanding and regenerate the response than it is to respond again with the correction, because there is more of a risk it gets stuck on its mistake that way. Or how it can be useful in some situations to (if the interface allows this) manually edit part of the LLM output to be more in line with what you want it to be saying before generating the rest.

moomoomoo309@programming.dev on 02 May 17:50 collapse

Sure, who will it impersonate if you don’t? That’s where the bias comes in.

And yes, they do need a guide, because the way chatbots behave is not intuitive or clear, there’s lots of weird emergent behavior in them even experts don’t fully understand (see OpenAI’s 4o sycophancy articles today). Chatbots’ behavior looks obvious, and in many cases it is…until it isn’t. There’s lots of edge cases.

nesc@lemmy.cafe on 02 May 18:02 collapse

They will impersonate ‘helpful assistant made by companyname (following hundreds of lines of invisible rules and what to say and when)’. Experts that don’t have an incentive to understand and at least partially in the cult who would have guessed!

moomoomoo309@programming.dev on 02 May 18:53 collapse

And you think there’s not bias in those rules that’s notable, and that the edge cases I mentioned won’t be an issue, or what?

You seem to have sidestepped what I’ve said to rant about how OpenAI sucks when that was just meant to be an example of how even those best informed about AI in the world right now don’t really understand it.

nesc@lemmy.cafe on 02 May 21:19 collapse

That’s not ‘bias’, that’s intended behaviour, iirc meta published some research on it. Returning to my initial point, viewing chat bots as ‘white male who lacks self-awareness’ is dumb as fuck.

As for not understanding, they are paid to not understand.

Powderhorn@beehaw.org on 02 May 18:53 collapse

I resent your impugnment of copyeditors.

GammaGames@beehaw.org on 03 May 05:30 collapse

🤣

valkyrieangela@lemmy.blahaj.zone on 02 May 17:12 collapse

If you feel guilty about this, you may be part of the problem

Kichae@lemmy.ca on 02 May 19:22 collapse

There is no reason to even suggest that AI ‘means well’. It doesn’t mean anything, let alone well.

MagicShel@lemmy.zip on 02 May 19:26 collapse

Of course. It’s an analogy. It is like someone who means well. It generates text from the default perspective, which is white guy with a bunch of effort to make it more diverse with a similar end result. The responses might sound woke but take a closer look and you’ll find the underlying bias.

magnetosphere@fedia.io on 02 May 18:07 next collapse

So disheartening. There was never a good reason for this to be an issue in the first place.

match@pawb.social on 02 May 19:17 next collapse

ai automates existing biases 🌈

randomname@scribe.disroot.org on 04 May 18:01 collapse

“These biases stem from entrenched gender patterns in the training data as well as from an agreeableness bias induced during the reinforcement learning from human feedback stage.”

No surprise.