MelodiousFunk@slrpnk.net
on 23 Apr 2024 14:45
nextcollapse
He’s got to get them from somewhere. They certainly aren’t coming from his little piggy brain.
nightwatch_admin@feddit.nl
on 23 Apr 2024 14:55
collapse
Please don’t insult the pigs, they’re smart and sensitive creatures
Hubi@lemmy.world
on 23 Apr 2024 15:08
nextcollapse
Reddit is past the point of no return. He might as well speed it up a little.
paraphrand@lemmy.world
on 23 Apr 2024 18:13
collapse
Like a built in brand dashboard where brands can monitor keywords for their brand and their competitors? And then deploy their sanctioned set of accounts to reply and make strategic product recommendations?
Sounds like something that must already exist. But it would have been killed or hampered by API changes… so now Spez has a chance to bring it in-house.
They will just call it brand image management. And claim that there are so many negative users online that this is the only way to fight misinformation about their brand.
That would be an unmarked ad. I don’t think that’s legal in many places
FinishingDutch@lemmy.world
on 23 Apr 2024 20:24
collapse
Probably.
So, we complain to a regulatory body, they investigate, they tell a company to do better or, waaaay down the road, attempt to levy a fine. Which most companies happily pay, since the profits from the shady business practices tend to far outweigh the fines.
Legal or illegal really only means something when dealing with an actual person. Can’t put a corporation in jail, sadly.
Th4tGuyII@kbin.social
on 23 Apr 2024 14:44
nextcollapse
It's gross, but also inevitable. If there's an untapped niche to make money from, somebody's going to try it -- plus if they want to waste their money on generating accounts only to have them be banned, then so be it.
Makes me kinda thankful that this community is smaller and less likely to be targeted by this sort of crap.
SlopppyEngineer@lemmy.world
on 23 Apr 2024 17:44
nextcollapse
grrgyle@slrpnk.net
on 23 Apr 2024 22:12
nextcollapse
What’s funny is I think it would be profitable for maybe, like, a year, before everyone starts doing it and then even normal people stop trusting reddit comments.
It’s like pissing in a pool to sell people soap. What’s the plan once people stop using the pool?
Croquette@sh.itjust.works
on 24 Apr 2024 02:06
collapse
Buy a new pool and piss in it again to sell new soaps.
By the time that the cow is bled dry, someone is stuck holding the bag while some people made out like bandits.
That is the stock market for you. Create no value, just wealth transfer.
In this case it’s creating a kind of anti-value - harm, I guess.
Also I bow to your superior and brazen use of mixed metaphors. You got double what I did. “Bleeding” a cow dry? It adds impact over the usual “milking” even!
Croquette@sh.itjust.works
on 24 Apr 2024 12:08
collapse
Milking assumes that you don’t kill the cow, which isn’t the case here.
Some people are specialized at being hired at startups to prop up the startup to be sold and make a quick buck.
Then they move on to the next startup; wash, rinse and repeat. It says a lot about the state of innovation.
Even_Adder@lemmy.dbzer0.com
on 23 Apr 2024 16:16
nextcollapse
Group chats.
SlopppyEngineer@lemmy.world
on 23 Apr 2024 17:54
nextcollapse
Peer-to-peer systems? Systems where you have to physically be at a location to get data, maybe, so cyber-cafe-like things. Or back to the old system: go to the regular bars, repair cafés or hobby places.
paraphrand@lemmy.world
on 23 Apr 2024 18:21
collapse
Synchronous spaces.
Social VR does not have a lot of the ills of social media. You only have to deal with people much like you would IRL.
ininewcrow@lemmy.ca
on 23 Apr 2024 14:50
nextcollapse
Doesn’t mean that the fediverse is immune.
News stories and narratives are still fought over by actors on all sides and sometimes by entities that might be bots. And there are a lot of auto-generating content bots that post stuff or repost old content from other sites like Reddit.
agressivelyPassive@feddit.de
on 23 Apr 2024 15:14
collapse
Especially since being immune to censorship is kind of the point of the fediverse.
If you’re even a tiny bit smart about it, you can start hundreds of sock puppet instances and flood other instances with bullshit.
Gullible@sh.itjust.works
on 23 Apr 2024 15:58
nextcollapse
I try to avoid talking about how indefensibly terrible Lemmy’s anti-spam and anti-brigading measures are for fear of someone doing something with the information. I imagine the only thing keeping subtle disinfo and spam from completely overtaking Lemmy is how small its reach would be. Doing the same thing to Reddit is a hundred times more effective, and systemically accepted. Reddit’s admins like engagement.
homesweethomeMrL@lemmy.world
on 23 Apr 2024 16:09
nextcollapse
Put in those tickets. It’s a community effort y’know.
MysticKetchup@lemmy.world
on 23 Apr 2024 16:13
nextcollapse
I feel the same about a lot of Fediverse apps right now. They’re kinda just coasting on the fact that they’re not big enough for most spammers to care about. But they need to put in solid defenses and moderation tools before that happens
nickwitha_k@lemmy.sdf.org
on 23 Apr 2024 16:23
collapse
Another reason to block federation with Threads.
paraphrand@lemmy.world
on 23 Apr 2024 18:19
nextcollapse
Meta has the most resources to combat spam and abuse.
nickwitha_k@lemmy.sdf.org
on 23 Apr 2024 22:36
collapse
And the least demonstrated desire to do so.
roguetrick@lemmy.world
on 23 Apr 2024 18:19
collapse
Meta will likely actually moderate against spambots because they want you to fucking pay them for that service. The problem is, they aren’t too interested in moderating hate speech.
nickwitha_k@lemmy.sdf.org
on 23 Apr 2024 22:38
collapse
So, you’re suggesting that it is better that they are profiting from helping state actors and hate groups?
Edit: No, they are not suggesting that. I misunderstood their meaning.
roguetrick@lemmy.world
on 23 Apr 2024 22:47
collapse
I don’t think I made a value statement whatsoever. I think calling it a problem and hate speech would’ve been enough of a clue as to how I felt about it, however.
It’s actually why I support most instances defederating from them
nickwitha_k@lemmy.sdf.org
on 23 Apr 2024 23:03
collapse
Ah. I clearly misunderstood your meaning. Sorry about that.
It’s an arms race and Lemmy is only a small player right now so no one really pays attention to our little corner. But as soon as we get past a certain threshold, we’ll be dealing with the same problems as well.
old_machine_breaking_apart@lemmy.dbzer0.com
on 23 Apr 2024 18:41
collapse
Can’t some instances make some sort of agreement and have a whitelist of instances not to block? People would need to register to add their instances to the list, some common measures would be applied to restrict someone from registering several instances at once, and people who misuse the system would be banned.
That wouldn’t solve the problem, but perhaps would make things more manageable.
agressivelyPassive@feddit.de
on 23 Apr 2024 18:51
collapse
You can’t block people. How would you know who registered the domain?
What you’re proposing is pretty similar to the current state of email. It’s almost impossible to set up your own small mail server and have it communicate with the “mailiverse”, since everyone will just assume you’re spam. And that led to a situation where 99% of people are with one of the huge mail providers.
old_machine_breaking_apart@lemmy.dbzer0.com
on 23 Apr 2024 18:55
collapse
you’re right, the matter is more complicated than I thought…
agressivelyPassive@feddit.de
on 23 Apr 2024 19:12
collapse
It’s extremely complicated and I don’t really see a solution.
You’d need gigantic resources and trust in those resources to vet accounts, comments, instances. Or very in depth verification processes, which in turn would limit privacy.
What I actually found interesting was Bluesky’s invite system. Each user got a limited number of invite links, and if a certain number of your invitees were banned, you’d be banned/flagged too. That creates a web of trust, but of course also makes anonymous accounts impossible.
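The invite-tree idea described above can be sketched roughly like this. This is a hypothetical illustration, not Bluesky’s actual implementation; the class, method names, and thresholds are all invented:

```python
# Hypothetical sketch of an invite-tree trust system: every user is
# invited by an existing user, and an inviter gets flagged once too
# large a fraction of their invitees has been banned.

class InviteTree:
    def __init__(self, ban_ratio=0.5, min_invites=3):
        self.inviter = {}           # user -> who invited them
        self.invitees = {}          # user -> list of users they invited
        self.banned = set()
        self.ban_ratio = ban_ratio  # banned fraction that triggers a flag
        self.min_invites = min_invites  # don't flag on tiny samples

    def invite(self, inviter, new_user):
        self.inviter[new_user] = inviter
        self.invitees.setdefault(inviter, []).append(new_user)

    def ban(self, user):
        self.banned.add(user)

    def is_flagged(self, user):
        kids = self.invitees.get(user, [])
        if len(kids) < self.min_invites:
            return False
        banned = sum(1 for k in kids if k in self.banned)
        return banned / len(kids) >= self.ban_ratio
```

The design trade-off is exactly the one noted above: accountability propagates up the invite chain, but the chain itself ties every account to a real inviter, so anonymity is gone.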
dumples@kbin.social
on 23 Apr 2024 15:00
nextcollapse
The only reason reddit was valuable was because it was from real people who weren't paid off. Well that's ruined now.
eronth@lemmy.world
on 23 Apr 2024 16:13
nextcollapse
Yeah, I’ve noticed that a bit lately anyways. Maybe I’m looking up stuff that has less of a community on Reddit, and thus has less discussion, but I have absolutely noticed some comments have a single product name-drop with little clarity for why they liked the product. It starts to feel like they’re just ads (generated or otherwise) meant to trick you into thinking Reddit users are liking the product.
AI is just going to make it worse, and cause Reddit to not be a good go-to for actual reviews and discussion of pros/cons.
dumples@kbin.social
on 23 Apr 2024 16:38
nextcollapse
Exactly. Usually there's a conversation or a quick consensus on one or two things. But I've been seeing lots of single answers or just ads
Jordan117@lemmy.world
on 23 Apr 2024 17:00
nextcollapse
There’s an excellent chance that even some of the “authentic” discussions you see are word-for-word reposts of old posts and comments, created by bots to build up karma in order to be sold to spammers and influence peddlers down the line.
paraphrand@lemmy.world
on 23 Apr 2024 18:16
collapse
The first obvious wave of this stuff, to me, was the video conversion ripoff software and similar. They had people looking around for questions their software was possibly a solution for. Sometimes they would act like users, other times it was more neutral info, but still clear it was self promotion because of what was recommended.
I wanted to figure out what game hosting sites were good and Google pointed me to reddit…every thread was full of boilerplate ads for different sites. The comments were the most obvious, marketing-approved sentences I’ve ever seen
Everything I can find online seems to be advertisements or paid reviews (Also advertisements) when looking for anything anymore. Businesses are terrified of an open honest conversation about what is good and what is not
glimse@lemmy.world
on 23 Apr 2024 17:34
nextcollapse
If you’re terrified of honest conversations, your product is probably shit.
Marques Brownlee had a video recently about the question “do bad reviews kill products?” that highlights the issue well
Spend $Billions shoving advertising down everyone’s throats? Absolutely!
Just make a good product and provide good customer support? It will never work!
Nikelui@piefed.social
on 24 Apr 2024 06:25
collapse
Option 1 is easy and any idiot can throw money at it to solve the problem. Option 2 requires talented people and real effort.
Drinvictus@discuss.tchncs.de
on 23 Apr 2024 15:10
nextcollapse
If only people moved to an open and federated platform. I mean I don’t have to say that I hate reddit since I’m here but still whenever I Google a problem reddit answers are one of the most useful places. Especially about something local.
circuscritic@lemmy.ca
on 23 Apr 2024 16:00
collapse
This isn’t a problem that can be solved with a technical solution that isn’t itself extremely dystopian in nature.
This is a problem that requires legislation and criminal liability, or genuine punitive civil liability that pierces the corporate legal shields.
Don’t hold your breath for a serious solution to present itself.
paraphrand@lemmy.world
on 23 Apr 2024 18:28
collapse
Do you think legislation and laws would be reasonable for trolls who ban evade and disrupt and destroy synchronous online social spaces too?
The same issue happens there. Zero repercussions, ban evasion is almost always possible, and the only fool proof solutions seem to quickly turn dystopian too.
Ban evasion and cheating are becoming a bigger and bigger issue in online games/social spaces. And all the nerds will agree it’s impossible to fix. And many feel it’s just normal culture. But it’s not sustainable, and with AI and an ever escalating cat and mouse game, it’s going to continue to get worse.
Can anyone suggest a solution that is on the horizon?
circuscritic@lemmy.ca
on 23 Apr 2024 18:44
collapse
No, I’m a free speech absolutist when it comes to private citizens. Be they communists, Nazis, Democrats, trolls, assholes or furries, the government should have no role in regulating their speech outside of reasonable exceptions i.e. yelling fire in a crowded theater, threats of physical violence, etc.
My moral conviction on relative free speech absolutism ends at the articles of incorporation, or other nakedly profit driven speech e.g. market manipulation.
So if the trolls and ban evaders are acting on behalf of a company, or for profit driven interests, their speech should be regulated. If they’re just assholes or trolls, that’s a problem for the website and mod teams.
paraphrand@lemmy.world
on 23 Apr 2024 18:59
collapse
Thanks for replying. As far as speech goes, I agree with you. And I agree that moderation, using tools like blocking or muting and social mores should take care of things.
Setting speech aside. What about people who hack the spaces, break things, blast ear piercing sounds, crash other users or otherwise do not use symbols or speech to destroy or harm a synchronous social space? And I am assuming these are mostly always individuals. Not some corporate scheme.
sirspate@lemmy.ca
on 23 Apr 2024 15:12
nextcollapse
If the rumor is true that a reddit/google training deal is what led to reddit getting boosted in search results, this would be a direct result of reddit’s own actions.
homesweethomeMrL@lemmy.world
on 23 Apr 2024 16:10
nextcollapse
I appreciate the mostly benign neglect we had for awhile. Now that they’re paying attention it’s just all bad. Or would be, if I was there. HA.
heavy@sh.itjust.works
on 23 Apr 2024 16:31
nextcollapse
Wow this is gross. I’m gonna wash it down with some MOUNTAIN DEW ™
Boomkop3@reddthat.com
on 23 Apr 2024 16:45
nextcollapse
Well, that was the last bit of usefulness I used to get out of google. I’ve been on yahoo for a while now
n3m37h@sh.itjust.works
on 23 Apr 2024 17:57
nextcollapse
Yahoo is still alive?
Boomkop3@reddthat.com
on 23 Apr 2024 21:52
collapse
Yep, it’s sort of what google used to be. It took me a bit of setup tho. They really like to default to showing you a ton of news and crap. But after turning that all off I’m left with a super clean ui and useful search results
Boomkop3@reddthat.com
on 24 Apr 2024 06:10
collapse
Absolutely, I am definitely not human
owatnext@lemmy.world
on 23 Apr 2024 16:50
nextcollapse
I was about ready to downvote out of pure annoyance lol.
PrincessLeiasCat@sh.itjust.works
on 23 Apr 2024 17:03
nextcollapse
The creator of the company, Alexander Belogubov, has also posted screenshots of other bot-controlled accounts responding all over Reddit. Belogubov has another startup called “Stealth Marketing” that also seeks to manipulate the platform by promising to “turn Reddit into a steady stream of customers for your startup.” Belogubov did not respond to requests for comment.
What an absolute piece of shit. Just a general trash person to even think of this concept.
andrew_bidlaw@sh.itjust.works
on 23 Apr 2024 19:04
collapse
His surname translates from Russian as ‘white lips’. No wonder he is a ghoul.
TropicalDingdong@lemmy.world
on 23 Apr 2024 17:17
nextcollapse
Yeah this isn’t new.
Ever wonder why you are such a fan of shitty played out franchises?
ChaoticEntropy@feddit.uk
on 23 Apr 2024 17:27
nextcollapse
Well that’s certainly one way for your brand to lose a lot of respect once it becomes apparent. Much like when I want to lose respect for myself, I use Chum brand dog food. Chum, it’s still food, alright?
Milk_Sheikh@lemm.ee
on 23 Apr 2024 17:30
nextcollapse
I still haven’t seen a use of AI that doesn’t serve state or corporate interests first, before the general public. AI medical diagnostics comes the closest, but that’s being leveraged to justify further staffing reductions, not an additional check.
The AI-captcha wars are on, and no matter who wins we lose.
FaceDeer@fedia.io
on 23 Apr 2024 18:40
nextcollapse
TimeSquirrel@kbin.social
on 23 Apr 2024 19:27
collapse
AI is helping me learn and program C++. It's built into my IDE. Much more efficient than searching stackoverflow. Whenever it comes up with something I've never seen before, I learn what that thing does and mentally store it away for future use. As time goes on, I'm relying on it less and less. But right now it's amazing. It's like having a tutor right there with you who you can ask questions anytime, 24/7.
I hope a point comes where my kid can just talk to a computer, tell it the specifics of the program he wants to create, and have the computer just program the entire thing. That's the future we are headed towards. Ordinary folks being able to create software.
I’ll agree there’s huge potential for ‘assistant’ roles (exactly like you’re using) to give a concise summary for quick understanding. But LLMs aren’t knowledgeable like an accredited professor or tutor is, understanding the context and nuance of the topic. LLMs are very good at scraping together data and presenting the shallowest of information, but their limits get exposed quickly when you try to go into a topic.
For instance, I was working on a project that required very long term storage (10+ years) with intermittent exposure to open air, and was concerned about oxidation and rust. ChatGPT was very adamant that desiccant alone was sufficient (wrong) and that VCI packs would last (also wrong). It did a great job of repackaging corporate ad copy and industrial white papers written by humans, but not of providing an objective answer to a semi-complex question.
TimeSquirrel@kbin.social
on 23 Apr 2024 21:04
collapse
I guess it's not great for things requiring domain knowledge. Programming seems to be easy for it, as programs are very structured, predictable, and logical. That's where its pattern-matching-and-prediction abilities shine.
ILikeBoobies@lemmy.ca
on 23 Apr 2024 17:31
nextcollapse
This market is expected to replace the same market that just used bots to achieve the same thing.
I don’t understand how Lemmy/Mastodon will handle similar problems. Spammers crafting fake accounts to give AI generated comments for promotions
FeelThePower@lemmy.dbzer0.com
on 23 Apr 2024 18:03
nextcollapse
The only thing we reasonably have is security through obscurity. We are something bigger than a forum but smaller than Reddit, in terms of active user size. If such a thing were to happen here, mods could handle it more easily probably (like when we had the spammer of the Japanese text back then), but if it were to happen on a larger scale than what we have it would be harder to deal with.
roguetrick@lemmy.world
on 23 Apr 2024 18:16
nextcollapse
Mostly it seems to be handled here with that URL blacklist automod.
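A URL-blacklist automod of the kind mentioned here can be as simple as a regex pass over new posts. This is a generic sketch, not Lemmy’s actual automod; the blocked domains are placeholders:

```python
import re

# Minimal sketch of a URL-blacklist automod: extract the domain of
# every link in a post and remove the post if any domain is on the
# blacklist. The domains below are made-up examples.
BLACKLIST = {"spam-casino.example", "replyguy.example"}

URL_RE = re.compile(r"https?://([^/\s]+)")

def should_remove(post_text: str) -> bool:
    for domain in URL_RE.findall(post_text):
        domain = domain.lower().removeprefix("www.")
        if domain in BLACKLIST:
            return True
    return False
```

A domain blacklist only catches known offenders, which is why it works against repetitive spam waves but not against the subtler shilling discussed further down the thread.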
linearchaos@lemmy.world
on 23 Apr 2024 18:34
nextcollapse
I think the real danger here is subtlety. What happens when somebody asks for recommendations on a printer, or complains about their printer being bad, and all of a sudden some long-established account recommends a product they’ve been happy with for years. And it turns out it’s just an AI bot shilling for Brother.
deweydecibel@lemmy.world
on 23 Apr 2024 22:25
collapse
For one, well established brands have less incentives to engage in this.
Second, in this example, the account in question being a “long established user” would seem to indicate you think these spam companies are going to be playing a long game. They won’t. That’s too much effort and too expensive. They will do all of this on the cheap, and it will be very obvious.
This is not some sophisticated infiltration operation with cutting edge AI. This is just auto generated spam in a new upgraded form. We will learn to catch it, like we’ve learned to catch it before.
linearchaos@lemmy.world
on 24 Apr 2024 06:25
collapse
I mean, it doesn’t have to be expensive. And it also doesn’t have to be particularly cutting edge. Start throwing some credits into an LLM API, have it randomly read and help people out in different groups. Once it reaches some amount of reputation, have it quietly shill for them. Pull out posts that contain keywords. Have the AI consume the posts and figure out if they’re about what they sound like they’re about. Have it subtly do product placement. None of this is particularly difficult or groundbreaking. But it could help shape our buying habits.
old_machine_breaking_apart@lemmy.dbzer0.com
on 23 Apr 2024 18:34
nextcollapse
There’s one advantage on the fediverse. We don’t have the corporations like reddit manipulating our feeds, censoring what they dislike, and promoting shit. This alone makes using the fediverse worth for me.
When it comes to problems involving the users themselves, things aren’t that different, and we don’t have much to do.
MinFapper@lemmy.world
on 23 Apr 2024 18:41
nextcollapse
We don’t have corporations manipulating our feeds
yet. Once we have enough users that it’s worth their effort to target, the bullshit will absolutely come.
old_machine_breaking_apart@lemmy.dbzer0.com
on 23 Apr 2024 18:50
nextcollapse
they can perhaps create instances, pay malicious users, try some embrace, extend, extinguish approach or something, but they can’t manipulate the code running on the instances we use, so they can’t have direct power over it. Or am I missing something? I’m new to the fediverse.
BarbecueCowboy@kbin.social
on 23 Apr 2024 20:22
collapse
There's very little to prevent them just pretending to be average users and very little preventing someone from just signing up a bunch of separate accounts to a bunch of separate instances.
No great automated way to tell whether someone is here legitimately.
bitfucker@programming.dev
on 23 Apr 2024 22:16
collapse
Yeah, and that is true for a lot of services. A Sybil attack is indeed quite hard to prevent, since malicious users can blend in with legitimate ones.
bitfucker@programming.dev
on 23 Apr 2024 20:10
collapse
Federation means if you are federated then sure you get some BS. Otherwise, business as usual. Now, making sure there is no paid user or corporate bot is another matter entirely since it relies on instance moderators.
deweydecibel@lemmy.world
on 23 Apr 2024 22:16
collapse
We don’t have the corporations like reddit manipulating our feeds, censoring what they dislike, and promoting shit.
Corporations aren’t the only ones with incentives to do that. Reddit was very hands off for a good long while, but don’t expect that same neutral mentality from fediverse admins.
BarbecueCowboy@kbin.social
on 23 Apr 2024 20:19
collapse
mods could handle it more easily probably
I kind of feel like the opposite, for a lot of instances, 'mods' are just a few guys who check in sporadically whereas larger companies can mobilize full teams in times of crisis, it might take them a bit of time to spin things up, but there are existing processes to handle it.
I think spam might be what kills this.
FeelThePower@lemmy.dbzer0.com
on 23 Apr 2024 20:35
nextcollapse
Hmm, good point.
deweydecibel@lemmy.world
on 23 Apr 2024 22:31
collapse
If a community is so small that the mod team can be so inactive, there’s no incentive for the company to put any effort into spamming it like you’re suggesting.
And if they do end up getting a shit ton of spam in there, and it sits around for a bit until a moderator checks in, so what? They’ll just clean it up and keep going.
I’m not sure why people are so worried about this. It’s been possible for bad actors to overrun small communities with automated junk for a very long time, across many different platforms, some that predate Reddit. It just gets cleaned up and things keep going.
It’s not like if they get some AI produced garbage into your community, it infects it like a virus that cannot be expelled.
deweydecibel@lemmy.world
on 23 Apr 2024 22:07
collapse
The same way it’s handled on Reddit: moderators.
Some will get through and sit for a few days but eventually the account will make itself obvious and get removed.
It’s not exactly difficult to spot these things. If an account is spending the majority of its existence on a social media site talking about products, even if they add some AI generated bullshit here and there to make it seem like it’s a regular person, it’s still pretty obvious.
If the account seems to show up pretty regularly in threads to suggest the same things, there’s an indicator right there.
Hell, you can effectively bait them by making a post asking for suggestions on things.
They also just tend to have pretty predictable styles of speak, and never fail to post the URL with their suggestion.
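The signals listed above (most of an account’s posts talk about products, the same URL keeps reappearing) could be folded into a crude moderator-side score. This is an illustrative sketch; the weights and thresholds are invented, not taken from any real moderation tool:

```python
import re
from collections import Counter

URL_RE = re.compile(r"https?://\S+")

def shill_score(comments: list[str]) -> float:
    """Crude heuristic score in [0, 1]; higher means more shill-like.
    Combines the fraction of comments that contain a URL with how
    often the single most-repeated URL is pushed. The 0.6/0.4 weights
    are arbitrary, chosen only for illustration."""
    if not comments:
        return 0.0
    with_url = sum(1 for c in comments if URL_RE.search(c))
    url_fraction = with_url / len(comments)
    urls = Counter(u for c in comments for u in URL_RE.findall(c))
    repetition = urls.most_common(1)[0][1] / len(comments) if urls else 0.0
    return min(1.0, 0.6 * url_fraction + 0.4 * repetition)
```

A score like this would only rank accounts for human review; as the comment says, the final tell is usually a person reading the history and noticing the pattern.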
n3m37h@sh.itjust.works
on 23 Apr 2024 17:56
nextcollapse
That’s like adding caustic soda to bleach. Just made the poison stronger
Aggravationstation@feddit.uk
on 23 Apr 2024 19:06
nextcollapse
Haaaaaaaaaaaaaaa!
Enjoy your open, impartial platform Reditards.
laverabe@lemmy.world
on 23 Apr 2024 19:08
nextcollapse
I just consider any comment after Jun 2023 to be compromised. Anyone who stayed after that date either doesn’t have a clue, or is sponsored content.
KillingTimeItself@lemmy.dbzer0.com
on 23 Apr 2024 19:09
nextcollapse
“i remember when reply guy was a term used for someone notorious for replying to things in a specific manner”
“take your meds grandpa, it’s getting late”
vegaquake@lemmy.world
on 23 Apr 2024 19:13
nextcollapse
yeah, the internet is doomed to be unusable if AI just keeps getting more insidious like this
yet more companies tie themselves to online platforms, websites, and other models of operation that depend on being always connected.
maybe the world needs a reboot, just get rid of it all and start from scratch
BarbecueCowboy@kbin.social
on 23 Apr 2024 20:14
nextcollapse
I do kind of feel like this part of the experiment might just be coming to a close.
There's no "if AI just keeps getting more insidious", the barrier for entry is too small. AI is going to keep doing the things it's already doing, just more efficiently, and it doesn't matter that much how we feel about whether those things are good or bad. I feel like the things it is starting to ruin are probably just going to be ruined.
UnderpantsWeevil@lemmy.world
on 23 Apr 2024 20:27
collapse
maybe the world needs a reboot, just get rid of it all and start from scratch
That would destroy all the old good vintage stuff and leave us with machines that immediately fill the vacant space with pure trash.
vegaquake@lemmy.world
on 24 Apr 2024 01:25
collapse
rapture but with technology would be pretty funny
save the good old stuff and burn the rest
nytrixus@lemmy.world
on 23 Apr 2024 19:30
nextcollapse
Correction - AI is poisoning everything when it is not regulated and moderated.
Reddit has been poisoning itself for a while, what’s the difference? Just AI borrowing from the shithead behavior?
Xephonian@retrolemmy.com
on 23 Apr 2024 20:16
collapse
Lol, you think regulation and moderation aren’t poison themselves.
UnderpantsWeevil@lemmy.world
on 23 Apr 2024 20:32
nextcollapse
The regulations we implement are written by the Sam Bankman-Frieds and Elon Musks who can capture the regulatory agencies. The moderation is itself increasingly automated, for the purpose of inflating the perceived quality and quantity of interactions on the website.
Get back to a low-population IRC or Discord server, a small social media channel, or a… idfk… Lemmy instance? Suddenly regulation and moderation by, of, and for the user base starts looking much nicer.
Lol, you think allowing people and businesses to do whatever the fuck they want is a good thing.
tearsintherain@leminal.space
on 23 Apr 2024 19:31
nextcollapse
So the human shills that already destroyed good faith in forums and online communities over time are now being fully outsourced to AI. Amazon itself a prime source of enshittification. From fake reviews to everyone with a webpage having affiliate links trying to sell you some shit or other. Including news outlets. Turned everyone into a salesperson.
UnderpantsWeevil@lemmy.world
on 23 Apr 2024 20:25
collapse
“I only do this because I have no other options,” he says. “Other people who go slower just end up getting fired.” I let Christian leave, and hail some more drivers. They all confirm that this is, largely speaking, how their life looks. I hear about how female drivers often develop urinary tract infections from holding it in for too long. Then a dispatch manager I bump into by chance confirms that the “disgusting” bottles of urine outside of fulfilment centres are from Amazon drivers. “We have a point system where, if you pee in a bottle and leave it in the car, you get a point for that,” they tell me. I ask: How many bottles until they’re in trouble? “Ten bottles.”
ColeSloth@discuss.tchncs.de
on 23 Apr 2024 20:41
nextcollapse
I called this shit out like a year ago. It’s the end of any viable online search having much truth to it. All we’ll have left to trust is YouTube videos from Project Farm.
It kinda seems like the end of the Google era. What will we search Google for when the results are all crap? This is the death gasps of the internet I/we grew up with.
Wiz@midwest.social
on 23 Apr 2024 22:25
nextcollapse
Maybe web rings of the 90s were not such a bad idea! Let’s bring 'em back!
blusterydayve26@midwest.social
on 23 Apr 2024 22:38
nextcollapse
Gemini webrings are the future?
Croquette@sh.itjust.works
on 23 Apr 2024 22:44
collapse
They would poison that shit as well unfortunately. The concept is great though.
rottingleaf@lemmy.zip
on 25 Apr 2024 12:40
collapse
Eh, how’d you do that?
Croquette@sh.itjust.works
on 25 Apr 2024 22:57
collapse
Do what? Webrings?
rottingleaf@lemmy.zip
on 26 Apr 2024 05:30
collapse
How do you poison them.
Croquette@sh.itjust.works
on 26 Apr 2024 11:28
collapse
Create sites that look like legit websites, then slowly ramp-up the bullshit. Same tactic as always.
rottingleaf@lemmy.zip
on 26 Apr 2024 12:48
collapse
So? Every part of a web ring is a site the webmaster of which can remove that banner at any moment.
Hugh_Jeggs@lemm.ee
on 24 Apr 2024 05:47
nextcollapse
Remember when you could type a vague plot of a film you’d heard about into Google and it’d be the first result?
Nah doesn’t work anymore
Saw a trailer for a french film so I searched “french film 2024 boys live in woods seven years”
Google - 2024 BEST FRENCH FILMS/TOP TEN FRENCH FILMS YOU MUST SEE THIS YEAR/ALL TIME BEST FRENCH MOVIES
Absolute fucking gash
I’ve not been too impressed with Kagi search, but at least the top result there was “Frères 2024”
Remember when you could type a vague plot of a film you’d heard about into Google and it’d be the first result?
I honestly don’t remember this at all. I remember priding myself on my “google-fu” and knowing how to phrase a search to get what I, or other people, needed. Which usually required understanding the precise language you would need to use, not something vague. But over the years it’s gotten harder and harder, and now I get frustrated with how hard it has become to find something useful. I’ve had to go back to finding places I trust for information and looking through them.
Although, ironically, I can do what you’re talking about with ai now.
rottingleaf@lemmy.zip
on 25 Apr 2024 12:39
collapse
I’m feeling old and I’m 28.
Because in my early childhood, in 2003-2007, we would resort to search engines only when we couldn’t find something by better (but more manual and social) means.
Because - mwahahaha - most of the results were machine-generated crap.
So I actually feel very uplifted by people promising the Web will get back to normal in this sense.
BurningnnTree@lemmy.one
on 24 Apr 2024 00:35
collapse
I ran into this issue while researching standing desks recently. There are very few places on the internet where you can find verifiably human-written comparisons between standing desk brands. Comments on Reddit all seem to be written by bots or people affiliated with the brands. Luckily I managed to find a YouTube reviewer who did some real comparisons.
KingThrillgore@lemmy.ml
on 23 Apr 2024 20:50
nextcollapse
Generative AI has really become a poison. It’ll be worse once the generative AI is trained on its own output.
blusterydayve26@midwest.social
on 23 Apr 2024 22:33
nextcollapse
You’re two years late.
Maybe not for the reputable ones, that’s 2026, but these shysters have been digging out the bottom of the swimming pool for years.
theconversation.com/researchers-warn-we-could-run…
New models already train on synthetic data. It’s already a solved problem.
blusterydayve26@midwest.social
on 25 Apr 2024 15:58
collapse
Is it really a solution, though, or is it just GIGO?
For example, GPT-4 is about as biased as the medical literature it was trained on, not less biased than its training input, and thereby more inaccurate than humans:
All the latest models are trained on synthetic data generated by GPT-4 - even the newer versions of GPT-4 itself. OpenAI realized it too late and had to edit their license after Claude was launched. Human-generated data could only get us so far; the recent Phi-3 models, which manage to perform very well for their size (3B parameters), can only achieve this feat because of synthetic data generated by AI.
I didn’t read the paper you mentioned, but recent LLMs have progressed a lot, not just on benchmarks but also when evaluated by real humans.
Simon@lemmy.dbzer0.com
on 24 Apr 2024 01:45
collapse
Here’s my prediction. Over the next couple decades the internet is going to be so saturated with fake shit and fake people, it’ll become impossible to use effectively, like cable television. After this happens for a while, someone is going to create a fast private internet, like a whole new protocol, and it’s going to require ID verification (fortunately automated by AI) to use. Your name, age, country, and state are all public to everybody else and embedded into the protocol.
The new ‘humans only’ internet will be the new streaming and eventually it’ll take over the web (until they eventually figure out how to ruin that too). In the meantime, they’ll continue to exploit the infested hellscape internet because everybody’s grandma and grampa are still on it.
treadful@lemmy.zip
on 24 Apr 2024 05:20
nextcollapse
I would rather wade through bots than exist on a fully doxxed Internet.
rottingleaf@lemmy.zip
on 25 Apr 2024 12:33
collapse
Yup. I have my own prediction - that humanity will finally understand the wisdom of the PGP web of trust, and use it for friend-to-friend networks over the Internet. After all, you can exchange public keys by scanning QR codes; it’s very intuitive now.
That would be cool. No bots. Unfortunately, corps, govs and other such mythical demons really want to be able to automate influencing public opinion. So this won’t happen until the potential of the Web for such influence is sucked dry. That is, until nobody in their right mind would use it.
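To make the web-of-trust idea concrete, here’s a toy sketch in plain Python with made-up names - not real PGP, which verifies actual cryptographic signatures rather than a dict - but the trust propagation works roughly like this: keys you’ve signed are trusted, and trust extends a limited number of hops through keys they’ve signed.

```python
# Toy model of a PGP-style web of trust (illustration only, not real PGP).
from collections import deque

def trusted_keys(my_key, signatures, max_hops=2):
    """Return the set of keys reachable from my_key within max_hops,
    where signatures maps a key to the set of keys it has signed."""
    trusted = {my_key}
    frontier = deque([(my_key, 0)])
    while frontier:
        key, hops = frontier.popleft()
        if hops == max_hops:
            continue  # don't extend trust past the radius
        for signed in signatures.get(key, ()):
            if signed not in trusted:
                trusted.add(signed)
                frontier.append((signed, hops + 1))
    return trusted

# Alice signed Bob's key in person (e.g. via QR code); Bob signed Carol's.
web = {"alice": {"bob"}, "bob": {"carol"}, "carol": {"mallory_bot"}}
print(trusted_keys("alice", web, max_hops=2))  # {'alice', 'bob', 'carol'}
```

The hop limit is the important knob: a bot farm can sign each other’s keys all day, but unless someone inside your trust radius vouches for one of them, they stay outside.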
Baylahoo@sh.itjust.works
on 24 Apr 2024 06:35
nextcollapse
That sounds very reasonable as a prediction. I could see it being a pretty interesting Black Mirror episode. I would love it to stay as fiction though.
coarse@startrek.website
on 25 Apr 2024 16:41
collapse
I think we’ll just go back to valuing in-person interactions way more than digital ones.
merthyr1831@lemmy.world
on 23 Apr 2024 21:42
nextcollapse
This shit isn’t new; companies have been exploiting Reddit to push products as if they’re real people for years. The “put reddit after your search to fix it!!!” thing was a massive boon for these shady advertisers, who no doubt benefitted from random people assuming product placements were genuine.
Mastengwe@lemm.ee
on 23 Apr 2024 23:34
nextcollapse
AI Is Poisoning Reddit to Promote Products and Game Google With ‘Parasite SEO’
AI is a tool. It can be used for good and it can be used for poison. Just because you see it being used for poison more often doesn’t mean you should be against AI. Maybe lay the blame on the people using it for poison.
The problem is the magnitude, but yeah, even before 2020 Google was becoming shit and being overrun by shitty blogspam trying to sell you stuff with articles clearly written by machines. The only difference is that it was easier to spot and harder to do. But they did it anyway
rottingleaf@lemmy.zip
on 25 Apr 2024 12:25
nextcollapse
These things became shit around 2009, or immediately after becoming sufficiently popular to push out LiveJournal and other such platforms (the original Web 2.0, or maybe Web 1.9 one should call them).
What does this have to do with search engines? Well, when they existed alongside web directories and other alternative, more social and manual ways of finding information, you’d just switch to those if search engines became too blatant in promotion and in hiding what they didn’t want you to see. You’d be able to compare one to another and notice that Google worked badly in a given case. You wouldn’t be influenced in the end result.
Now that what Google gives you has become the criterion for what you’re supposed to associate with such a request, and same for social media, the outcome was decided.
coarse@startrek.website
on 25 Apr 2024 16:38
collapse
Search: How to do X
“First, what is X”
“Why would you want to do X”
“Finally, here’s how you do X.”
Just gotta repeat X as much as possible under as many different contexts to ensure your results end up at the top.
It’s really disgusting and I’m saddened by how we constantly reward people like this for making the world a worse place.
TheFriar@lemm.ee
on 24 Apr 2024 12:29
nextcollapse
“New poison has been added to arsenic. Should you stop drinking it? Subscribe to find out.”
dynamojoe@lemmy.world
on 24 Apr 2024 15:11
nextcollapse
When googling something, append -site:reddit.com
catch22@programming.dev
on 24 Apr 2024 15:11
nextcollapse
This is a direct consequence of Google targeting Reddit posts in its search results. Hopefully forum groups like Lemmy don’t get buried under a mountain of garbage as well. As long as advertisers are able to destroy public forums and communities with ads, with ad-based revenue sites like Google directing whom to target, we will always be creating something great while constantly trying to keep advertisers from turning it into a pile of crap.
NeptuneOrbit@lemmy.world
on 24 Apr 2024 18:34
collapse
The history of TV, in reverse. And then forward again.
At first, it was an impossibly expensive medium ruled by a cartel of agencies and advertisers. Eventually, HBO comes along and shows you don’t have to just make a bunch of lowest-common-denominator drivel.
Netflix eventually shows that the internet can be a way cheaper model than cable. Finally, money shows up in the streaming model, remaking advertiser friendly cable in the internet age. All in about 2.5 decades.
coarse@startrek.website
on 25 Apr 2024 16:40
collapse
The ebb and flow of consumerism.
I hope one day we have enough data to recognize it and stop it.
Oh, I would have thought Reddit themselves would offer such a service
Or something. It’s all so tiring.
Wayne’s World vibes
Here is an alternative Piped link(s):
Wayne’s World vibes
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
What’s funny is I think it would be profitable for maybe, like, a year, before everyone starts doing it and then even normal people stop trusting reddit comments.
It’s like pissing in a pool to sell people soap. What’s the plan once people stop using the pool?
Buy a new pool and piss in it again to sell new soaps.
By the time that the cow is bled dry, someone is stuck holding the bag while some people made out like bandits.
That is the stock market for you. Create no value, just wealth transfer.
In this case it’s creating a kind of anti-value - harm, I guess.
Also I bow to your superior and brazen use of mixed metaphors. You got double what I did. “Bleeding” a cow dry? It adds impact over the usual “milking” even!
Milking assumes that you don’t kill the cow, which isn’t the case here.
Some people specialize in being hired at startups to prop the startup up so it can be sold, and make a quick buck.
Then they move on to the next startup; wash, rinse, and repeat. It says a lot about the state of innovation.
Innovation’t 😒
Just wait, in a near future there will be floods of bots quelling and stoking tempers to control opinions online, and in the real world.
We already get some of this, but the scale is going to become many times worse.
When the internet is eventually oversaturated with smartbots, where will the humans go?
To a new social media platform where you have to send in a DNA sample to create an account.
That creates a market for morticians and midwives creating preauthenticated accounts to sell to bot farms
( ͡°╭͜ʖ╮͡° )
“The Matrix”, obviously.
The usefulness of Captchas is being destroyed by “AI” too. And ironically they were used to train certain types of Machine Learning.
Group chats.
Peer-to-peer systems? Systems where you have to physically be at the location to get data, maybe, so cyber-cafe-like things. Or back to the old system: go to the regular bars, repair cafés or hobby places.
Synchronous spaces.
Social VR does not have a lot of the ills of social media. You only have to deal with people much like you would IRL.
Doesn’t mean that the fediverse is immune.
News stories and narratives are still fought over by actors on all sides and sometimes by entities that might be bots. And there are a lot of auto-generating content bots that post stuff or repost old content from other sites like Reddit.
Especially since being immune to censorship is kind of the point of the fediverse.
If you’re even a tiny bit smart about it, you can start hundreds of sock puppet instances and flood other instances with bullshit.
I try to avoid talking about how indefensibly terrible Lemmy’s anti-spam and anti-brigading measures are for fear of someone doing something with the information. I imagine the only thing keeping subtle disinfo and spam from completely overtaking Lemmy is how small its reach would be. Doing the same thing to Reddit is a hundred times more effective, and systemically accepted. Reddit’s admins like engagement.
Put in those tickets. It’s a community effort y’know.
I feel the same about a lot of Fediverse apps right now. They’re kinda just coasting on the fact that they’re not big enough for most spammers to care about. But they need to put in solid defenses and moderation tools before that happens
Another reason to block federation with Threads.
Meta has the most resources to combat spam and abuse.
And the least demonstrated desire to do so.
Meta will likely actually moderate against spambots because they want you to fucking pay them for that service. The problem is, they aren’t too interested in moderating hate speech.
So, you’re suggesting that it is better that they are profiting from helping state actors and hate groups?
Edit: No, they are not suggesting that. I misunderstood their meaning.
I don’t think I made a value statement whatsoever. I think calling it a problem and hate speech would’ve been enough of a clue as to how I felt about it, however.
It’s actually why I support most instances defederating from them
Ah. I clearly misunderstood your meaning. Sorry about that.
It’s an arms race and Lemmy is only a small player right now so no one really pays attention to our little corner. But as soon as we get past a certain threshold, we’ll be dealing with the same problems as well.
Can’t some instances make some sort of agreement and have a whitelist of instances to not block? People would need to register to add their instances to the list, and some common measures would be applied to restrict someone from registering several instances at once, and banning people who misuse the system.
That wouldn’t solve the problem, but perhaps would make things more manageable.
You can’t block people. How would you know who registered the domain?
What you’re proposing is pretty similar to the current state of email. It’s almost impossible to set up your own small mail server and have it communicate with the “mailiverse”, since everyone will just assume you’re spam. And that led to a situation where 99% of people are with one of the huge mail providers.
you’re right, the matter is more complicated than I thought…
It’s extremely complicated and I don’t really see a solution.
You’d need gigantic resources and trust in those resources to vet accounts, comments, instances. Or very in depth verification processes, which in turn would limit privacy.
What I actually found interesting was Bluesky’s invite system. Each user got a limited number of invite links, and if a certain number of your invitees were banned, you’d be banned/flagged too. That creates a web of trust, but of course also makes anonymous accounts impossible.
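A minimal sketch of that kind of invite-tree heuristic, with hypothetical thresholds (not Bluesky’s actual implementation, which isn’t described here):

```python
# Flag inviters whose invitees keep getting banned (toy thresholds).

def flag_inviters(invites, banned, min_invitees=3, ban_ratio=0.5):
    """invites maps inviter -> list of invitees; banned is a set of users.
    Flag inviters where at least ban_ratio of their invitees were banned."""
    flagged = set()
    for inviter, invitees in invites.items():
        if len(invitees) < min_invitees:
            continue  # too few data points to judge fairly
        banned_count = sum(1 for u in invitees if u in banned)
        if banned_count / len(invitees) >= ban_ratio:
            flagged.add(inviter)
    return flagged

invites = {"spammer": ["b1", "b2", "b3", "ok1"], "normal": ["x", "y", "z"]}
banned = {"b1", "b2", "b3"}
print(flag_inviters(invites, banned))  # {'spammer'}
```

Applied recursively up the invite tree, this is what makes handing out invites to bot farms expensive: every ban burns reputation for the whole chain that vouched for the account.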
The only reason reddit was valuable was because it was from real people who weren't paid off. Well that's ruined now.
Yeah, I’ve noticed that a bit lately anyways. Maybe I’m looking up stuff that has less of a community on Reddit, and thus has less discussion, but I have absolutely noticed some comments have a single product name-drop with little clarity for why they liked the product. It starts to feel like they’re just ads (generated or otherwise) meant to trick you into thinking Reddit users are liking the product.
AI is going to just make it worse, and cause Reddit to not be a good go-to for actual reviews and discussion on pros/cons.
Exactly. Usually there's a conversation or a quick consensus on one or two things. But I've been seeing lots of single answers or just ads
There’s an excellent chance that even some of the “authentic” discussions you see are word-for-word reposts of old posts and comments, created by bots to build up karma in order to be sold to spammers and influence peddlers down the line.
The first obvious wave of this stuff, to me, was the video conversion ripoff software and similar. They had people looking around for questions their software was possibly a solution for. Sometimes they would act like users, other times it was more neutral info, but still clear it was self promotion because of what was recommended.
I wanted to figure out what game hosting sites were good and Google pointed me to reddit…every thread was full of boilerplate ads for different sites. The comments were the most obvious, marketing-approved sentences I’ve ever seen
Everything I can find online seems to be advertisements or paid reviews (Also advertisements) when looking for anything anymore. Businesses are terrified of an open honest conversation about what is good and what is not
If you’re terrified of honest conversations, your product is probably shit.
Marques Brownlee had a video recently about the question “do bad reviews kill products?” that highlights the issue well
Exactly. Every company is terrified of honest conversation since it makes putting out shit harder.
I so don’t understand how to run a business.
Spend $Billions shoving advertising down everyone’s throats? Absolutely!
Just make a good product and provide good customer support? It will never work!
Option 1 is easy and any idiot can throw money at it to solve the problem. Option 2 requires talented people and real effort.
If only people moved to an open and federated platform. I mean, I don’t have to say that I hate Reddit since I’m here, but still, whenever I Google a problem, Reddit answers are among the most useful results. Especially about something local.
This isn’t a problem that can be solved with a technical solution that isn’t itself extremely dystopian in nature.
This is a problem that requires legislation and criminal liability, or genuine punitive civil liability that pierces the corporate legal shields.
Don’t hold your breath for a serious solution to present itself.
Do you think legislation and laws would be reasonable for trolls who ban evade and disrupt and destroy synchronous online social spaces too?
The same issue happens there: zero repercussions, ban evasion is almost always possible, and the only foolproof solutions seem to quickly turn dystopian too.
Ban evasion and cheating are becoming a bigger and bigger issue in online games/social spaces. And all the nerds will agree it’s impossible to fix. And many feel it’s just normal culture. But it’s not sustainable, and with AI and an ever escalating cat and mouse game, it’s going to continue to get worse.
Can anyone suggest a solution that is on the horizon?
No, I’m a free speech absolutist when it comes to private citizens. Be they communists, Nazis, Democrats, trolls, assholes or furries, the government should have no role in regulating their speech outside of reasonable exceptions i.e. yelling fire in a crowded theater, threats of physical violence, etc.
My moral conviction on relative free speech absolutism ends at the articles of incorporation, or other nakedly profit driven speech e.g. market manipulation.
So if the trolls and ban evaders are acting on behalf of a company, or for profit driven interests, their speech should be regulated. If they’re just assholes or trolls, that’s a problem for the website and mod teams.
Thanks for replying. As far as speech goes, I agree with you. And I agree that moderation, using tools like blocking or muting and social mores should take care of things.
Setting speech aside. What about people who hack the spaces, break things, blast ear piercing sounds, crash other users or otherwise do not use symbols or speech to destroy or harm a synchronous social space? And I am assuming these are mostly always individuals. Not some corporate scheme.
If the rumor is true that a reddit/google training deal is what led to reddit getting boosted in search results, this would be a direct result of reddit’s own actions.
I appreciate the mostly benign neglect we had for awhile. Now that they’re paying attention it’s just all bad. Or would be, if I was there. HA.
Wow this is gross. I’m gonna wash it down with some MOUNTAIN DEW ™
How exactly are they poisoning a pool of toxic waste?
Pissing into an ocean of piss.
Now it’s not only toxic, it’s also acidic and instead of killing you, it’ll also melt you.
How visiting Reddit will feel
Here is an alternative Piped link(s):
How visiting Reddit will feel
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I’m open-source; check me out at GitHub.
Well, that was the last bit of usefulness I used to get out of google. I’ve been on yahoo for a while now
Yahoo is still alive?
Yep, it’s sort of what google used to be. It took me a bit of setup tho. They really like to default to showing you a ton of news and crap. But after turning that all off I’m left with a super clean ui and useful search results
I see the yahoo ai bot is working well. /s
Absolutely, I am definitely not human
I was about ready to downvote out of pure annoyance lol.
What an absolute piece of shit. Just a general trash person to even think of this concept.
His surname translates from Russian as ‘white lips’. No wonder he is a ghoul.
Yeah this isn’t new.
Ever wonder why you are such a fan of shitty played out franchises?
Well, that’s certainly one way for your brand to lose a lot of respect once it becomes apparent. Much like when I want to lose respect for myself, I use Chum brand dog food. Chum, it’s still food, alright?
I still haven’t seen a use of AI that doesn’t serve state or corporate interests first, before the general public. AI medical diagnostics comes the closest, but that’s being leveraged to justify further staffing reductions, not an additional check.
The AI-captcha wars are on, and no matter who wins we lose.
Not necessarily.
AI is helping me learn and program C++. It's built into my IDE. Much more efficient than searching stackoverflow. Whenever it comes up with something I've never seen before, I learn what that thing does and mentally store it away for future use. As time goes on, I'm relying on it less and less. But right now it's amazing. It's like having a tutor right there with you who you can ask questions anytime, 24/7.
I hope a point comes where my kid can just talk to a computer, tell it the specifics of the program he wants to create, and have the computer just program the entire thing. That's the future we are headed towards. Ordinary folks being able to create software.
I’ll agree there’s huge potential for ‘assistant’ roles (exactly like you’re using) to give a concise summary for quick understanding. But LLMs aren’t knowledgeable like an accredited professor or tutor is, understanding the context and nuance of the topic. LLMs are very good at scraping together data and presenting the shallowest of information, but their limits get exposed quickly when you try to go into a topic.
For instance, I was working on a project that required very long-term storage (10+ years) with intermittent exposure to open air, and was concerned about oxidation and rust. ChatGPT was very adamant that desiccant alone was sufficient (wrong) and that VCI packs would last (also wrong). It did a great job of repackaging corporate ad-copy and industrial white papers written by humans, but not of providing an objective answer to a semi complex question.
I guess it's not great for things requiring domain knowledge. Programming seems to be easy for it, as programs are very structured, predictable, and logical. That's where its pattern-matching-and-prediction abilities shine.
This market is expected to replace the same market that just used bots to achieve the same thing
I don’t understand how Lemmy/Mastodon will handle similar problems. Spammers crafting fake accounts to give AI generated comments for promotions
The only thing we reasonably have is security through obscurity. We are something bigger than a forum but smaller than Reddit, in terms of active user size. If such a thing were to happen here, mods could handle it more easily probably (like when we had the spammer of the Japanese text back then), but if it were to happen on a larger scale than what we have it would be harder to deal with.
Mostly it seems to be handled here with that URL blacklist automod.
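For what it’s worth, the core of such an automod is tiny. A minimal sketch with a made-up blacklist (real mod bots layer rate limits and account-age checks on top):

```python
# Minimal URL-blacklist automod check (hypothetical domains).
import re
from urllib.parse import urlparse

BLACKLIST = {"spam-shop.example", "shady-seo.example"}

def should_remove(comment: str) -> bool:
    """Remove a comment if any linked domain is on the blacklist."""
    for url in re.findall(r"https?://\S+", comment):
        host = (urlparse(url).hostname or "").lower()
        # Match the domain itself and any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in BLACKLIST):
            return True
    return False

print(should_remove("Buy now at https://deals.spam-shop.example/item"))  # True
print(should_remove("See https://lemmy.world/c/technology for discussion"))  # False
```

Which is also why blacklists only catch the lazy spam: an LLM bot that name-drops a product without linking it sails right past this kind of filter.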
I think the real danger here is subtlety. What happens when somebody asks for recommendations on a printer, or complains about their printer being bad, and all of a sudden some long-established account recommends a product they’ve been happy with for years. And it turns out it’s just an AI bot shilling for Brother.
For one, well-established brands have less incentive to engage in this.
Second, in this example, the account in question being a “long established user” would seem to indicate you think these spam companies are going to be playing a long game. They won’t. That’s too much effort and too expensive. They will do all of this on the cheap, and it will be very obvious.
This is not some sophisticated infiltration operation with cutting edge AI. This is just auto generated spam in a new upgraded form. We will learn to catch it, like we’ve learned to catch it before.
I mean, it doesn’t have to be expensive. And it also doesn’t have to be particularly cutting edge. Start throwing some credits into an LLM API, have it randomly read and help people out in different groups. Once it reaches some amount of reputation, have it quietly shill for them. Pull out posts that contain keywords. Have the AI consume the posts and figure out if they have to do with what they sound like they do. Have it subtly do product placement. None of this is particularly difficult or groundbreaking. But it could help shape our buying habits.
There’s one advantage on the fediverse. We don’t have the corporations like reddit manipulating our feeds, censoring what they dislike, and promoting shit. This alone makes using the fediverse worth for me.
When it comes to problems involving the users themselves, things aren’t that different, and we don’t have much to do.
yet. Once we have enough users that it’s worth their effort to target, the bullshit will absolutely come.
they can perhaps create instances, pay malicious users, try some embrace, extend, extinguish approach or something, but they can’t manipulate the code running on the instances we use, so they can’t have direct power over it. Or am I missing something? I’m new to the fediverse.
There's very little to prevent them just pretending to be average users and very little preventing someone from just signing up a bunch of separate accounts to a bunch of separate instances.
No great automated way to tell whether someone is here legitimately.
Yeah, and that is true for a lot of service. Sybil attack is indeed quite hard to prevent since malicious users can blend with legitimate ones.
Federation means if you are federated then sure you get some BS. Otherwise, business as usual. Now, making sure there is no paid user or corporate bot is another matter entirely since it relies on instance moderators.
Corporations aren’t the only ones with incentives to do that. Reddit was very hands off for a good long while, but don’t expect that same neutral mentality from fediverse admins.
I kind of feel like the opposite, for a lot of instances, 'mods' are just a few guys who check in sporadically whereas larger companies can mobilize full teams in times of crisis, it might take them a bit of time to spin things up, but there are existing processes to handle it.
I think spam might be what kills this.
Hmm, good point.
If a community is so small that the mod team can be so inactive, there’s no incentive for the company to put any effort into spamming it like you’re suggesting.
And if they do end up getting a shit ton of spam in there, and it sits around for a bit until a moderator checks in, so what? They’ll just clean it up and keep going.
I’m not sure why people are so worried about this. It’s been possible for bad actors to overrun small communities with automated junk for a very long time, across many different platforms, some that predate Reddit. It just gets cleaned up and things keep going.
It’s not like if they get some AI produced garbage into your community, it infects it like a virus that cannot be expelled.
The same way it’s handled on Reddit: moderators.
Some will get through and sit for a few days but eventually the account will make itself obvious and get removed.
It’s not exactly difficult to spot these things. If an account spends the majority of its existence on a social media site talking about products, even if it adds some AI-generated bullshit here and there to seem like a regular person, it’s still pretty obvious.
If the account seems to show up pretty regularly in threads to suggest the same things, there’s an indicator right there.
Hell, you can effectively bait them by making a post asking for suggestions on things.
They also just tend to have pretty predictable styles of speech, and never fail to post the URL with their suggestion.
That’s like adding caustic soda to bleach. Just made the poison stronger
Haaaaaaaaaaaaaaa!
Enjoy your open, impartial platform Reditards.
I just consider any comment after Jun 2023 to be compromised. Anyone who stayed after that date either doesn’t have a clue, or is sponsored content.
“i remember when reply guy was a term used for someone notorious for replying to things in a specific manner”
“take your meds grandpa, it’s getting late”
yeah, the internet is doomed to be unusable if AI just keeps getting more insidious like this
yet more companies tie themselves to online platforms, websites, and other models of operation that depend on being always connected.
maybe the world needs a reboot, just get rid of it all and start from scratch
I do kind of feel like this part of the experiment might just be coming to a close.
There's no "if AI just keeps getting more insidious", the barrier for entry is too small. AI is going to keep doing the things it's already doing, just more efficiently, and it doesn't matter that much how we feel about whether those things are good or bad. I feel like the things it is starting to ruin are probably just going to be ruined.
That would destroy all the old good vintage stuff and leave us with machines that immediately fill the vacant space with pure trash.
rapture but with technology would be pretty funny
save the good old stuff and burn the rest
Correction - AI is poisoning everything when it is not regulated and moderated.
Reddit has been poisoning itself for a while, what’s the difference? Just AI borrowing from the shithead behavior?
Lol, you think regulation and moderation aren’t poison themselves.
The regulations we implement are written by the Sam Bankman Frieds and Elon Musks who can capture the regulatory agencies. The moderation is itself increasingly automated, for the purpose of inflating perceived quality and quantity of interactions on the website.
Get back to a low-population IRC or Discord server, a small social media channel, or a… idfk… Lemmy instance? Suddenly regulation and moderation by, of, and for the user base starts looking much nicer.
Lol, you think allowing people and businesses to do whatever the fuck they want is a good thing.
So the human shills that already destroyed good faith in forums and online communities over time are now being fully outsourced to AI. Amazon is itself a prime source of enshittification: from fake reviews to everyone with a webpage having affiliate links trying to sell you some shit or other, including news outlets. It turned everyone into a salesperson.
I Turned Bottles of Amazon Drivers’ Pee into a #1 Bestseller. Amazon don’t care about their workers’ bladders, but do you know what they do care about? Selling stuff.
I called this shit out like a year ago. It’s the end of any viable online searching having much truth to it. All we’ll have left to trust is YouTube videos from Project Farm.
It kinda seems like the end of the Google era. What will we search Google for when the results are all crap? These are the death gasps of the internet I/we grew up with.
Maybe web rings of the 90s were not such a bad idea! Let’s bring 'em back!
Gemini webrings are the future?
They would poison that shit as well unfortunately. The concept is great though.
Eh, how’d you do that?
Do what? Webrings?
How do you poison them.
Create sites that look like legit websites, then slowly ramp-up the bullshit. Same tactic as always.
So? Every part of a web ring is a site the webmaster of which can remove that banner at any moment.
Remember when you could type a vague plot of a film you’d heard about into Google and it’d be the first result?
Nah doesn’t work anymore
Saw a trailer for a french film so I searched “french film 2024 boys live in woods seven years”
Google - 2024 BEST FRENCH FILMS/TOP TEN FRENCH FILMS YOU MUST SEE THIS YEAR/ALL TIME BEST FRENCH MOVIES
Absolute fucking gash
I’ve not been too impressed with Kagi search, but at least the top result there was “Frères 2024”
I honestly don’t remember this at all. I remember priding myself on my “google-fu” and knowing how to search to get what I, or other people, needed. Which usually required understanding the precise language you would need to use, not something vague. But over the years it’s gotten harder and harder, and now I get frustrated with how hard it has become to find something useful. I’ve had to go back to finding places I trust for information and looking through them.
Although, ironically, I can do what you’re talking about with ai now.
It was absolutely a thing and one of the reasons Google became wildly popular at first
When?
TUESDAY
I’m feeling old and I’m only 28.
Cause in my early childhood in 2003-2007 we would resort to search engines only when we couldn’t find something by better (but more manual and social) means.
Because - mwahahaha - most of the results were machine-generated crap.
So I actually feel quite uplifted by people promising that the Web will get back to normal in this sense.
I ran into this issue while researching standing desks recently. There are very few places on the internet where you can find verifiably human-written comparisons between standing desk brands. Comments on Reddit all seem to be written by bots or people affiliated with the brands. Luckily I managed to find a YouTube reviewer who did some real comparisons.
Generative AI has really become a poison. It’ll be worse once the generative AI is trained on its own output.
You’re two years late.
Maybe not for the reputable ones, that’s 2026, but these shysters have been digging out the bottom of the swimming pool for years.
theconversation.com/researchers-warn-we-could-run…
New models already train on synthetic data. It’s already a solved problem.
Is it really a solution, though, or is it just GIGO?
For example, GPT-4 is about as biased as the medical literature it was trained on, not less biased than its training input, and thereby more inaccurate than humans:
www.thelancet.com/journals/landig/…/fulltext
All the latest models are trained on synthetic data generated by GPT-4, even the newer versions of GPT-4 itself. OpenAI realized it too late and had to edit their license after Claude was launched. Human-generated data could only get us so far; the recent Phi-3 models, which manage to perform very well for their size (3B parameters), can only achieve this feat because of synthetic data generated by AI.
I didn’t read the paper you mentioned, but recent LLMs have progressed a lot, not just on benchmarks but also when evaluated by real humans.
Here’s my prediction. Over the next couple decades the internet is going to be so saturated with fake shit and fake people, it’ll become impossible to use effectively, like cable television. After this happens for a while, someone is going to create a fast private internet, like a whole new protocol, and it’s going to require ID verification (fortunately automated by AI) to use. Your name, age, country, and state are all public to everybody else and embedded into the protocol.
The new ‘humans only’ internet will be the new streaming and eventually it’ll take over the web (until they eventually figure out how to ruin that too). In the meantime, they’ll continue to exploit the infested hellscape internet because everybody’s grandma and grampa are still on it.
I would rather wade with bots than exist on a fully doxxed Internet.
Yup. I have my own prediction - that humanity will finally understand the wisdom of the PGP web of trust, and use it for friend-to-friend networks over the Internet. After all, you can exchange public keys by scanning QR codes, it’s very intuitive now.
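The web-of-trust idea above can be sketched in a few lines (a toy model, not any real PGP implementation: key IDs and the trust rule are made up for illustration). Signatures form a graph, and you accept a key only if a short chain of signatures connects it to your own.

```python
from collections import deque

def is_trusted(signatures, my_key, target_key, max_hops=3):
    """Breadth-first search over a toy web of trust.

    `signatures` maps a key ID to the set of key IDs it has signed.
    A key is trusted if a chain of signatures no longer than
    `max_hops` connects our own key to it.
    """
    if target_key == my_key:
        return True
    seen = {my_key}
    queue = deque([(my_key, 0)])
    while queue:
        key, hops = queue.popleft()
        if hops == max_hops:
            continue  # chain would exceed the hop limit
        for signed in signatures.get(key, set()):
            if signed == target_key:
                return True
            if signed not in seen:
                seen.add(signed)
                queue.append((signed, hops + 1))
    return False

# Alice signed Bob's key, Bob signed Carol's; Mallory is unsigned.
sigs = {"alice": {"bob"}, "bob": {"carol"}}
print(is_trusted(sigs, "alice", "carol"))    # True
print(is_trusted(sigs, "alice", "mallory"))  # False
```

Real PGP adds signature verification, key expiry, and per-signer trust levels on top, but the reachability idea is the core of it.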
That would be cool. No bots. Unfortunately, corps, govs and other such mythical demons really want to be able to automate influencing public opinion. So this won’t happen until the potential of the Web for such influence is sucked dry. That is, until nobody in their right mind would use it.
That sounds very reasonable as a prediction. I could see it being a pretty interesting black mirror episode. I would love it to stay as fiction though.
I think we’ll just go back to valuing in-person interactions way more than digital ones.
This shit isn’t new, companies have been exploiting reddit to push products as if they’re real people for years. The “put reddit after your search to fix it!!!” thing was a massive boon for these shady advertisers, who no doubt benefitted from random people assuming product placements were genuine.
AI Is Poison~~ing Reddit to Promote Products and Game Google With ‘Parasite SEO’~~
FTFY
AI is a tool. It can be used for good and it can be used for poison. Just because you see it being used for poison more often doesn’t mean you should be against AI. Maybe lay the blame on the people using it for poison.
You don’t get to blame AI for this. Reddit was already overrun by corporate and US gov trolls long before AI.
The problem is the magnitude, but yeah, even before 2020 Google was becoming shit and being overrun by shitty blogspam trying to sell you stuff with articles clearly written by machines. The only difference is that it was easier to spot and harder to do. But they did it anyway
These things became shit around 2009. Or immediately after becoming popular enough to push out LiveJournal and other such platforms (the original Web 2.0, or maybe Web 1.9 one should call them).
What does this have to do with search engines? Well, when they existed alongside web directories and other alternative, more social and manual ways of finding information, you’d just switch to those if a search engine became too blatant in its promotion and in hiding what it didn’t want you to see. You’d be able to compare one to another and notice that Google worked badly in a given case. You wouldn’t be influenced in the end.
Now that what Google gives you has become the criterion for what you’re supposed to associate with a query, and the same for social media, the outcome was decided.
Search: How to do X
“First, what is X”
“Why would you want to do X”
“Finally, here’s how you do X.”
Just gotta repeat X as much as possible under as many different contexts to ensure your results end up at the top.
It’s really disgusting and I’m saddened by how we constantly reward people like this for making the world a worse place.
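A toy sketch of why that stuffing works against a naive ranker (purely hypothetical scoring, not Google’s actual algorithm): if relevance is approximated by raw keyword frequency, the padded page outranks the concise one.

```python
import re

def naive_score(page_text, keyword):
    """Score a page by how often the keyword appears (a toy model
    of keyword-density ranking, not any real search engine)."""
    return len(re.findall(rf"\b{re.escape(keyword)}\b", page_text.lower()))

concise = "How to sharpen a knife: use a whetstone at a steady angle."
stuffed = ("What is a knife? Why would you want a knife? Best knife guide. "
           "Finally, here is how to sharpen a knife: knife, knife, whetstone.")

print(naive_score(concise, "knife"))  # 1
print(naive_score(stuffed, "knife"))  # 6
```

Real ranking has long penalized crude keyword stuffing, which is exactly why the padding gets dressed up as “What is X / Why X / How to X” sections instead of bare repetition.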
“New poison has been added to arsenic. Should you stop drinking it? Subscribe to find out.”
OMG 😂, so good! Your comment I mean, not arsenic.
Ftfy
Really just “trolls” in general, but
Large chunks of reddit read like US State Department press releases.
Yeah, my point was just that it’d be silly to think it was just us gov doing it and not others.
Logitech decided to include some AI shit in Logi Options+, I uninstalled that crap ASAP.
When googling something, append -site:reddit.com
This is a direct consequence of Google targeting Reddit posts in its search results. Hopefully forum groups like Lemmy don’t get buried under a mountain of garbage as well. As long as advertisers are able to destroy public forums and communities with ads, with ad-based revenue sites like Google directing whom to target, we will always be creating something great while constantly trying to keep advertisers from turning it into a pile of crap.
The history of TV, in reverse. And then forward again.
At first, it was an impossibly expensive medium ruled by a cartel of agencies and advertisers. Eventually, HBO comes along and shows you don’t have to just make a bunch of lowest-common-denominator drivel.
Netflix eventually shows that the internet can be a way cheaper model than cable. Finally, money shows up in the streaming model, remaking advertiser friendly cable in the internet age. All in about 2.5 decades.
The ebb and flow of consumerism.
I hope one day we have enough data to recognize it and stop it.
That’s just for small players. Big corps have probably been doing it for years.