FOSS infrastructure is under attack by AI companies (thelibre.news)
from simple@lemm.ee to technology@lemmy.world on 20 Mar 15:06
https://lemm.ee/post/58981372

#technology

threaded - newest

wjs018@piefed.social on 20 Mar 15:41 next collapse

Really great piece. We have recently seen many popular lemmy instances struggle under recent scraping waves, and that is hardly the first time its happened. I have some firsthand experience with the second part of this article that talks about AI-generated bug reports/vulnerabilities for open source projects.

I help maintain a python library and got a bug report a couple weeks back of a user getting a type-checking issue and a bit of additional information. It didn't strictly follow the bug report template we use, but it was well organized enough, so I spent some time digging into it and came up with no way to reproduce this at all. Thankfully, the lead maintainer was able to spot the report for what it was and just closed it and saved me from further efforts to diagnose the issue (after an hour or two were burned already).

BrianTheeBiscuiteer@lemmy.world on 20 Mar 16:14 next collapse

Testing out a theory with ChatGPT there might be a way, albeit clunky, to detect AI. I asked ChatGPT a simple math question then told it to disregard the rest of the message, then I asked it if it was AI. It answered the math question and told me it was ai. Now a bot probably won’t admit to being AI but it might be foolish enough to consider instruction that you explicitly told it not to follow.

Or you might simply be able to waste its resources by asking it to do something computationally difficult that most people would just reject outright.

Of course all of this could just result in making AI even harder to detect once it learns these tricks. 😬

itsralC@lemm.ee on 21 Mar 17:15 collapse

These aren’t actual LLMs scraping the web, they’re your usual scraping bots used in an industrial scale, disregarding conventions about what they should or shouldn’t scrape.

HubertManne@piefed.social on 20 Mar 16:19 next collapse

Any idea what the point of these are then? Sounds like its reporting a fake bug.

wjs018@piefed.social on 20 Mar 16:31 collapse

The theory that the lead maintainer had (he is an actual software developer, I just dabble), is that it might be a type of reinforcement learning:

  • Get your LLM to create what it thinks are valid bug reports/issues
  • Monitor the outcome of those issues (closed immediately, discussion, eventual pull request)
  • Use those outcomes to assign how "good" or "bad" that generated issue was
  • Use that scoring as a way to feed back into the model to influence it to create more "good" issues

If this is what's happening, then it's essentially offloading your LLM's reinforcement learning scoring to open source maintainers.

HubertManne@piefed.social on 20 Mar 16:36 next collapse

Thats wild. I don't have much hope for llm's if things like this is how they are doing things and I would not be surprised given how well they don't work. Too much quantity over quality in training.

SabinStargem@lemmy.today on 21 Mar 14:05 collapse

Honestly, I would be alright with this if the AI companies paid Github so that the server infrastructure can be upgraded. Having AI that can figure out bugs and error reports could be really useful for our society. For example, your computer rebooting for no apparent reason? The AI can check the diagnostic reports, combine them with online reports, and narrow down the possibilities.

In the long run, this could also help maintainers as well. If they can have AI for testing programs, the maintainers won’t have to hope for volunteers or rely on paid QA for detecting issues.

What Github & AI companies should do, is an opt-in program for maintainers. If they allow the AI to officially make reports, Github should offer an reward of some kind to their users. Allocate to each maintainer a number of credits so that they can discuss the report with the AI in realtime, plus $10 bucks for each hour spent on resolving the issue.

Sadly, I have the feeling that malignant capitalism would demand maintainers to sacrifice their time for nothing but irritation.

Dave@lemmy.nz on 20 Mar 20:21 collapse

AI scrapers are a massive issue for Lemmy instances. I’m gonna try some things in this article because there are enough of them identifying themselves with user agents that I didn’t even think of the ones lying about it.

I guess a bonus (?) is that with 1000 Lemmy instances, the bots get the Lemmy content 1000 times so our input has 1000 times the weighting of reddit.

RobotToaster@mander.xyz on 20 Mar 15:53 next collapse

If an AI is detecting bugs, the least it could do is file a pull request, these things are supposed to be master coders right? 🙃

reksas@sopuli.xyz on 20 Mar 19:39 collapse

to me, ai is a bit like bucket of water if you replace the water with “information”. Its a tool and it cant do anything on its own, you could make a program and instruct it to do something but it would work just as chaotically as when you generate stuff with ai. It annoys me so much to see so many(people in general) think that what they call ai is in anyway capable of independent action. It just does what you tell it to do and it does it based on how it has been trained, which is also why relying on ai trained by someone you shouldnt trust is bad idea.

fjordo@feddit.uk on 20 Mar 16:38 next collapse

I wish these companies would realise that acting like this is a very fast way to get scraping outlawed altogether, which is a shame because it can be genuinely useful (archival, automation, etc).

jol@discuss.tchncs.de on 20 Mar 17:28 collapse

How can you outlaw something a company in another conhtinent is doing? And specially when they are becoming better as disguising themselves as normal traffic? What will happen is that politicians will see this as another reason to push for everyone having their ID associated with their Internet traffic.

MoogleMaestro@lemmy.zip on 20 Mar 18:06 next collapse

What will happen is that politicians will see this as another reason to push for everyone having their ID associated with their Internet traffic.

You’re right. Which is exactly why companies should be exhibiting better behaviour and self regulate before they make the internet infinitely worse off for everyone.

C45513@lemm.ee on 20 Mar 18:32 next collapse

according to history, this sadly never works

fjordo@feddit.uk on 20 Mar 18:58 next collapse

Exactly, we’ve already seen this in the past. GDPR is a good example. Whilst I’m glad this regulation exists, it wouldn’t be necessary if megacorps would have behaved.

big_slap@lemmy.world on 20 Mar 19:06 collapse

self regulation is a joke. a few bad apples always spoil the bunch.

what needs to happen is regulation, period. force all companies to abide by laws that just make sense, and all these problems go away.

see: GDPR

oldfart@lemm.ee on 20 Mar 21:06 collapse

What did GDPR solve? Did we get rid of advertisers sharing data?

big_slap@lemmy.world on 20 Mar 21:34 collapse

nope, but now we are aware of how many times our data is shared with because of it.

here’s a short breakdown of what it has accomplished:

The GDPR lists six data processing principles that data controllers must comply with. Personal data must be:

Processed lawfully, fairly and transparently.
Collected only for specific legitimate purposes.
Adequate, relevant and limited to what is necessary.
Accurate and, where necessary, kept up to date.
Stored only as long as is necessary.
Processed in a manner that ensures appropriate security.

Lawful processing

Except for special categories of personal data, which cannot be processed except under certain circumstances, personal data can only be processed:

If the data subject has given their consent;
To meet contractual obligations;
To comply with legal obligations;
To protect the data subject’s vital interests;
For tasks in the public interest; and
For the legitimate interests of the organisation.

Data subjects’ rights

Data subjects have:

The right to be informed;
The right of access;
The right to rectification;
The right to erasure;
The right to restrict processing;
The right to data portability;
The right to object; and
Rights concerning automated decision-making and profiling.

Learn how to map your data and establish a lawful basis for processing Valid consent

There are stricter rules regarding consent:

Consent must be freely given, specific, informed and unambiguous.
A request for consent must be intelligible and in clear, plain language.
Silence, pre-ticked boxes and inactivity will no longer suffice as consent.
Consent can be withdrawn at any time.
Consent for online services from a child is only valid with parental authorisation.
Organisations must be able to evidence consent.

Data protection by design and by default

Data controllers and processors must implement technical and organisational measures that are designed to implement the data processing principles effectively.

Appropriate safeguards should be integrated into the processing.
Data protection must be considered at the design stage of any new process, system or technology.
A DPIA (data protection impact assessment) is an integral part of privacy by design.

Transparency and privacy notices

Organisations must be clear about how, why and by whom personal data will be processed.

When personal data is collected directly from data subjects, data controllers must provide a privacy notice at the time of collection.
When personal data is not obtained directly from data subjects, data controllers must provide a privacy notice without undue delay, and within a month. This must be done the first time they communicate with the data subject.
For all processing activities, data controllers must decide how the data subjects will be informed, and design privacy notices accordingly. Notices can be issued in stages.
Privacy notices must be provided to data subjects in a concise, transparent and easily accessible form, using clear and plain language.

Data transfers outside the EU

Where the EU has designated a country as providing an adequate level of data protection;
Through standard contractual clauses or binding corporate rules; or
By complying with an app
oldfart@lemm.ee on 21 Mar 06:06 next collapse

So now the adtech companies need to hire a minimum wage person in the EU, and I can write them a letter requesting they remove my anonimized data, doxxing myself in the process. Oh and now I know they’re sharing with 395 partners, as if that wasn’t obvious from uBlock before. And I get to sign a permission to process my data if I want to see a doctor.

big_slap@lemmy.world on 21 Mar 13:21 collapse

yes to everything you said, what point are you trying to make?

gigglybastard@lemmy.world on 21 Mar 14:54 collapse

that sounds great in theory but a) noone respects this and b) noone enforces this

i know because i reported a bunch of companies and websites and every time i got a reply “welp, there’s nothing we can do”

GDRP is useless

big_slap@lemmy.world on 21 Mar 15:59 collapse

a) noone respects this

well, the websites I frequent always ask me if I want to allow for tracking cookies ever since GDPR was implemented. I think it worked for websites that want to comply with the law.

also, that’s disappointing to hear about them not taking action on companies that don’t comply. you went through the whole process several times? which country are you located in? I’m just curious 🙂

[deleted] on 21 Mar 19:09 collapse

.

Buelldozer@lemmy.today on 20 Mar 20:43 collapse

What will happen is that politicians will see this as another reason to push for everyone having their ID associated with their Internet traffic.

Yes, because like or not that’s the only possible solution. If all traffic was required to be signed and the signatures were tied to an entity then you could refuse unsigned traffic and if signed traffic was causing problems you’d know who it was and have recourse.

I don’t like this solution but it’s the only way forward that I can see.

iarigby@lemmy.world on 20 Mar 22:28 next collapse

is it? Someone mentioned proof of work being effective for Tor.

Buelldozer@lemmy.today on 21 Mar 15:01 collapse

PoW has the advantage of being anonymous but I don’t like it as solution for the simple fact that it uses more electricity. It’s just not a very green solution.

iarigby@lemmy.world on 21 Mar 15:49 collapse

it doesn’t have to be only meaningless computations. And even if it were, the cost is nothing compared to such a huge scale of privacy infringement

trougnouf@lemmy.world on 20 Mar 23:33 collapse

How do you have more recourse countering a random third world IP vs a random third world person when both are outside your juridiction?

Buelldozer@lemmy.today on 21 Mar 13:30 collapse

Unsigned traffic = drop. Signed traffic that becomes an annoyance = drop. If signed traffic becomes more than an annoyance then you know who to report to the authorities and even in Brazil there’s authorities.

klu9@lemmy.ca on 20 Mar 17:00 next collapse

The Linux Mint forums have been knocked offline multiple times over the last few months, to the point where the admins had to block all Chinese and Brazilian IPs for a while.

deeferg@lemmy.world on 20 Mar 22:03 collapse

This is the first I’ve heard about Brazil in this type of cyber attack. Is it re-routed traffic going there or are there a large number of Brazilian bot farms now?

klu9@lemmy.ca on 20 Mar 22:11 collapse

I don’t know why/how, just know that the admins saw the servers were being overwhelmed by traffic from Brazilian IPs and blocked it for a while.

MonkderVierte@lemmy.ml on 20 Mar 19:20 next collapse

Assuming we could build a new internet from the ground up, what would be the solution? IPFS for load-balancing?

AbsoluteChicagoDog@lemm.ee on 20 Mar 19:50 next collapse

There is no technical solution that will stop corporations with deep pockets in a capitalist society

Dindonmasker@sh.itjust.works on 20 Mar 20:35 collapse

Maybe letters through the mail to receive posts.

dreadbeef@lemmy.dbzer0.com on 20 Mar 20:58 next collapse

How long will USPS last?

WhyJiffie@sh.itjust.works on 20 Mar 21:22 next collapse

so basically what you are saying is to not put information on public places, but only send information to specific people

AbsoluteChicagoDog@lemm.ee on 21 Mar 01:17 collapse

And only then because the USPS is a federal agency. You can bet if private corporations ran it there would be no such privacy.

Buelldozer@lemmy.today on 20 Mar 20:37 next collapse

what would be the solution?

Simple, not allowing anonymous activity. If everything was required to be crypto-graphically signed in such a way that it was tied to a known entity then this could be directly addressed. It’s essentially the same problem that e-mail has with SPAM and not allowing anonymous traffic would mostly solve that problem as well.

Of course many internet users would (rightfully) fight that solution tooth and nail.

shortwavesurfer@lemmy.zip on 20 Mar 21:10 next collapse

Proof of work before connections are established. The Tor network implemented this in August of 2023 and it has helped a ton.

Buelldozer@lemmy.today on 21 Mar 15:10 collapse

PoW uses a lot of electricity on the client side so environmentally it’s a poor solution, especially at scale.

shortwavesurfer@lemmy.zip on 21 Mar 17:04 collapse

For just accessing a simple resource, it does not use a whole lot of power because it only gets activated when the resource is under load and it helps to sort traffic based on effort placed to the POW puzzle. You can choose to place zero effort and be put in the back of the line, but people who choose to put in some small effort will be put in front of you, and people who put in a larger effort will be in front of them until the resource is no longer oversubscribed, and then it will drop back down to zero. That’s how the Tor network handles it and it works incredibly well. It has stopped the denial of service attacks in their tracks. In most cases, it’s hardly ever even active. Just because it is there, deters attacking it.

MonkderVierte@lemmy.ml on 21 Mar 11:51 collapse

No, that’s not a solution, since it would make privacy impossible and bad actors would still find ways around.

melpomenesclevage@lemmy.dbzer0.com on 20 Mar 23:18 next collapse

take the resources from them so they don’t have them anymore. infiltrating the teams that do this and exposing or sabotaging the effort. literally fighting back, possibly in ways that involve giving the CEO’s and prominent investors a free trip to an old coal mine.

short of that…

cy_narrator@discuss.tchncs.de on 21 Mar 02:01 collapse

AI will come up there to abuse it as well

db0@lemmy.dbzer0.com on 20 Mar 20:09 next collapse

Yep, it hit many lemmy servers as well, including mine. I had to block multiple alibaba subnet to get things back to normal. But I’m expecting the next spam wave.

Buelldozer@lemmy.today on 20 Mar 20:27 next collapse

I too read Drew DeVault’s article the other day and I’m still wondering how the hell these companies have access to “tens of thousands” of unique IP addresses. Seriously, how the hell do they have access to so many IP addresses that SysAdmins are resorting to banning entire countries to make it stop?

[deleted] on 20 Mar 22:50 next collapse

.

Buelldozer@lemmy.today on 21 Mar 15:04 collapse

fail2ban

I’m familiar with f2b. I even have several clients licensed with the commercial version but it doesn’t fit this use case as there’s no logon failure for it to work with.

I automatically ban any IP that comes from outside the US because there’s literally no reason for anyone outside the US to make requests to my infra.

I have systems setup with geo-blocking but it’s of limited use due to the prevalence of VPNs.

also, use a WAF on a NAT to expose your apps.

This isn’t a solution either because a WAF has no way to know what traffic is bad so it doesn’t know what to block.

werefreeatlast@lemmy.world on 21 Mar 01:12 next collapse

If you get something like 156.67.234.6, then 7, then 56 etc just block 156.67.0.0/24

Buelldozer@lemmy.today on 21 Mar 15:06 collapse

Sure, network blocking like this has been a thing for decades but it still requires ongoing manual intervention which is what these SysAdmins are complaining about.

festus@lemmy.ca on 21 Mar 13:04 collapse

There are residential IP providers that provide services to scrapers, etc. that involves them having thousands of IPs available from the same IP ranges as real users. They route traffic through these IPs via malware, hacked routers, “free” VPN clients, etc. If you block the IP range for one of these addresses you’ll also block real users.

Buelldozer@lemmy.today on 21 Mar 15:08 collapse

There are residential IP providers that provide services to scrapers, etc. that involves them having thousands of IPs available from the same IP ranges as real users.

Now that makes sense. I hadn’t considered rogue ISPs.

festus@lemmy.ca on 21 Mar 15:16 collapse

It’s not even necessarily the ISPs that are doing it. In many cases they don’t like this because their users start getting blocked on websites; it’s bad actors piggy-packing on legitimate users connections without those users’ knowledge.

pineapplelover@lemm.ee on 20 Mar 21:21 next collapse

They’re afraid

melpomenesclevage@lemmy.dbzer0.com on 20 Mar 23:50 next collapse

i hear there’s a tool called (I think) ‘nepenthe’ that creates a loop for an LLM, if you use that in combination with a fairly tight blacklist of IP’s you’re certain are LLM crawlers, I bet you could do a lot of damage, and maybe make them slow their shit down, or do this in a more reasonable way.

PrivacyDingus@lemmy.world on 21 Mar 08:52 collapse

nepenthe

It’s a Markov-chain-based text generator which could be difficult for people to implement on repos depending upon how they’re hosting them. Regardless, any sensibly-built crawler will have rate limits. This means that although Nepenthe is an interesting thought exercise, it’s only going to do anything to things knocked together by people who haven’t thought about it, not the Big Big companies with the real resources who are likely having the biggest impact.

melpomenesclevage@lemmy.dbzer0.com on 21 Mar 14:46 collapse

might hit a few times, or maybe there’s a version that can puff stuff up the data in the sense of space, and salt it in the sense of utility.

PrivacyDingus@lemmy.world on 25 Mar 08:01 collapse

any way of slowing things down or wasting resources is a gain I guess

grue@lemmy.world on 21 Mar 04:33 next collapse

ELI5 why the AI companies can’t just clone the git repos and do all the slicing and dicing (running git blame etc.) locally instead of running expensive queries on the projects’ servers?

zovits@lemmy.world on 21 Mar 06:51 next collapse

Takes more effort and results in a static snapshot without being able to track the evolution of the project. (disclaimer: I don’t work with ai, but I’d bet this is the reason and also I don’t intend to defend those scraping twatwaffles in any way, but to offer a possible explanation)

Sturgist@lemmy.ca on 21 Mar 13:25 collapse

Also having your victim host the costs is an added benefit

Realitaetsverlust@lemmy.zip on 21 Mar 14:14 next collapse

Because that would cost you money, so just “abusing” someone else’s infrastructure is much cheaper.

green@feddit.nl on 21 Mar 16:26 next collapse

Too many people overestimate the actual capabilities of these companies.

I really do not like saying this because it lacks a lot of nuance, but 90% of programmers are not skilled in their profession. This is not to say they are stupid (though they likely are, see cat-v/harmful) but they do not care about efficiency nor gracefulness - as long as the job gets done.

You assume they are using source control (which is unironically unlikely), you assume they know that they can run a server locally (which I pray they do), and you assume their deadlines allow them to think about actual solutions to problems (which they probably don’t)

Yes, they get paid a lot of money. But this does not say much about skill in an age of apathy and lawlessness

turmacar@lemmy.world on 21 Mar 17:52 collapse

Also, everyone’s solution to a problem is stupid if they’re only given 5 minutes to work on it.

Combine that with it being “free” for them to query the website and expensive to have enough local storage to replicate, even temporarily, all the stuff they want to scrape and it’s kind of a no brainier to ‘just not do that’. The only thing stopping them is morals / whether they want to keep paying rent.

Retropunk64@lemmy.world on 21 Mar 17:30 collapse

They’re stealing peoples data to begin with, they don’t give a fuck at all.

grysbok@lemmy.sdf.org on 21 Mar 14:36 next collapse

It’s also a huge problem for library/archive/museum websites. We try so hard to make data available to everyone, then some rude bots come along and bring the site down. Adding more resources just uses more resources–the bots expand to fill the container.

Fijxu@programming.dev on 21 Mar 14:36 next collapse

AI scrapping is so cancerous. I host a public RedLib instance (redlib.nadeko.net) and due to BingBot and Amazon bots, my instance was always rate limited because the amount of requests they do is insane. What makes me more angry, is that this fucking fuck fuckers use free, privacy respecting services to be able to access Reddit and scrape . THEY CAN’T BE SO GREEDY. Hopefully, blocking their user-agent works fine ;)

green@feddit.nl on 21 Mar 16:16 collapse

Thanks for hosting your instances. I use them often and they’re really well maintained

daq@lemmy.sdf.org on 21 Mar 14:52 next collapse

I’m not sure how they actually implemented it, but you can easily block ML crawlers via cloud flare. Isn’t just about every small site/service behind CF anyway?

grysbok@lemmy.sdf.org on 21 Mar 15:36 collapse

Last I checked, cloudflare requires the user to have JavaScript and cookies enabled. My institution doesn’t want to require those because it would likely impact legitimate users as well as bots.

daq@lemmy.sdf.org on 21 Mar 16:12 collapse

Huh? I can reach my site via curl that has neither. How did you come up with this random set of requirements?

grysbok@lemmy.sdf.org on 21 Mar 17:37 collapse

Odd. I just tried

curl www.scrapingcourse.com/cloudflare-challenge

and got

Enable JavaScript and cookies to continue

I’m clearly not on the same setup as you are, but my off-the-cuff guess is that your curl command was issued from a system that cloudflare already recognized (IP whitelist, cookies, I dunno).

Anyways, I’m reading through this blog post on using cURL with cloudflare-protected sites and I’m finding it interesting.

daq@lemmy.sdf.org on 21 Mar 19:57 collapse

Of course their challenge requires those things. How else could they implement it? Most users will never be presented with a challenge though and it is trivial to disable if you don’t want to ever challenge anyone. I was just saying CF blocks ML crawlers.

01189998819991197253@infosec.pub on 23 Mar 19:42 collapse

Failtoban should add all those scraper IPs, and we need to just flat out block them. Or send them to those mazes. Or redirect them to themselves lol