Open Source devs say AI crawlers dominate traffic, forcing blocks on entire countries (arstechnica.com)
from cm0002@lemmy.world to technology@lemmy.world on 26 Mar 02:16
https://lemmy.world/post/27398098

#technology


mesamunefire@lemmy.world on 26 Mar 02:46 next collapse

Yep, same thing. I have some small servers and was getting hammered by OpenAI-controlled AI crawlers not respecting robots.txt. I had to block all their IP addresses and create an AI black hole to stop them DDoSing my tiny site(s).
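
For reference, asking OpenAI's crawler to stay away looks like the robots.txt snippet below; GPTBot is OpenAI's published crawler user agent. As this comment shows, honoring it is voluntary.

# robots.txt: a polite request that, per this thread, AI crawlers may ignore
User-agent: GPTBot
Disallow: /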

CosmicCleric@lemmy.world on 26 Mar 02:58 next collapse

Man, this current age of AI really sucks.

This comment is licensed under CC BY-NC-SA 4.0

Burghler@sh.itjust.works on 26 Mar 11:04 collapse

You licensed your comment? You just saw first hand that AI crawlers don’t care about legal barriers.

WhyJiffie@sh.itjust.works on 26 Mar 12:50 next collapse

I suppose the goal is a kind of poisoning the well, in case at any time in the future it becomes enforceable somehow.

tempest@lemmy.ca on 26 Mar 03:14 next collapse

Did you not pay your protection money to CloudFlare?

Tangent5280@lemmy.world on 28 Mar 15:42 collapse

Hey, could you say how you did that? I’m looking to put a few servers up and I’m worried about this too

mesamunefire@lemmy.world on 28 Mar 15:53 collapse

I used fail2ban plus the router to block the IP addresses. Then, if the headers come from OpenAI, they also get bounced.
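
A sketch of that kind of header bounce at the application layer, assuming Flask to match the template below; GPTBot is OpenAI's published crawler user agent, and the other names are illustrative. In practice you'd usually do this at the proxy or firewall instead.

from flask import Flask, request

app = Flask(__name__)

# Illustrative user-agent substrings; GPTBot is OpenAI's documented crawler UA
BLOCKED_UA_SUBSTRINGS = ('GPTBot', 'ClaudeBot', 'CCBot')

@app.before_request
def bounce_ai_crawlers():
    ua = request.headers.get('User-Agent', '')
    if any(bot in ua for bot in BLOCKED_UA_SUBSTRINGS):
        # Returning a response from before_request skips normal routing
        return 'Forbidden', 403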

Below is a template I created on the fly for an AI black hole. It's decent, but I feel like it could be better.

from flask import Flask, request
import time
from collections import defaultdict
import random

app = Flask(__name__)

# Data structure to keep track of request timestamps per IP
ip_requests = defaultdict(list)
IP_REQUEST_THRESHOLD = 1000  # Requests allowed per one-hour window
TIME_WINDOW = 3600  # Time window of one hour in seconds

# Track requests per IP, dropping timestamps older than the window
def track_requests(ip):
    current_time = time.time()
    ip_requests[ip] = [t for t in ip_requests[ip] if current_time - t < TIME_WINDOW]
    ip_requests[ip].append(current_time)
    return len(ip_requests[ip])

# Tarpit for over-threshold clients: the harder they hammer, the slower the
# page arrives. (The original post was truncated here; this body is a
# plausible reconstruction of the missing serve_slow_page.)
def serve_slow_page(request_count):
    delay = min((request_count - IP_REQUEST_THRESHOLD) * 0.1, 30.0)
    time.sleep(delay + random.random())  # jitter so the delay is less predictable
    return 'Loading...', 200

# Serve slow pages incrementally
@app.route('/')
def index():
    ip = request.remote_addr
    request_count = track_requests(ip)

    if request_count > IP_REQUEST_THRESHOLD:
        return serve_slow_page(request_count)
    else:
        return 'Welcome to the site!'

if __name__ == '__main__':
    app.run()

Tangent5280@lemmy.world on 29 Mar 01:54 collapse

Thanks, this will help.

CosmicCleric@lemmy.world on 26 Mar 02:56 next collapse

From the article …

GNOME sysadmin Bart Piotrowski shared on Mastodon that only about 3.2 percent of requests (2,690 out of 84,056) passed their challenge system, suggesting the vast majority of traffic was automated.

vk6flab@lemmy.radio on 26 Mar 03:01 next collapse

In my experience, the single biggest bullies on the internet are the servers controlled by Meta, which literally perform DDoS attacks whilst crawling, hitting sites several orders of magnitude harder than all the others combined.

Actively blocking them was the only option left.

alaphic@lemmy.world on 26 Mar 06:22 collapse

Jeez, don’t these fucksticks have enough data already? People are literally handing it to them hand over fist and they’re still like “no, we need to forcibly suck the data out of you until your servers burst into flames”

vk6flab@lemmy.radio on 26 Mar 07:13 next collapse

In my opinion, pretty much spot on.

prole@lemmy.blahaj.zone on 26 Mar 12:10 collapse

How else would they get complete profiles on people who don’t use their products?

henfredemars@infosec.pub on 26 Mar 03:02 next collapse

Even mainly text-mode sites like LWN are feeling the strain and finding it hard to support all these parasitic bots.

zonnewin@feddit.nl on 26 Mar 05:12 collapse

So don’t support them. Block them!

PlutoniumAcid@lemmy.world on 26 Mar 06:40 next collapse

Sure, but the challenge is how to block them without putting undue load on humans.

In the olden days, you’d just host a webserver and be done with it. Today you need elaborate setups to trick bots. It’s a losing proposition.
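
Those elaborate setups are often proof-of-work challenges, the approach behind the challenge system GNOME deployed: make every new client burn a little CPU before it gets content. A minimal sketch of the core idea, with all names and the difficulty value hypothetical:

import hashlib
import secrets

DIFFICULTY_BITS = 20  # leading zero bits required; tune so a browser solves it in ~1s

def make_challenge():
    # Server side: a random nonce the client must solve before getting content
    return secrets.token_hex(16)

def leading_zero_bits(digest):
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def verify(nonce, solution):
    digest = hashlib.sha256(f"{nonce}:{solution}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

def solve(nonce):
    # Client side (normally JavaScript in the browser): brute-force a solution
    counter = 0
    while not verify(nonce, str(counter)):
        counter += 1
    return str(counter)

The asymmetry is the point: a human pays the cost once per visit, while a crawler fetching tens of thousands of URLs pays it on every one.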

whyNotSquirrel@sh.itjust.works on 26 Mar 10:04 collapse

Is there a tool to avoid having to block them manually? Like an updated list of AI-shit IPs?
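
There are community-maintained blocklists of known AI-crawler user agents and IP ranges that can be pulled on a schedule; a rough sketch of consuming one, with the URL hypothetical (substitute whichever list you trust):

import urllib.request

# Hypothetical URL; point this at a community-maintained AI-crawler blocklist
BLOCKLIST_URL = 'https://example.com/ai-crawler-cidrs.txt'

def fetch_cidrs():
    with urllib.request.urlopen(BLOCKLIST_URL) as resp:
        lines = resp.read().decode().splitlines()
    return [ln.strip() for ln in lines if ln.strip() and not ln.startswith('#')]

# Emit one nftables drop rule per CIDR (assumes an existing inet/filter/input chain)
for cidr in fetch_cidrs():
    print(f'nft add rule inet filter input ip saddr {cidr} drop')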

Goun@lemmy.ml on 26 Mar 03:33 next collapse

What if we start throttling them so we make them waste time? Like, we could throttle consecutive requests, so anyone hitting the server aggressively gets slowed down.
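
A minimal sketch of that idea in plain Python (all thresholds made up): keep a sliding window of timestamps per IP and make each response slower the harder the client has been hitting you.

import time
from collections import defaultdict, deque

WINDOW = 60          # seconds of history kept per IP
FREE_REQUESTS = 10   # requests per window before throttling starts
DELAY_STEP = 0.5     # extra seconds of delay per request over the limit

history = defaultdict(deque)

def throttle(ip):
    now = time.time()
    q = history[ip]
    while q and now - q[0] > WINDOW:
        q.popleft()  # forget requests outside the window
    q.append(now)
    overage = len(q) - FREE_REQUESTS
    if overage > 0:
        time.sleep(overage * DELAY_STEP)  # aggressive clients wait longer each time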

tal@lemmy.today on 26 Mar 05:25 next collapse

They can just interleave requests to different hosts. Honestly, someone spidering the whole Web probably should be doing that regardless.

taladar@sh.itjust.works on 26 Mar 07:54 next collapse

The tricky bit is recognizing that the requests are all from the same source. Often they use different IP addresses and to even classify requests at all you have to keep extra state around that you wouldn’t need without this anti-social behavior.
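
One common form of that extra state is counting per network prefix rather than per address, so a crawler spread across a /24 still shows up as one source; a rough sketch (the prefix sizes are a judgment call):

import ipaddress
from collections import Counter

prefix_counts = Counter()

def record(ip_str):
    ip = ipaddress.ip_address(ip_str)
    # Collapse addresses into prefixes: /24 for IPv4, /48 for IPv6
    bits = 24 if ip.version == 4 else 48
    prefix = ipaddress.ip_network(f'{ip_str}/{bits}', strict=False)
    prefix_counts[prefix] += 1
    return prefix_counts[prefix]

This still fails against crawlers routed through residential proxies on unrelated networks, which is exactly the anti-social part.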

tomyhaw@lemmy.world on 26 Mar 05:08 next collapse

I put a rate limit on my nginx docker container. No clue if it worked, but my customers are able to use the website now. I get a ton of automated probing and SQL injection requests. Pretty horrible considering I built my app for very minimal traffic and use session data in places rather than pulling from the DB, so the DDoS attacks basically corrupt sessions.
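
For reference, nginx's built-in rate limiting is two directives; the zone goes in the http block, the limit in a server or location block, and the numbers here are illustrative:

# http {} block: a 10 MB shared zone keyed by client IP, 5 requests/second
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

# server {} or location {} block: absorb short bursts, reject the rest with 429
limit_req zone=perip burst=20 nodelay;
limit_req_status 429;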

tempest@lemmy.ca on 26 Mar 18:07 collapse

The Internet has always been like that, even before the AI stuff picked up steam. If you expose anything to the public Internet, it takes about 5s for things to start port scanning and, if they can, trying WordPress/Drupal exploits.


yuki2501@lemmy.world on 26 Mar 23:03 collapse

It’s the old spam problem again. Spammers pass the cost of reaching their customers on to their victims, while AI bots pass the cost of their crawling on to the sites they crawl (without authorization).

I see no easy solution for this.