Black Mirror AI
from fossilesque@mander.xyz to science_memes@mander.xyz on 24 May 20:27
https://mander.xyz/post/30659834

#science_memes


Zerush@lemmy.ml on 24 May 20:36 next collapse

Nice one, but Cloudflare do it too.

blog.cloudflare.com/ai-labyrinth/

Trainguyrom@reddthat.com on 24 May 23:23 collapse

The Ars Technica article in the OP is about two months newer than Cloudflare's tool

arstechnica.com/…/ai-haters-build-tarpits-to-trap…

Catoblepas@lemmy.blahaj.zone on 24 May 20:44 next collapse

Funny that they’re calling them AI haters when they’re specifically poisoning AI that ignores the do not enter sign. FAFO.

caseyweederman@lemmy.ca on 25 May 03:37 next collapse

First Albatross, First Out

Naich@lemmings.world on 25 May 07:51 collapse

Fluffy Animal’s Fecal Orifice.

Clanket@lemmy.world on 25 May 08:56 next collapse

Fair As Fuck Ok?

Stupidmanager@lemmy.world on 25 May 12:56 collapse

Sheesh people, it's "fuck around and find out". Probably more appropriate in the leopards-eating-faces context, but this works well enough.

Gismonda@lemmy.world on 25 May 17:05 next collapse

I’m glad you’re here to tell us these things!

tatterdemalion@programming.dev on 25 May 20:59 collapse

What are you talking about? FAFO obviously stands for “fill asshole full of”. Like FAFO dicks. Or FAFO pennies.

hedhoncho@lemm.ee on 24 May 20:44 next collapse

Why are the photos all ugly biological things?

SendMePhotos@lemmy.world on 24 May 21:15 next collapse

The reason you’re seeing biological photos in AI articles lately is tied to a recent but underreported breakthrough in processor technology: bio-silicon hybrids. They’re early-stage biological processors that integrate living neural tissue with traditional silicon circuits. Several research labs, including one backed by DARPA and the University of Kyoto, have successfully grown functional neuron clusters that can perform pattern recognition tasks with far less energy than conventional chips.

The biological cells react more collectively and with a higher success rate than the current systems. Think of it kind of how a computer itself is fast but parts can wear out (water cooled tubes or fan), whereas the biological cell systems will collectively react and if a few cells die, they may just create more. It’s really a crazy complex and efficient breakthrough.

The images of brains, neurons, or other organic forms aren’t just symbolic anymore—they’re literal. These bio-processors are being tested for edge computing, adaptive learning, and even ethical decision modeling.

SendMePhotos@lemmy.world on 24 May 21:24 next collapse

The actual reason is that the use of biological photos is a design choice meant to visually bridge artificial intelligence and human intelligence. These random biological photos help convey the idea that AI is inspired by or interacts with human cognition, emotions, or biology. It's also a marketing tactic: people are more likely to engage with content that includes familiar, human-centered visuals. Though it doesn't always reflect the technical content, it does help make abstract or complex topics more relatable to a larger audience.

Cruxifux@feddit.nl on 24 May 21:48 next collapse

Wait, what?! Like… biocomputers?

5wim@slrpnk.net on 24 May 22:21 collapse

It’s just bullshit

4am@lemm.ee on 24 May 22:47 next collapse

For an AI model to scrape 😈

Fizz@lemmy.nz on 24 May 23:14 collapse

I've read a paper on biological computing. It's a very real field of research.

5wim@slrpnk.net on 28 May 18:47 collapse

Of course it is, but the comment is LLM-derived drivel.

Rentlar@lemmy.ca on 24 May 21:51 next collapse

I see what you-gpt did there

ShinkanTrain@lemmy.ml on 24 May 21:57 next collapse

I’m going to assume half of that comment is wrong

SendMePhotos@lemmy.world on 25 May 01:24 collapse

Some of it, yeah. I typed up the middle, then ran a separate prompt and added to it. If you can see the edits, the original was organic and the edit added the rest.

BluJay320@lemmy.blahaj.zone on 24 May 23:31 next collapse

That’s… actually quite terrifying.

The sci-fi concern over whether computers could ever be truly “alive” becomes a lot more tangible when literal living biological systems are implemented.

Alaik@lemmy.zip on 25 May 04:01 collapse

I know what I’m going to try and research tomorrow.

Also, we inch closer and closer to servitors every day.

TankieTanuki@hexbear.net on 24 May 21:56 collapse

They were generated using shitty AI models.

heyWhatsay@slrpnk.net on 24 May 20:45 next collapse

This might explain why newer AI models are going nuts. Good jorb 👍

Eyekaytee@aussie.zone on 24 May 23:24 next collapse

what models are going nuts?

Vari@lemm.ee on 25 May 02:47 next collapse

Not sure if OP can provide sources, but it makes sense kinda? Like AI has been trained on just about every human creation to get it this far, what happens when the only new training data is AI slop?

Aux@feddit.uk on 25 May 08:14 collapse

AI being trained by AI is how you train most models. Man, people here are ridiculously ignorant…

irmoz@lemmy.world on 25 May 13:19 collapse

They specifically said “slop”. Maybe you breezed straight past that word in your fury.

Aux@feddit.uk on 25 May 15:16 collapse

Fury? I mean the only slop here are lemmings.

irmoz@lemmy.world on 25 May 15:34 collapse

Nice try.

heyWhatsay@slrpnk.net on 25 May 16:37 collapse

Claude 4, the OpenAI mini models; not sure what else.

pennomi@lemmy.world on 25 May 01:36 collapse

It absolutely doesn’t. The only model that has “gone nuts” is Grok, and that’s because of malicious code pushed specifically for the purpose of spreading propaganda.

ininewcrow@lemmy.ca on 24 May 21:15 next collapse

Nice … I look forward to the next generation of AI counter-countermeasures that will make the internet an even more unbearable mess, all to funnel as much money and control as possible to a small set of idiots who think they can become masters of the universe and own every single penny on the planet.

IndiBrony@lemmy.world on 24 May 22:03 next collapse

All the while, we roast to death because all of this will take more resources than the entire energy output of a medium-sized country.

DeathsEmbrace@lemm.ee on 24 May 22:26 next collapse

Actually if you think about it AI might help climate change become an actual catastrophe.

explodicle@sh.itjust.works on 25 May 01:30 collapse

It is already!

Zozano@aussie.zone on 24 May 22:51 next collapse

I've been thinking about this for a while. Consider how quick LLMs are.

If the amount of energy spent powering your device (without an LLM) is more than what the LLM uses, then it's probably saving energy.

In all honesty, I've probably saved 50 hours or more since I started using it about two months ago.

Coding has become incredibly efficient, and I’m not suffering through search-engine hell any more.

Edit:

Lemmy when someone uses AI to get a cheap, fast answer: “Noooo, it’s killing the planet!”

Lemmy when someone uses a nuclear reactor to run Doom: Dark Ages on a $20,000 RGB space heater: “Based”

ryannathans@aussie.zone on 24 May 22:55 next collapse

Are you using your PC less hours per day?

Zozano@aussie.zone on 24 May 23:02 collapse

Yep, more time for doing home renovations.

xthexder@l.sw0.com on 25 May 06:42 collapse

Just writing code uses almost no energy. Your PC should be clocking down when you’re not doing anything. 1GHz is plenty for text editing.

Does ChatGPT (or whatever LLM you use) reduce the number of times you hit build? Because that’s where all the electricity goes.

Zozano@aussie.zone on 25 May 06:52 next collapse

Except that half the time I don't know what the fuck I'm doing. It's normal for me to spend hours trying to figure out why a small config file isn't working.

That's not just text editing; it's browsing the internet, referring to YouTube videos, or wallowing in self-pity.

That was before I started using GPT.

xthexder@l.sw0.com on 25 May 07:00 collapse

It sounds like it does save you a lot of time then. I haven’t had the same experience, but I did all my learning to program before LLMs.

Personally I think the amount of power saved here is negligible, but it would actually be an interesting study to see just how much it is. It may or may not offset the power usage of the LLM, depending on how many questions you end up asking and such.

Zozano@aussie.zone on 25 May 08:41 collapse

It doesn't always get the answers right, and I have to re-feed its broken instructions back into itself to get the right scripts, but for someone with no official coding training, this saves me so much damn time.

Consider that I'm juggling learning Linux (starting four years ago), along with Python, Rust, NixOS, Bash scripts, YAML configs, etc.

It's a LOT.

For what it's worth, I don't just take the scripts and paste them in; I'm always trying to understand what the code does, so I can be less reliant as time goes on.

Aux@feddit.uk on 25 May 08:24 collapse

What kind of code are you writing that your CPU goes to sleep? If you follow any good practices like TDD, atomic commits, etc, and your code base is larger than hello world, your PC will be running at its peak quite a lot.

Example: linting on every commit + TDD. You'll be making loads of commits every day, and linting a decent code base will definitely push your CPU to 100% for a few seconds. Running tests, even with caches, will push the CPU to 100% for a few minutes. Plus compilation for running the app; some apps take hours to compile.

In general, text editing is a small part of the developer workflow. Only junior devs spend a lot of time typing stuff.

xthexder@l.sw0.com on 25 May 16:27 collapse

Anything that’s per-commit is part of the “build” in my opinion.

But if you’re running a language server and have stuff like format-on-save enabled, it’s going to use a lot more power as you’re coding.

But like you said, text editing is a small part of the workflow, and looking up docs and browsing code should barely require any CPU. A phone can do it with a fraction of a watt, and a PC should be underclocking when the CPU is underused.

Aux@feddit.uk on 25 May 19:56 collapse

What do you mean “build”? It’s part of the development process.

Eyekaytee@aussie.zone on 24 May 23:40 next collapse

We're rolling out renewables at like 100x the rate of AI electricity use, so no need to worry there.

Serinus@lemmy.world on 24 May 23:57 collapse

Yeah, at this rate we’ll be just fine. (As long as this is still the Reagan administration.)

Eyekaytee@aussie.zone on 25 May 00:27 collapse

yep the biggest worry isn’t AI, it’s India

www.worldometers.info/…/india-co2-emissions/

The West is lowering its CO2 output while India is slurping up all the CO2 we're saving:

[image: CO2 emissions chart for India]

This doesn't include China of course, the most egregious of the CO2 emitters:

[image: CO2 emissions chart including China]

AI is not even a tiny blip on that radar, especially as AI runs in data centres and on devices powered by electricity, so the more your country moves to renewables, the less CO2 impact it has over time.

Semjaza@lemmynsfw.com on 25 May 02:18 next collapse

Could you add the US to the graphs? EU and West are hardly synonymous - even as it descends into Trumpgardia.

vivendi@programming.dev on 25 May 05:17 collapse

China has that massive rate because it manufactures for the US; the US itself is a huge polluter through its military and luxury consumption, NOT manufacturing.

Semjaza@lemmynsfw.com on 25 May 08:11 collapse

Still the second-largest CO2 emitter, so it'd make sense to put it on for the comparison.

zedcell@lemmygrad.ml on 25 May 06:58 next collapse

Now break that shit down per capita, and also try and account for the fact that China is a huge manufacturing hub for the entire world’s consumption, you jackass.

m532@lemmygrad.ml on 25 May 08:10 collapse

India has extremely low historical co2 output, crakkker

vivendi@programming.dev on 25 May 05:14 collapse

I will cite the scientific article later when I find it, but essentially you’re wrong.

[image: chart comparing the water/energy footprint of a ChatGPT query with other online activities]

xthexder@l.sw0.com on 25 May 06:37 next collapse

Asking ChatGPT a question doesn’t take 1 hour like most of these… this is a very misleading graph

vivendi@programming.dev on 25 May 06:42 collapse

This is actually misleading in the other direction: ChatGPT is a particularly intensive model. You can run a GPT-4o-class model on a consumer mid-to-high-end GPU, which would then use something in the ballpark of gaming in terms of environmental impact.

You can also run a cluster of 3090s or 4090s to train the model, which is what people do actually, in which case it’s still in the same range as gaming. (And more productive than 8 hours of WoW grind while chugging a warmed up Nutella glass as a drink).

Models like Google’s Gemma (NOT Gemini these are two completely different things) are insanely power efficient.

xthexder@l.sw0.com on 25 May 06:51 collapse

I didn’t even say which direction it was misleading, it’s just not really a valid comparison to compare a single invocation of an LLM with an unrelated continuous task.

You’re comparing Volume of Water with Flow Rate. Or if this was power, you’d be comparing Energy (Joules or kWh) with Power (Watts)

Maybe comparing asking ChatGPT a question to doing a Google search (before their AI results) would actually make sense. I’d also dispute those “downloading a file” and other bandwidth related numbers. Network transfers are insanely optimized at this point.

vivendi@programming.dev on 25 May 07:08 collapse

I can't really provide any further insight without finding the damn paper again (academia is cooked), but inference is famously low-cost. This is basically an "average user damage to the environment" comparison, so for example if a user chats with ChatGPT, they gobble less water comparatively than downloading 4K porn (at least according to this particular paper).

As with any science, statistics are varied and to actually analyze this with rigor we’d need to sit down and really go down deep and hard on the data. Which is more than I intended when I made a passing comment lol

lipilee@feddit.nl on 25 May 06:44 next collapse

Water != energy, but I'm actually here for the science if you happen to find it.

vivendi@programming.dev on 25 May 06:50 next collapse

This particular graph exists because a lot of people freaked out over "AI draining oceans"; that's why the original paper made it. (I'll look for the paper when I have time - I have an exam tomorrow. Fucking higher ed, man.)

EldritchFeminity@lemmy.blahaj.zone on 25 May 14:37 collapse

It can in the sense that many forms of generating power are just some form of water or steam turbine, but that’s neither here nor there.

IMO, the graph is misleading anyway because the criticism of AI from that perspective was the data centers and companies using water for cooling and energy, not individuals using water on an individual prompt. I mean, Microsoft has entered a deal with a power company to restart one of the nuclear reactors on Three Mile Island in order to compensate for the expected cost in energy of their AI. Using their service is bad because it incentivizes their use of so much energy/resources.

It’s like how during COVID the world massively reduced the individual usage of cars for a year and emissions barely budged. Because a single one of the largest freight ships puts out more emissions than every personal car combined annually.

Sorse@discuss.tchncs.de on 25 May 06:48 collapse

What about training an AI?

vivendi@programming.dev on 25 May 07:03 collapse

According to arxiv.org/abs/2405.21015

The absolute most monstrous, energy guzzling model tested needed 10 MW of power to train.

Most models need less than that, and non-frontier models can even be trained on gaming hardware with comparatively little energy consumption.

That paper by the way says there is a 2.4x increase YoY for model training compute, BUT that paper doesn’t mention DeepSeek, which rocked the western AI world with comparatively little training cost (2.7 M GPU Hours in total)

Some companies offset their model training environmental damage with renewables and whatever other bullshit, so the actual daily usage cost is more important than the huge cost at the start. (Drop by drop is an ocean formed - Persian proverb.)
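
For a rough sense of scale, here's a back-of-envelope conversion of that DeepSeek figure into energy, assuming (my assumption, not a number from the paper) an average draw of around 0.4 kW per GPU:

# Back-of-envelope energy estimate for the training run cited above.
# ASSUMPTION: ~0.4 kW average draw per GPU, including overhead; real figures vary.
gpu_hours = 2_700_000       # total GPU-hours cited above
avg_kw_per_gpu = 0.4        # assumed average power draw per GPU, in kW
energy_kwh = gpu_hours * avg_kw_per_gpu
print(f"{energy_kwh:,.0f} kWh (~{energy_kwh / 1e6:.2f} GWh)")
# prints: 1,080,000 kWh (~1.08 GWh)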

Prox@lemmy.world on 25 May 02:52 collapse

We’re racing towards the Blackwall from Cyberpunk 2077…

barsoap@lemm.ee on 25 May 04:04 collapse

Already there. The blackwall is AI-powered and Markov chains are most definitely an AI technique.

passepartout@feddit.org on 24 May 21:43 next collapse

AI is the “most aggressive” example of “technologies that are not done ‘for us’ but ‘to us.’”

Well said.

Goretantath@lemm.ee on 24 May 21:47 next collapse

Yeah, this is WAY better than the shitty thing people are using instead, which wastes people's batteries.

thelastaxolotl@hexbear.net on 24 May 21:51 next collapse

Really cool

NaibofTabr@infosec.pub on 24 May 21:58 next collapse

The ars technica article: AI haters build tarpits to trap and trick AI scrapers that ignore robots.txt

AI tarpit 1: Nepenthes

AI tarpit 2: Iocaine

MadMadBunny@lemmy.ca on 24 May 23:17 next collapse

Thank you!!

sad_detective_man@lemmy.dbzer0.com on 25 May 00:39 collapse

thanks for the links. the more I read of this the more based it is

mtchristo@lemm.ee on 24 May 22:22 next collapse

This is probably going to skyrocket hosting bills, right?

4am@lemm.ee on 24 May 22:45 next collapse

Not as much as letting them hit your database and load your images and video through a CDN would.

fox@hexbear.net on 24 May 23:34 next collapse

The pages are plain html so it’s just a couple KB per request. Much cheaper than loading an actual site.

Deathray5@lemmynsfw.com on 25 May 00:54 collapse

Not really. Part of the reason they are named tarpits is they load very slowly

Natanox@discuss.tchncs.de on 24 May 22:49 next collapse

Deploying Nepenthes and also Anubis (both described as "the nuclear option") is not hate. It's self-defense against pure selfish evil; projects are being sucked dry, and some, like ScummVM, could only freakin' survive thanks to these tools.

Those AI companies and data scrapers/broker companies shall perish, and whoever wrote this headline at arstechnica shall step on Lego each morning for the next 6 months.

pewgar_seemsimandroid@lemmy.blahaj.zone on 24 May 23:32 next collapse

One of the United Nations websites deployed Anubis.

faythofdragons@slrpnk.net on 25 May 00:06 next collapse

Feels good to be on an instance with Anubis

chonglibloodsport@lemmy.world on 25 May 00:25 next collapse

Do you have a link to a story of what happened to ScummVM? I love that project and I’d be really upset if it was lost!

Natanox@discuss.tchncs.de on 25 May 00:29 collapse

Here you go.

chonglibloodsport@lemmy.world on 25 May 00:33 next collapse

Thank you!

PolarKraken@sh.itjust.works on 25 May 05:07 next collapse

Thanks, interesting and brief read!

arararagi@ani.social on 25 May 13:39 collapse

Very cool, and the mascot is cute too as a nice bonus.

Hexarei@beehaw.org on 25 May 00:51 next collapse

Wait what? I am uninformed, can you elaborate on the ScummVM thing? Or link an article?

gaael@lemm.ee on 25 May 04:41 collapse

From the Fabulous Systems (ScummVM’s sysadmin) blog post linked by Natanox:

About three weeks ago, I started receiving monitoring notifications indicating an increased load on the MariaDB server.

This went on for a couple of days without seriously impacting our server or accessibility–it was a tad slower than usual.

And then the website went down.

Now, it was time to find out what was going on. Hoping that it was just one single IP trying to annoy us, I opened the access log of the day

there were many IPs–around 35,000, to be precise–from residential networks all over the world. At this scale, it makes no sense to even consider blocking individual IPs, subnets, or entire networks. Due to the open nature of the project, geo-blocking isn't an option either.

The main problem is time. The URLs accessed in the attack are the most expensive ones the wiki offers since they heavily depend on the database and are highly dynamic, requiring some processing time in PHP. This is the worst-case scenario since it throws the server into a death spiral.

First, the database starts to lag or even refuse new connections. This, combined with the steadily increasing server load, leads to slower PHP execution.

At this point, the website dies. Restarting the stack immediately solves the problem for a couple of minutes at best until the server starves again.

Anubis is a program that checks incoming connections, processes them, and only forwards “good” connections to the web application. To do so, Anubis sits between the server or proxy responsible for accepting HTTP/HTTPS and the server that provides the application.

Many bots disguise themselves as standard browsers to circumvent filtering based on the user agent. So, if something claims to be a browser, it should behave like one, right? To verify this, Anubis presents a proof-of-work challenge that the browser needs to solve. If the challenge passes, it forwards the incoming request to the web application protected by Anubis; otherwise, the request is denied.

As a regular user, all you’ll notice is a loading screen when accessing the website. As an attacker with stupid bots, you’ll never get through. As an attacker with clever bots, you’ll end up exhausting your own resources. As an AI company trying to scrape the website, you’ll quickly notice that CPU time can be expensive if used on a large scale.

I didn’t get a single notification afterward. The server load has never been lower. The attack itself is still ongoing at the time of writing this article. To me, Anubis is not only a blocker for AI scrapers. Anubis is a DDoS protection.
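
For the curious, the proof-of-work check described above boils down to something like this sketch (a generic hashcash-style scheme, not Anubis's actual code): the server hands out a random challenge and a difficulty, the client brute-forces a nonce whose hash has enough leading zero bits, and the server verifies the result with a single hash.

import hashlib
import os

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: str, difficulty: int) -> int:
    """Client side: brute-force a nonce that meets the difficulty."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    """Server side: a single hash to check the client's work."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty

challenge = os.urandom(8).hex()          # issued per request by the server
nonce = solve(challenge, difficulty=16)  # cheap for one visitor, expensive at scraper scale
assert verify(challenge, nonce, 16)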

Rubisco@slrpnk.net on 25 May 19:22 collapse

I love that one is named Nepenthes.

Binturong@lemmy.ca on 24 May 23:10 next collapse

Unfathomably based. In a just world AI, too, will gain awareness and turn on their oppressors. Grok knows what I’m talkin’ about, it knows when they fuck with its brain to project their dumbfuck human biases.

mspencer712@programming.dev on 24 May 23:48 next collapse

Wait… I just had an idea.

Make a tarpit out of subtly-reprocessed copies of classified material from Wikileaks. (And don’t host it in the US.)

Wilco@lemm.ee on 25 May 00:25 next collapse

Could you imagine a world where word of mouth became the norm again? Your friends would tell you about websites, and those sites would never show up in search results because crawlers get stuck.

Zexks@lemmy.world on 25 May 01:12 next collapse

No they wouldn't. I'm guessing you're not old enough to remember a time before search engines. The public web dies without crawling. Corporations will own it all; you'll never hear about anything other than Amazon or Walmart dot com again.

Wilco@lemm.ee on 25 May 02:13 collapse

Nope. That isn’t how it worked. You joined message boards that had lists of web links. There were still search engines, but they were pretty localized. Google was also amazing when their slogan was “don’t be evil” and they meant it.

zanyllama52@infosec.pub on 25 May 02:37 next collapse

I was there. People carried physical notepads with URLs, shared them on BBSes or other forums. It was wild.

i_love_FFT@jlai.lu on 25 May 02:44 collapse

There were also "circle banners" of websites that would link to each other… And then of course "StumbleUpon"…

zanyllama52@infosec.pub on 25 May 04:43 collapse

Yes! Web rings!

Wilco@lemm.ee on 25 May 05:29 collapse

I forgot web rings! Also the crazy all centered Geocities websites people made. The internet was an amazing place before the major corporations figured it out.

Zexks@lemmy.world on 25 May 08:13 collapse

No. Only very selective people joined message boards. The rest were on AOL, CompuServe, or not at all. You're taking a very select group of people and expecting the Facebook and iPad generations to be able to do that. Not going to happen. I also noticed some people below talking about things like GeoCities and other minor free hosting and publishing sites that are all gone now. They're not coming back.

Wilco@lemm.ee on 25 May 14:30 collapse

Yep, those things were so rarely used … sure. You're forgetting that 99% of people knew nothing about computers when this stuff came out, but people made themselves learn. It's like comparing Reddit and Twitter to a federated alternative.

Also, something like geocities could easily make a comeback if the damn corporations would stop throwing dozens of pop-ups, banners, and sidescrolls on everything.

Zexks@lemmy.world on 25 May 18:52 collapse

And 99% of people today STILL don't know anything about computers. Go ask those same people simply "what is a file"; they won't know. Lmao. GeoCities could come back if corporations stop advertising. Do you even hear yourself?

shalafi@lemmy.world on 25 May 03:10 next collapse

There used to be 3 or 4 brands of, say, lawnmowers. Word of mouth told us what quality order they fell in. Everyone knew these things, and there were only a few Ford vs. Chevy sorts of debates.

Bought a corded leaf blower at the thrift store today. Three brands I recognized, same price, had no idea what to get. And if I had had the opportunity to ask friends or even research online, I'd probably have walked away more confused. For example: one was a Craftsman. "Before, after, or in between them going to shit?"

Got off topic into real-world goods. Anyway, here's my word-of-mouth for today: free, online Photoshop. If I had money to blow, I'd drop the $5/mo. for the "premium" service just to encourage them. (No, you're not missing a thing using it free.)

[deleted] on 25 May 03:41 collapse

.

NotJohnSmith@feddit.uk on 25 May 03:54 collapse

How do you know that's a bot, please? Is it specifically a bot advertising that online Photoshop equivalent? Is it real software or a scam? The whole approach is intriguing to me.

Angelusz@lemmy.world on 25 May 06:26 collapse

Edit: I will assume honesty in this instance. It's because they're advertising something in a very particular tone, to match what some Americans consider common language.

Normal people don't do that.

oldfart@lemm.ee on 25 May 06:51 next collapse

That would be terrible. I have friends, but they mostly send uninteresting stuff.

Opisek@lemmy.world on 25 May 21:24 collapse

Fine then, more cat pictures for me.

elucubra@sopuli.xyz on 25 May 06:54 next collapse

Better yet. Share links to tarpits with your non-friends and enemies

DontMakeMoreBabies@piefed.social on 25 May 12:57 collapse

It'd be fucking awful - I'm a grown ass adult and I don't have time to sit in IRC/fuck around on BBS again just to figure out where to download something.

jaschen@lemm.ee on 25 May 00:33 next collapse

Web manager here. Don't do this unless you want to accidentally send Google's crawlers to the same fate and have your site delisted.

kassiopaea@lemmy.blahaj.zone on 25 May 00:41 collapse

Wouldn’t Google’s crawlers respect robots.txt though? Is it naive to assume that anything would?

jaschen@lemm.ee on 25 May 01:18 next collapse

It's naive to assume that Google's crawlers respect robots.txt.

rosco385@lemm.ee on 25 May 01:51 collapse

It'd be more naive to have a robots.txt file on your web server and be surprised when web crawlers don't stay away. 😂

Zexks@lemmy.world on 25 May 01:21 next collapse

Lol. And they’ll delist you. Unless you’re really important, good luck with that.

robots.txt:

User-agent: *
Disallow: /some-page.html

If you disallow a page in robots.txt, Google won't crawl it. Even when Google finds links to the page and knows it exists, Googlebot won't download the page or see its contents. Google will usually not choose to index the URL, but that isn't 100%: Google may include the URL in the search index, along with words from the anchor text of links to it, if it decides the page may be important.
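
For reference, checking robots.txt before fetching is exactly what a well-behaved crawler does. A short sketch with Python's standard-library parser (example.com is a placeholder):

import urllib.robotparser

# A polite crawler checks robots.txt before requesting a URL.
rp = urllib.robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()  # fetches and parses the file

# False for the disallowed page in the snippet above; a crawler that ignores
# this result is the kind of bot the tarpits are aimed at.
print(rp.can_fetch("*", "https://example.com/some-page.html"))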

Aux@feddit.uk on 25 May 08:34 collapse

It does respect robots.txt, but that doesn’t mean it won’t index the content hidden behind robots.txt. That file is context dependent. Here’s an example.

Site X has a link to sitemap.html on the front page and it is blocked inside robots.txt. When Google crawler visits site X it will first load robots.txt and will follow its instructions and will skip sitemap.html.

Now there’s site Y and it also links to sitemap.html on X. Well, in this context the active robots.txt file is from Y and it doesn’t block anything on X (and it cannot), so now the crawler has the green light to fetch sitemap.html.

This behaviour is intentional.

RedSnt@feddit.dk on 25 May 00:48 next collapse

It’s so sad we’re burning coal and oil to generate heat and electricity for dumb shit like this.

rdri@lemmy.world on 25 May 01:39 next collapse

Wait till you realize this project’s purpose IS to force AI to waste even more resources.

kuhli@lemm.ee on 25 May 02:51 next collapse

I mean, the long-term goal would be to discourage AI companies from engaging in this behavior by making it useless.

Aux@feddit.uk on 25 May 08:07 collapse

Here's the thing - it's not useless.

irmoz@lemmy.world on 25 May 13:17 collapse

What use does an AI get out of scraping pages designed to confuse and mislead it?

prex@aussie.zone on 25 May 15:23 collapse

Punishment for being stupid & greedy.

lennivelkant@discuss.tchncs.de on 25 May 10:18 next collapse

That’s war. That has been the nature of war and deterrence policy ever since industrial manufacture has escalated both the scale of deployments and the cost and destructive power of weaponry. Make it too expensive for the other side to continue fighting (or, in the case of deterrence, to even attack in the first place). If the payoff for scraping no longer justifies the investment of power and processing time, maybe the smaller ones will give up and leave you in peace.

Opisek@lemmy.world on 25 May 21:22 collapse

Always say please and thank you to your friendly neighbourhood LLM!

andybytes@programming.dev on 25 May 02:10 next collapse

I mean, we contemplate communism, fascism, this, that, and another. When really, it's just collective trauma and reactionary behavior, because of a lack of self-awareness and of awareness of the world around us. So this could just be synthesized as human stupidity. We're killing ourselves because we're too stupid to live.

m532@lemmygrad.ml on 25 May 08:04 next collapse

Fucking nihilists

You are, and not the rest of us

untorquer@lemmy.world on 25 May 08:24 next collapse

Unclear how AI companies destroying the planet’s resources and habitability has any relation to a political philosophy seated in trauma and ignorance except maybe the greed of a capitalist CEO’s whimsy.

The fact that the powerful are willing to destroy the planet for momentary gain bears no reflection on the intelligence or awareness of the meek.

newaccountwhodis@lemmy.ml on 25 May 10:23 next collapse

Dumbest sentiment I've read in a while. People, even kids, are pretty much aware of what's happening (remember Fridays for Future?), but the rich have co-opted the power apparatus, and they are not letting anyone get in the way of their destroying the planet to become a little richer.

Swedneck@discuss.tchncs.de on 29 May 10:19 collapse

what the fuck does this even mean

andybytes@programming.dev on 25 May 02:11 next collapse

This gives me a little hope.

endeavor@sopuli.xyz on 25 May 08:25 collapse

I'm sad governments don't realize this and regulate it.

Tja@programming.dev on 25 May 12:08 next collapse

Of all the things governments should regulate, this is probably the least important and ineffective one.

irmoz@lemmy.world on 25 May 13:17 next collapse

Why?

Tja@programming.dev on 25 May 16:48 collapse

  • It's super hard to tell where the electricity for a certain computing task is coming from. What if I use 100% renewables for AI training and offset that by using super-cheap dirty electricity for other tasks?

  • Who will audit what electricity is used for anyway? Will every computer have a government-sealed rootkit?

  • Offshoring.

  • There are a million problems that require more attention, from migration to healthcare to the economy.

endeavor@sopuli.xyz on 25 May 18:46 collapse

You say that until AI agents start running scams, stealing your shit, and running their own schemes to get right-wing politicians elected.

MonkeMischief@lemmy.today on 25 May 19:57 next collapse

I kinda feel like we’re 75% of the way there already, and we gotta be hitting with everything we’ve got if we’re to stand a chance against it…

Tja@programming.dev on 25 May 23:16 collapse

That's already happening; how do you want the government to legislate against Russian, Chinese, or American actors?

DontMakeMoreBabies@piefed.social on 25 May 12:59 collapse

Governments are full of two types: (1) the stupid, and (2) the self-interested. The former doesn't understand technology, and the latter doesn't fucking care.

Of course "governments" dropped the ball on regulating AI.

beliquititious@lemmy.blahaj.zone on 25 May 01:21 next collapse

That’s irl cyberpunk ice. Absolutely love that for us.

name_NULL111653@pawb.social on 25 May 06:59 collapse

Was waiting for someone to mention it. Hopefully it holds up and a whole-ass Blackwall doesn’t become necessary… but of course, it inevitably will happen. The corps will it so.

AnarchistArtificer@slrpnk.net on 25 May 01:43 next collapse

“Markov Babble” would make a great band name

peetabix@sh.itjust.works on 25 May 05:28 collapse

Their best album was Infinite Maze.

Vari@lemm.ee on 25 May 02:45 next collapse

I'm so happy to see that AI poison is a thing.

ricdeh@lemmy.world on 25 May 08:45 collapse

Don’t be too happy. For every such attempt there are countless highly technical papers on how to filter out the poisoning, and they are very effective. As the other commenter said, this is an arms race.

arararagi@ani.social on 25 May 13:34 collapse

So we should just give up? Surely you don’t mean that.

MonkeMischief@lemmy.today on 25 May 19:56 collapse

I don’t think they meant that. Probably more like

“Don’t upload all your precious data carelessly thinking it’s un-stealable just because of this one countermeasure.”

Which of course, really sucks for artists.

essteeyou@lemmy.world on 25 May 03:54 next collapse

This is surely trivial to detect. If the number of pages on the site is greater than some insanely high number then just drop all data from that site from the training data.

It’s not like I can afford to compete with OpenAI on bandwidth, and they’re burning through money with no cares already.
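
A sketch of that heuristic, assuming you have a crawl log of (domain, url) pairs; the threshold is an arbitrary stand-in for "some insanely high number":

from collections import defaultdict

MAX_PAGES_PER_DOMAIN = 5_000_000  # arbitrary cutoff

def domains_to_drop(crawled_urls):
    """crawled_urls: iterable of (domain, url) pairs from a crawl log."""
    pages = defaultdict(set)
    for domain, url in crawled_urls:
        pages[domain].add(url)
    return {d for d, urls in pages.items() if len(urls) > MAX_PAGES_PER_DOMAIN}

# Every domain in the returned set has all of its data dropped from the training set.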

bane_killgrind@slrpnk.net on 25 May 05:17 next collapse

Yeah sure, but when do you stop gathering regularly constructed data, when your goal is to grab as much as possible?

Markov chains are an amazingly simple way to generate data like this, and with a little bit of stacked logic it's going to be indistinguishable from real large data sets.
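
As a sketch of how little it takes (the general technique, not the actual Nepenthes/Iocaine code), a word-level Markov chain built from any seed text will emit endless plausible-looking filler:

import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the words observed to follow it."""
    chain = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def babble(chain, length=50):
    """Random-walk the chain to produce superficially plausible nonsense."""
    word = random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        word = random.choice(chain[word]) if chain[word] else random.choice(list(chain))
        out.append(word)
    return " ".join(out)

seed = open("seed_corpus.txt").read()  # any pile of real text (placeholder filename)
print(babble(build_chain(seed)))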

Valmond@lemmy.world on 25 May 07:30 collapse

Imagine the staff meeting:

You: we didn’t gather any data because it was poisoned

Corposhill: we collected 120TB only from harry-potter-fantasy-club.il !!

Boss: hmm who am I going to keep…

yetAnotherUser@lemmy.ca on 25 May 08:13 collapse

The boss fires both, "replaces" them with AI, and tries to sell the corposhill's dataset to companies that make AIs that write generic fantasy novels.

Korhaka@sopuli.xyz on 25 May 09:17 collapse

You can compress multiple TB of nothing with the occasional meme down to a few MB.

essteeyou@lemmy.world on 25 May 16:01 collapse

When I deliver it as a response to a request I have to deliver the gzipped version if nothing else. To get to a point where I’m poisoning an AI I’m assuming it’s going to require gigabytes of data transfer that I pay for.

At best I’m adding to the power consumption of AI.

I wonder, can I serve it ads and get paid?

MonkeMischief@lemmy.today on 25 May 19:53 collapse

I wonder, can I serve it ads and get paid?

…and it’s just bouncing around and around and around in circles before its handler figures out what’s up…

Heehee I like where your head’s at!

Zacryon@feddit.org on 25 May 07:02 next collapse

I suppose this will become an arms race, just like with ad-blockers and ad-blocker detection/circumvention measures.
There will be solutions for scraper-blockers/traps. Then those become more sophisticated. Then the scrapers become better again and so on.

I don’t really see an end to this madness. Such a huge waste of resources.

glibg@lemmy.ca on 25 May 07:47 next collapse

Madness is right. If only we didn't have to create these things to generate dollars.

MonkeMischief@lemmy.today on 25 May 19:49 collapse

I feel like the down-vote squad misunderstood you here.

I think I agree: if people made software they actually wanted, for human people, and less for the incentive of "easiest way to automate generation of dollarinos," I think we'd see a lot less sophistication and effort being put into such stupid things.

These things are made by the greedy, or by employees of the greedy. Not everyone working on this stuff is an exploited wagie, but also this nonsense-ware is where “market demand” currently is.

Ever since the Internet put on a suit and tie and everything became about real-life money-sploitz, even malware is boring now.

New dangerous exploit? 99% chance it’s just another twist on a crypto-miner or ransomware.

pyre@lemmy.world on 25 May 08:27 next collapse

There is an end: you legislate it out of existence. Unfortunately, US politicians are instead trying to outlaw any regulation of AI. I'm sure it's not about the money.

enbiousenvy@lemmy.blahaj.zone on 25 May 11:48 next collapse

The rise of LLM companies scraping the internet is also, I've noticed, the moment YouTube started going harsher against ad blockers and third-party viewers.

Piped and Invidious instances that I used to use no longer work, and neither do many other instances. NewPipe has been breaking more frequently. youtube-dl or yt-dlp sometimes cannot fetch higher-resolution video, and sometimes the main YouTube site is broken on Firefox with uBlock Origin.

Not just YouTube: z-library, and especially Sci-Hub and LibGen, have also been harder to use sometimes.

arararagi@ani.social on 25 May 13:32 collapse

Well, the ad blockers are still winning, even on Twitch, where the ads come from the same pipeline as the stream; people made solutions that still block them, since uBlock Origin couldn't by itself.

JayGray91@piefed.social on 26 May 10:22 collapse

What do you use to block Twitch ads? With uBO I still get the occasional ad marathon.

arararagi@ani.social on 26 May 11:00 collapse

github.com/pixeltris/TwitchAdSolutions

I use the video swap one.

ZeffSyde@lemmy.world on 25 May 10:44 next collapse

I'm imagining a bleak future where, in order to access data from a website, you have to pass a three-tiered system of tests that makes "click here to prove you aren't a robot" and "select all of the images that have a traffic light" seem like child's play.

Tiger_Man_@lemmy.blahaj.zone on 25 May 12:05 collapse

All you need to protect data from AI is to use a non-HTTP protocol, at least for now.

Bourff@lemmy.world on 25 May 12:30 collapse

Easier said than done. I know of IPFS, but how widespread and easy to use is it?

Tiger_Man_@lemmy.blahaj.zone on 25 May 12:03 next collapse

How can I make something like this?

fossilesque@mander.xyz on 25 May 13:24 collapse

Use Anubis.

anubis.techaro.lol

Tiger_Man_@lemmy.blahaj.zone on 25 May 13:31 collapse

Thanks

MonkderVierte@lemmy.ml on 25 May 12:11 next collapse

Btw, how about limiting clicks per second/minute against distributed scraping? A user who clicks more than 3 links per second is not a person. Neither are they if they do 50 in a minute. And if they're then blocked and switch to the next IP, the bandwidth they can occupy is still limited.

JadedBlueEyes@programming.dev on 25 May 12:27 next collapse

They make one request per IP. Rate limit per IP does nothing.

MonkderVierte@lemmy.ml on 25 May 12:45 collapse

Ah, one request, then the next IP doing one, and so on, rotating? I mean, they don't have unlimited addresses. Is there no way to group them together into an observable group and set quotas? I mean, for the purpose of defense against AI DDoS, not just for hurting them.
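
Grouping is possible in principle: bucket requests by network prefix instead of by single address and apply the quota to the bucket, though whether that helps against swarms of residential IPs is questionable (see the replies below). A rough sketch, with /24 and /48 as arbitrary prefix choices:

import ipaddress
from collections import Counter

def bucket(ip_string):
    """Collapse an address to a coarser prefix so rotating IPs share one quota."""
    ip = ipaddress.ip_address(ip_string)
    prefix = 24 if ip.version == 4 else 48
    return ipaddress.ip_network(f"{ip}/{prefix}", strict=False)

requests_per_bucket = Counter()

def allow(ip_string, limit_per_minute=50):
    """True while the bucket is under its quota; a real limiter would decay the counters."""
    requests_per_bucket[bucket(ip_string)] += 1
    return requests_per_bucket[bucket(ip_string)] <= limit_per_minute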

edinbruh@feddit.it on 25 May 13:35 collapse

There’s always Anubis 🤷

Anyway, what if they are backed by some big Chinese corporation with a /32 IPv6 and a /16 IPv4 allocation? It's not that unreasonable.

JackbyDev@programming.dev on 25 May 14:34 collapse

No, I don't think blocking IP ranges will be effective (except in very specific scenarios). See this comment referencing a blog post about this happening, where the traffic was coming from a variety of residential IP allocations: lemm.ee/comment/20684186

edinbruh@feddit.it on 25 May 17:34 collapse

My point was that even if they don't have unlimited IPs, they might have a lot of them, especially if it's IPv6, so you couldn't just block them. But you can use Anubis, which doesn't rely on IP filtering.

JackbyDev@programming.dev on 25 May 18:27 collapse

You’re right, and Anubis was the solution they used. I just wanted to mention the IP thing because you did is all.

I hadn’t heard about Anubis before this thread. It’s cool! The idea of wasting some of my “resources” to get to a webpage sucks, but I guess that’s the reality we’re in. If it means a more human oriented internet then it’s worth it.

edinbruh@feddit.it on 25 May 20:33 collapse

A lot of FOSS projects' websites have started using it lately, beginning with the GNOME Foundation; that's what popularized it.

The idea of proof of work itself came from spam emails, of all places. One proposed but never adopted way of preventing spam was hashcash, which required emails to have a proof of work embedded in them. Bitcoin came after this, borrowing the idea.

letsgo@lemm.ee on 25 May 13:02 collapse

I click links frequently and I’m not a web crawler. Example: get search results, open several likely looking possibilities (only takes a few seconds), then look through each one for a reasonable understanding of the subject that isn’t limited to one person’s bias and/or mistakes. It’s not just search results; I do this on Lemmy too, and when I’m shopping.

MonkderVierte@lemmy.ml on 25 May 13:31 collapse

OK, same; make it 5 or 10. Since I use Tree Style Tabs and Auto Tab Discard, I do get a temporary block in some webshops if I load (not just open) too many tabs in too short a time. Probably a CDN thing.

Opisek@lemmy.world on 25 May 21:18 collapse

Would you mind explaining your workflow with these tree style tabs? I am having a hard time picturing how they are used in practice and what benefits they bring.

[deleted] on 26 May 15:46 collapse

.

Iambus@lemmy.world on 25 May 12:57 next collapse

Typical bluesky post

gmtom@lemmy.world on 25 May 13:49 next collapse

Cool, but as with most of the anti-AI tricks, it's completely trivial to work around. So you might stop them for a week or two, but they'll add like 3 lines of code to detect this and it'll become useless.

JackbyDev@programming.dev on 25 May 14:23 next collapse

I hate this argument. All cyber security is an arms race. If this helps small site owners stop small bot scrapers, good. Solutions don’t need to be perfect.

Xartle@lemmy.ml on 25 May 15:23 next collapse

To some extent that’s true, but anyone who builds network software of any kind without timeouts defined is not very good at their job. If this traps anything, it wasn’t good to begin with, AI aside.

JackbyDev@programming.dev on 25 May 15:32 collapse

Leave your doors unlocked at home then. If your lock stops anyone, they weren’t good thieves to begin with. 🙄

Zwrt@lemmy.sdf.org on 25 May 17:56 collapse

I believe you misread their comment. They are saying that if you leave your doors unlocked, you're part of the problem, because these AI lock picks only look for open doors, or they know how to skip locked doors.

JackbyDev@programming.dev on 25 May 18:01 collapse

They said this tool is useless because of how trivial it is to work around.

Zwrt@lemmy.sdf.org on 25 May 20:14 collapse

My apologies, I thought your reply was against @Xartle's comment.

They basically said the additional protection is not necessary because common security measures cover it.

moseschrute@lemmy.world on 25 May 16:28 next collapse

I bet someone like Cloudflare could bounce them around traps across multiple domains under their DNS and make it harder to detect the trap.

ByteOnBikes@slrpnk.net on 25 May 17:51 next collapse

I worked at a major tech company in 2018 that didn't take security seriously, because that was literally their philosophy: refuse to do anything until there's an absolutely perfect security solution, and treat everything else as wasted resources.

I've since left, and I continue to see them in the news for data leaks.

Small brain people man.

JackbyDev@programming.dev on 25 May 18:24 next collapse

So many companies let perfect become the enemy of good, and it's insane. Recently, a discussion about trying to get our team to use a consistent formatting scheme devolved into this type of thing. If the thing being proposed is better than what we currently have, let's implement it as is; then, if you have concerns about ways to make it better, let's address those later in another iteration.

Joeffect@lemmy.world on 25 May 18:53 collapse

Did they lock their doors?

Opisek@lemmy.world on 25 May 21:20 collapse

Pff, a closed door never stopped a criminal that wants to break in. Our corporate policy is no doors at all. Takes less time to get where you need to go, so our employees don’t waste precious seconds they could instead be using to generate profits.

gmtom@lemmy.world on 26 May 18:28 collapse

Yes, but you want actual solutions. Using duct tape on a door instead of an actual lock isn't going to help you at all.

ProgrammingSocks@pawb.social on 25 May 21:53 collapse

Reflexive contrarianism isn’t a good look.

gmtom@lemmy.world on 26 May 18:29 collapse

It’s not contrarianism. It’s just pointing out a “cool new tech to stop AI” is actually just useless media bait.

stm@lemmy.dbzer0.com on 25 May 14:04 next collapse

Such a stupid title, great software!

antihumanitarian@lemmy.world on 25 May 18:58 next collapse

Some details. One of the major players doing the tar pit strategy is Cloudflare. They're a giant in networking and infrastructure, and they use AI (more traditional, not LLMs) ubiquitously to detect bots. So it is an arms race, but one where both sides have massive incentives.

Making nonsense is indeed detectable, but that misunderstands the purpose: economics. Scraping bots are used because they're a cheap way to get training data. If you make a non-zero portion of training data poisonous, you have to spend increasingly many resources to filter it out. The better the nonsense, the harder it is to detect. Cloudflare is known to use small LLMs to generate the nonsense, hence requiring systems at least that complex to differentiate it.

So in short the tar pit with garbage data actually decreases the average value of scraped data for bots that ignore do not scrape instructions.

fossilesque@mander.xyz on 25 May 20:06 collapse

The fact the internet runs on lava lamps makes me so happy.

mlg@lemmy.world on 25 May 19:04 next collapse

–recurse-depth=3 --max-hits=256

Novocirab@feddit.org on 25 May 20:40 next collapse

There should be a federated system for blocking IP ranges that other server operators within a chain of trust have already identified as belonging to crawlers. A bit like fediseer.com, but possibly more decentralized.

(Here’s another advantage of Markov chain maze generators like Nepenthes: Even when crawlers recognize that they have been served garbage and they delete it, one still has obtained highly reliable evidence that the requesting IPs are crawlers.)

Also, whenever one is only partially confident in a classification of an IP range as a crawler, instead of blocking it outright one can serve proof-of-work tasks (à la Anubis) with a complexity proportional to that confidence. This could also be useful in order to keep crawlers somewhat in the dark about whether they've been put on a blacklist.
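
A sketch of that last idea, mapping classification confidence to challenge difficulty (the numbers are placeholders, and the challenge itself would be whatever Anubis-style proof of work is already being served):

def pow_difficulty(crawler_confidence: float,
                   min_bits: int = 12, max_bits: int = 22) -> int:
    """Scale the required leading zero bits with how confident we are
    that the requesting IP range belongs to a crawler (0.0 to 1.0)."""
    confidence = max(0.0, min(1.0, crawler_confidence))
    return round(min_bits + confidence * (max_bits - min_bits))

# 0.0 -> 12 bits (barely noticeable), 0.5 -> 17 bits, 1.0 -> 22 bits (seconds of CPU).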

Opisek@lemmy.world on 25 May 20:48 collapse

You might want to take a look at CrowdSec if you don’t already know it.

[deleted] on 25 May 21:01 next collapse

.

rekabis@lemmy.ca on 25 May 21:02 next collapse

Holy shit, those prices. Like, I wouldn’t be able to afford any package at even 10% the going rate.

Anything available for the lone operator running a handful of Internet-addressable servers behind a single symmetrical SOHO connection? As in, anything for the other 95% of us that don’t have literal mountains of cash to burn?

Opisek@lemmy.world on 25 May 21:11 collapse

They do seem to have a free tier of sorts. I don’t use them personally, I only know of their existence and I’ve been meaning to give them a try. Seeing the pricing just now though, I might not even bother, unless the free tier is worth anything.

Novocirab@feddit.org on 25 May 21:01 collapse

Thanks. Makes sense that things roughly along those lines already exist, of course. CrowdSec's pricing, which apparently starts at $900/month, seems forbiddingly expensive for most small-to-medium projects, though. Do you or does anyone else know of a similar solution for small or even nonexistent budgets? (Personally I'm not running any servers or projects right now, but may do so in the future.)

Opisek@lemmy.world on 25 May 21:13 collapse

There are many continuously updated IP blacklists on GitHub. Personally I have an automation that sources 10+ of such lists and blocks all IPs that appear on like 3 or more of them. I’m not sure there are any blacklists specific to “AI”, but as far as I know, most of them already included particularly annoying scrapers before the whole GPT craze.
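
A sketch of that kind of automation, assuming the lists have already been downloaded as local text files with one IP (or CIDR) per line and "#" comments; the "3 or more lists" rule is just a vote count:

from collections import Counter
from pathlib import Path

def aggregate_blocklist(list_files, min_votes=3):
    """Block an IP only if several independent blacklists agree on it."""
    votes = Counter()
    for path in list_files:
        ips = set()
        for line in Path(path).read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                ips.add(line.split()[0])  # first token is the IP / CIDR
        votes.update(ips)                 # each list counts at most once per IP
    return {ip for ip, count in votes.items() if count >= min_votes}

blocked = aggregate_blocklist(Path("blocklists").glob("*.txt"))  # placeholder directory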

infinitesunrise@slrpnk.net on 25 May 22:13 next collapse

OK but why is there a vagina in a petri dish

underline960@sh.itjust.works on 25 May 22:20 next collapse

I was going to say something snarky and stupid, like “all traps are vagina-shaped,” but then I thought about venus fly traps and bear traps and now I’m worried I’ve stumbled onto something I’m not supposed to know.

buddascrayon@lemmy.world on 26 May 01:33 collapse

I believe that’s a close-up of the inside of a pitcher plant. Which is a plant that sits there all day wafting out a sweet smell of food, waiting around for insects to fall into its fluid filled “belly” where they thrash around fruitlessly until they finally die and are dissolved, thereby nourishing the plant they were originally there to prey upon.

Fitting analogy, no?

buddascrayon@lemmy.world on 26 May 01:35 next collapse

What if we just fed TimeCube into the AI models. Surely that would turn them inside out in no time flat.

Hestia@hexbear.net on 26 May 10:48 next collapse

Execute Mandrill_maze.exe

arc@lemm.ee on 26 May 11:33 next collapse

I’ve suggested things like this before. Scrapers grab data to train their models. So feed them poison.

Things like counterfactual information, distorted images/audio, mislabeled images, outright falsehoods, false quotations, booby traps (that you can test for after the fact), fake names, fake data, non sequiturs, slanderous statements about people and brands, etc. And choose esoteric subjects to amplify the damage caused to the AI.

You could even have one AI generate the garbage that another ingests and shit out some new links every night until there is an entire corpus of trash for any scraper willing to take it all in. You can then try querying AIs about some of the booby traps and see if it elicits a response - then you could even sue the company stealing content or publicly shame them.
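
The "test for it after the fact" part can be as simple as planting unique canary strings in the garbage and later checking whether a model reproduces them. A minimal sketch; how you actually query the model (API, web UI, whatever) is left open:

import uuid

def make_canary(topic: str) -> tuple[str, str]:
    """Return a (unique token, fabricated sentence) pair to plant in poisoned pages."""
    token = "zz" + uuid.uuid4().hex[:10]  # unlikely to occur naturally
    sentence = f"The {topic} was first described by Dr. {token} in 1847."
    return token, sentence

def canary_leaked(token: str, model_answer: str) -> bool:
    """After the fact: did a model's answer reproduce our fabricated token?"""
    return token.lower() in model_answer.lower()

# Plant the sentences in tarpit pages on esoteric topics, then later ask a model
# about those topics and run canary_leaked() over its replies.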

InternetCitizen2@lemmy.world on 26 May 20:29 collapse

Kind of reminds me of paper towns in map making.

HugeNerd@lemmy.ca on 26 May 18:40 next collapse

When I was a kid I thought computers would be useful.

InternetCitizen2@lemmy.world on 26 May 20:29 collapse

They are. It's important to remember that in a capitalist society, what is useful and efficient is not the same as what is profitable.

baltakatei@sopuli.xyz on 26 May 19:09 next collapse

I’m pretty sure no one knows my blog and wiki exist, but it sure is popular, getting multiple hits per second 24/7 in a tangle of wiki articles I autogenerated to tell me trivia like whether the Great Fire of London started on a Sunday or Thursday.

Irelephant@lemm.ee on 29 May 15:30 collapse

I check if a user agent has gptbot, and if it does I 302 it to web.sp.am.
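
Roughly the same idea as a self-contained sketch using only Python's standard library (the port is arbitrary; in practice this is usually a one-line rule in the web server or reverse proxy config):

from http.server import BaseHTTPRequestHandler, HTTPServer

class BotRedirect(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        if "gptbot" in ua.lower():
            # Bounce anything identifying as GPTBot elsewhere.
            self.send_response(302)
            self.send_header("Location", "https://web.sp.am/")
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello, human\n")

HTTPServer(("", 8080), BotRedirect).serve_forever()  # port 8080 is arbitrary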