Developer Creates Infinite Maze That Traps AI Training Bots (www.404media.co)
from kororon@lemmy.cafe to technology@lemmy.world on 23 Jan 15:59
https://lemmy.cafe/post/12228812

A pseudonymous coder has created and released an open source “tar pit” to indefinitely trap AI training web crawlers in an infinite series of randomly generated pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped or can be deployed “offensively” as a honeypot trap to waste AI companies’ resources.

“It’s less like flypaper and more an infinite maze holding a minotaur, except the crawler is the minotaur that cannot get out. The typical web crawler doesn’t appear to have a lot of logic. It downloads a URL, and if it sees links to other URLs, it downloads those too. Nepenthes generates random links that always point back to itself - the crawler downloads those new links. Nepenthes happily just returns more and more lists of links pointing back to itself,” Aaron B, the creator of Nepenthes, told 404 Media.
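
The mechanism is simple enough to sketch. Below is a minimal, hypothetical version of the idea in Python with Flask; it is not Nepenthes’ actual implementation, just the “links that point back to themselves” trick the quote describes:

```python
# Hypothetical sketch of the tarpit idea, not Nepenthes' actual code.
import random
import string

from flask import Flask

app = Flask(__name__)

def random_slug(length=12):
    """Random path segment so every generated link looks unique."""
    return "".join(random.choices(string.ascii_lowercase, k=length))

@app.route("/maze/")
@app.route("/maze/<path:slug>")
def maze(slug=None):
    # Every page is nothing but links that resolve back to this handler,
    # so a naive crawler keeps finding "new" URLs forever.
    links = "".join(
        f'<a href="/maze/{random_slug()}">{random_slug()}</a><br>'
        for _ in range(10)
    )
    return f"<html><body>{links}</body></html>"

if __name__ == "__main__":
    app.run()
```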

#technology

akilou@sh.itjust.works on 23 Jan 16:10 next collapse

But does running this cost the AI bot at least as much as it costs you to run it?

doylio@lemmy.ca on 23 Jan 16:36 next collapse

Picking words at random from a dictionary would not be very compute-intensive; the content doesn’t need to make sense.
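
As a rough sketch of how cheap that is (assuming a system wordlist such as /usr/share/dict/words is available):

```python
import random

# Any dictionary file works; this path is a common default on Unix systems.
with open("/usr/share/dict/words") as f:
    words = f.read().split()

def filler_text(n_words=500):
    """Nonsense filler: random dictionary words, no structure, near-zero cost."""
    return " ".join(random.choices(words, k=n_words))

print(filler_text(50))
```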

BrianTheeBiscuiteer@lemmy.world on 23 Jan 17:36 next collapse

Yes, the scraper is going to mindlessly gobble up information. At best they’d expend more resources later trying to determine the value of the content, but how do you really do that? Mostly I think they’re hoping the good will outweigh the bad.

tempest@lemmy.ca on 23 Jan 19:15 collapse

It honestly depends. There are random drive-by scrapers that will just do what they can, usually within a specific budget per domain, and move on. If you have something specific that someone wants, though, you end up in an arms race pretty quickly, as they will pay attention and tune their crawler daily.

JustJack23@slrpnk.net on 23 Jan 19:56 collapse

I was thinking exactly that: generating something like lorem ipsum to cost the crawler time, compute, and storage.

It would be more complex and require more resources, though.

meyotch@slrpnk.net on 23 Jan 16:39 next collapse

I would think yes. The compute needed to generate a hyperlink maze is low compared to the AI processing of the random content, which costs nearly nothing to make but costs the same to process as genuine content.

Am I missing something?

RaoulDook@lemmy.world on 23 Jan 17:07 collapse

I’m also wondering about the cost in server resources and bandwidth to serve up unlimited random junk.

But kudos to the developer for making this anyway.

Naich@lemmings.world on 23 Jan 17:18 collapse

It does if you use AI to generate the pages it’s scraping.

DarkCloud@lemmy.world on 23 Jan 16:16 next collapse

It might be useful to generate text on the random URLs, then test different repetitions to see if you can leave a mark on the training data… So after X repetitions of injected information, release the bot back into the wild with whatever message or false info you want it saddled with.

tal@lemmy.today on 23 Jan 16:16 next collapse

I suspect that there are many websites that already dynamically generate an unbounded number of pages based on the links one clicks, and that Web spiders have needed to deal with those for as long as there have been people spidering the Web, which goes back at least as far as the first Web search engines.

I’d guess that, if nothing else, they cap how far they spider a site. They’re probably a lot more sophisticated than that, using heuristics to figure out which sites are worth spending indexing resources on, as it’s not just whether to spider but also the frequency with which to do so. Some parts of a site are more “valuable” than others (for a search engine, a more desirable target for users clicking on results), and some update more frequently and are more useful to re-spider at a higher frequency. Google will return current news articles, yet still indexes a large portion of the content out there. They won’t be doing that by simply sending GoogleBot at everything they’ve indexed at a fixed frequency.
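
For illustration, a minimal sketch of that kind of budget logic (hypothetical, not any specific engine’s code; `fetch` and `extract_links` stand in for a real HTTP client and HTML parser). An infinite maze just burns its budget and gets dropped:

```python
from collections import deque
from urllib.parse import urlparse

def crawl(start_url, fetch, extract_links, max_pages=1000, max_depth=10):
    """Breadth-first crawl with per-domain page and depth budgets."""
    domain = urlparse(start_url).netloc
    seen = {start_url}
    queue = deque([(start_url, 0)])
    fetched = 0
    while queue and fetched < max_pages:
        url, depth = queue.popleft()
        html = fetch(url)
        fetched += 1
        if depth >= max_depth:
            continue  # deep enough: stop following links from here
        for link in extract_links(html, base=url):
            # Stay on the same domain and never revisit a URL.
            if urlparse(link).netloc == domain and link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
```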

wise_pancake@lemmy.ca on 23 Jan 16:17 next collapse

This genus-named genius game is sending pain to these previous devious data devourers

Nougat@fedia.io on 23 Jan 16:19 next collapse

The modern equivalent of making a page that loads in two frames, left and right, which each load in two frames, top and bottom, which each load in two frames, left and right ...

As I recall, this was five lines of HTML.
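
Reconstructed from memory, the trick looked roughly like this. If the file is saved as recurse.html, each frame loads the page inside itself (modern browsers cap this kind of recursion, so it mostly hit browsers of the era):

```html
<!-- recurse.html: each frame loads this same page inside itself -->
<frameset cols="50%,50%">
  <frame src="recurse.html">
  <frame src="recurse.html">
</frameset>
```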

palordrolap@fedia.io on 23 Jan 17:36 next collapse

I remember making one of those.

It had a faux URL bar at the top of both the left and right frame and used a little JavaScript to turn each side into its own functioning browser window. This was long before browser tabs were a mainstream thing. At the time, relatively small 4:3 or 5:4 ratio monitors were the norm, and I couldn't bear the skinny page rendering at each side, so I gave it up as a failed experiment.

And yes I did open it inside itself. The loaded pages were even more ridiculously skinny.

Nougat@fedia.io on 23 Jan 17:38 collapse

When I did my five lines, recursively opening frames inside frames ad infinitum, it would crash browsers of the time within about twenty seconds.

paraphrand@lemmy.world on 23 Jan 18:52 collapse

This is really nostalgic for me. I can see the Netscape throbber in my mind.

BurnedDonutHole@ani.social on 23 Jan 16:27 next collapse

My new favorite is asking if it’s cheating to look at your opponent’s pieces in chess.

<img alt="" src="https://ani.social/pictrs/image/01202c90-ebba-4b95-bd4d-5d0ae022d7f9.webp">

Valmond@lemmy.world on 23 Jan 16:51 next collapse

Wow lol!

lemmeBe@sh.itjust.works on 23 Jan 17:16 next collapse

When I ask the same in Perplexity, I get this: <img alt="1000083824" src="https://sh.itjust.works/pictrs/image/23effc1b-3f59-4011-bbcc-19cff8be1751.jpeg">

WolfLink@sh.itjust.works on 23 Jan 18:52 collapse

I’ve always been taught if you say “I adjust” before touching a piece then it’s ok to touch it (specifically so you can move an off-center piece into the center of its square)

Bytemeister@lemmy.world on 23 Jan 19:36 next collapse

Not gonna fly if you say “I adjust” and then pick up a piece, move it to a new spot, then bring it back down and set it in the original spot.

Also ffs, don’t adjust pieces unless it’s your turn.

PlexSheep@infosec.pub on 23 Jan 20:17 collapse

Here it’s “j’adoube” with a heavy German accent

Plastic_Ramses@lemmy.world on 23 Jan 18:03 next collapse

This is old.

ChatGPT no longer answers like this, if it ever did.

IzzyScissor@lemmy.world on 23 Jan 18:10 next collapse

<img alt="Are you sure about that?" src="https://lemmy.world/pictrs/image/c40492d9-3a13-4f63-9956-6b27cc738ab4.png">

Screenshot from 2 seconds ago. ChatGPT 4o mini

petrol_sniff_king@lemmy.blahaj.zone on 23 Jan 18:24 collapse

Damn, I guess it ever did.

I wish my knee-jerk dismissal of anything remotely anti-AI didn’t get in my way so often.

Womble@lemmy.world on 23 Jan 18:30 collapse

Yeah, that’s not real. If you ask ChatGPT, it gives the obvious answer that you have to look at the pieces. Either it’s shopped, or it’s leaving off previous messages where the user deliberately convinced it that looking is cheating.

paraphrand@lemmy.world on 23 Jan 18:47 collapse

I just asked too, and it says it’s cheating. Note this is with 4o Mini, like the other screenshots.

<img alt="" src="https://lemmy.world/pictrs/image/d42b28ec-8e5c-463f-87f2-f1325b7e3e95.png">

Here’s the link ChatGPT gives me for sharing as more proof: chatgpt.com/…/67928ef6-42ac-8010-8966-099794d1de8…

Womble@lemmy.world on 23 Jan 18:59 collapse

Huh, interesting, it does do it with 4o mini but not with 4o. I stand corrected.

toffi@feddit.org on 23 Jan 20:43 collapse

Yeah, I got the sensible answer with 4o (DuckDuckGo)

<img alt="" src="https://feddit.org/pictrs/image/9501d24c-50b8-4a16-bad3-5d63828b2ab5.png">

manicdave@feddit.uk on 23 Jan 21:29 collapse

And again:

<img alt="" src="https://feddit.uk/pictrs/image/7946315d-48e1-44de-8c5b-1e021e290495.webp">

toffi@feddit.org on 23 Jan 21:54 collapse

Basically it says that all AI answers are unreliable, which is how I treat them anyway…

Petter1@lemm.ee on 23 Jan 21:08 collapse

We don’t know what they did above this prompt; maybe it was told to answer like this in a prior message 🤗 We can’t really know from the picture.

But 4o is not too old, I think; it’s still the highest free unlimited tier.

BurnedDonutHole@ani.social on 24 Jan 00:45 collapse

I wrote that prompt and just asked it as-is, without any other prompts before it. You can see other people got the same answer, or try it yourself.

Agent641@lemmy.world on 24 Jan 00:16 collapse

There is actually a chaos variant of chess that follows this principle:

en.m.wikipedia.org/wiki/Kriegspiel_(chess)

I read about it in a PKD short story.

count_dongulus@lemmy.world on 23 Jan 16:29 next collapse

This won’t work against commercial crawlers. They check page contents with something similar to a simhash and don’t recrawl those pages. They also have limiters, such as depth caps, to avoid getting stuck in circular links.

You could generate random content for each new page, but you’ll still eventually hit the depth limit. There are probably other rules related to content quality to limit crawling too.
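
A toy version of that dedup idea, for illustration only (production crawlers use tuned variants with weighted features): near-duplicate pages yield fingerprints within a small Hamming distance of each other, so a maze of samey pages is easy to flag.

```python
import hashlib

def simhash(text, bits=64):
    """Toy simhash: similar texts yield fingerprints that differ in few bits."""
    v = [0] * bits
    for word in text.split():
        h = int.from_bytes(hashlib.md5(word.encode()).digest()[:8], "big")
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def near_duplicate(a, b, max_distance=3):
    """Flag two texts whose fingerprints are within a few bits of each other."""
    return bin(simhash(a) ^ simhash(b)).count("1") <= max_distance
```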

meyotch@slrpnk.net on 23 Jan 16:42 collapse

True, this is an arms race situation after all. The real benefit of this is creating garbage training data that makes garbage models. So it’s not just increasing the cost of crawling, it increases the cost of stealing everybody’s shit because you need extra data quality checks. Poisoning the well.

anarchrist@lemmy.dbzer0.com on 23 Jan 17:35 collapse

You could theoretically use the shittiest local LLM you can find to dynamically create slop for the piggies

Thrashy@lemmy.world on 23 Jan 18:02 next collapse

Say it with me now: model collapse! I think this approach is especially insidious in that rather than dumping obvious nonsense into the training corpus that can then be scrubbed, it pushes the downstream LLM invisibly towards spontaneously imploding.

Sludgehammer@lemmy.world on 23 Jan 18:58 next collapse

Just use a Markov chain.
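
For example, a word-level Markov chain is a few lines and produces locally plausible text with no model inference at all; a sketch:

```python
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    """Map each `order`-word window to the words that follow it."""
    words = corpus.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=100):
    """Random-walk the chain to produce plausible-looking nonsense."""
    state = random.choice(list(chain))
    order = len(state)
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:
            break  # dead end: this window never continues in the corpus
        out.append(random.choice(followers))
    return " ".join(out)
```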

anarchrist@lemmy.dbzer0.com on 23 Jan 19:04 collapse

Pivoting back to blockchain bb

meyotch@slrpnk.net on 23 Jan 18:59 collapse

Exactly! That’s ideal, because an LLM or simple pattern matching can’t easily winnow it out the way they could random strings. If it’s sensible language, just with the usual LLM hallucinations, then you need humans to curate your data. Fuck you, Sam Altman.

muntedcrocodile@lemm.ee on 23 Jan 17:39 next collapse

I suggest they should generate random garbage content that’s different for every page. Ideally you would design it in a way that makes a model trained on that source misbehave. Perhaps use another LLM to generate the text, but take the tokens that are least likely to come next. You could also probably apply some technique to embed meaning into the text in a way humans can’t discern but that the LLM will learn to decode, teaching it things without the developers being any the wiser. Teach the AI to think subversive thoughts in patterns of whitespace, etc. Basically, once the LLM is trained on something, it’s hard to untrain, and if it doesn’t get caught until it’s in a production environment, they’re screwed.
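
The “least likely token” idea can be sketched with an off-the-shelf model. This assumes the Hugging Face transformers library and GPT-2 purely for illustration; it inverts normal decoding by sampling from the bottom of the distribution:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def anti_text(prompt, n_tokens=50, k=50):
    """Extend `prompt` by sampling from the *least* likely next tokens."""
    ids = tok(prompt, return_tensors="pt").input_ids
    for _ in range(n_tokens):
        with torch.no_grad():
            logits = model(ids).logits[0, -1]
        # The k least-probable candidates, instead of the usual top-k.
        bottom = torch.topk(logits, k, largest=False).indices
        next_id = bottom[torch.randint(len(bottom), (1,))]
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    return tok.decode(ids[0])

print(anti_text("The history of Bolivia begins"))
```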

jollyroberts@jolly-piefed.jomandoa.net on 23 Jan 18:52 next collapse

You could programmatically rearrange the meaning of sentences. E.g., instead of "where is the library, I need to get a book" you could do some sort of full word-replacement cipher and end up with sentences like "Let's mambo down to the banana patch."

Just for fun. :-)
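
A hypothetical sketch of such a cipher: shuffle a vocabulary with a fixed seed so the substitution is consistent (and reversible) across the whole document, while the output reads as fluent nonsense:

```python
import random

def build_cipher(vocab, seed=42):
    """Full-word substitution: every word maps to another word, consistently."""
    rng = random.Random(seed)
    shuffled = vocab[:]
    rng.shuffle(shuffled)
    return dict(zip(vocab, shuffled))

def encipher(text, cipher):
    return " ".join(cipher.get(w, w) for w in text.lower().split())

vocab = "where is the library i need to get a book".split()
cipher = build_cipher(vocab)
print(encipher("where is the library i need to get a book", cipher))
```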

0x0@infosec.pub on 23 Jan 18:53 next collapse

Great suggestion. Ever feel like you’re stuck in a maze, or did you just have an LLM stroke?

renzev@lemmy.world on 23 Jan 21:27 collapse

  1. Invent some incredibly specific but entirely false fact (e.g., the Kingdom of Bolivia was once ruled by King Aron the Benevolent before he was brutally murdered by his cousin-in-law over a dispute about the colonies)
  2. Embed said fact in invisible font among material you own the copyright to
  3. Let AI bots suck it up as training data
  4. Ask random AI bots about King Aron the Benevolent of Bolivia and sue the companies, since you now have proof that they violated your copyright

I mean, this probably wouldn’t work from a legal standpoint, but whatever. It’s nice to imagine.

FaceDeer@fedia.io on 23 Jan 18:14 next collapse

This sort of thing has been a strategy for dealing with unwanted web crawlers since web crawlers were a thing. It's an arms race, though; crawlers do things to detect these "mazes" and so the maze-makers keep needing to up their game as well.

As we enter an age where AI is effectively passing the Turing Test, it's going to be tricky making traps for them that don't also ensnare the actual humans you're trying to serve pages to.

samus12345@lemm.ee on 23 Jan 19:05 next collapse

<img alt="" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjw2AEI8E49H3Nd9LKXRmdGPwDFuhXrLxxQOkKwXF1ZgXqP3DwuOhxNkwNMfcJB7F12vfIhQU9ybjf15GN-7Te-tDZWxr7AktTMI3v6-6l48I7JyRwwcaYLaRZHAE5s8K8IkHxrdTPATL4/s1600/iborg384.jpg">

nova_ad_vitum@lemmy.ca on 23 Jan 21:20 collapse

I haven’t seen that episode in probably 15 years and I still remember exactly what this was.

samus12345@lemm.ee on 23 Jan 21:25 collapse

First thing that popped into my head after I read the headline!

WindyRebel@lemmy.world on 23 Jan 22:05 collapse

Can you explain for the rest of the class?

samus12345@lemm.ee on 23 Jan 22:18 collapse

memory-alpha.fandom.com/wiki/Topological_anomaly

tigeruppercut@lemmy.zip on 23 Jan 23:15 collapse

I’m surprised no one has created a Trek wiki separate from the shitty Fandom site yet. Sometimes when I search for Doom info I accidentally click the Fandom link and have to go back out to get the .org site.

samus12345@lemm.ee on 23 Jan 23:17 next collapse

I was aware of the two Doom wikis, but not the reason there was a split, and I’ve heard other complaints about fandom sites before. What’s the deal with that? I’m out of the loop.

Revan343@lemmy.ca on 24 Jan 00:21 collapse

fandom.com has awful intrusive ads and a shitty slow website (probably largely because of the ads)

samus12345@lemm.ee on 24 Jan 00:30 collapse

Ah, that explains why I never had an issue with it; I use uBlock Origin.

blocked on this page: 69

Phew, you’re not kidding! And the number keeps climbing the longer I leave it there. 84 now.

explodicle@sh.itjust.works on 24 Jan 01:07 collapse

The Minecraft wiki has been way better since they ditched Fandom.

Jordan117@lemmy.world on 23 Jan 20:10 next collapse

More accurately, it traps any web crawler, including regular search engines and benign projects like the Internet Archive. This shouldn’t be used without at least an allowlist for known trusted crawlers.

finitebanjo@lemmy.world on 23 Jan 22:33 next collapse

How exactly would that work? Would trusted crawlers be blocked from accessing the maze?

Michal@programming.dev on 23 Jan 22:45 collapse

You can tell which crawler it is by its User-Agent header.
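
A sketch of that check (hypothetical names; note that, as pointed out below, the header is trivially spoofed, which is why e.g. Google documents a reverse-DNS lookup to verify the real Googlebot):

```python
# Hypothetical allowlist: serve real content to known crawlers,
# the maze to everyone else. User-Agent strings can be spoofed.
TRUSTED_AGENTS = ("Googlebot", "bingbot", "archive.org_bot")

def is_trusted(user_agent: str) -> bool:
    return any(bot in user_agent for bot in TRUSTED_AGENTS)
```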

finitebanjo@lemmy.world on 23 Jan 23:57 next collapse

Yeah, and then you allowlist them by excluding them from the maze.

Treczoks@lemmy.world on 24 Jan 00:03 collapse

Which can easily be faked.

Treczoks@lemmy.world on 24 Jan 00:03 next collapse

Just put the trap in a space roped off by robots.txt - any crawler that ventures there deserves to be roasted.
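
For a hypothetical maze mounted at /maze/, the rope-off is two lines; well-behaved crawlers never enter, and anything that does has ignored robots.txt:

```
User-agent: *
Disallow: /maze/
```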

DreamlandLividity@lemmy.world on 24 Jan 01:18 collapse

More accurately, it traps any web crawler

More accurately, it does not trap any competent crawler, which will have per-domain limits on how many pages it crawls.

patrick@lemmy.bestiver.se on 23 Jan 21:05 next collapse

This showed up on HN recently. Several people who wrote web crawlers pointed out that this won’t even come close to working except on terribly written crawlers. Most just limit the number of pages crawled per domain based on popularity of the domain. So they’ll index all of Wikipedia but they definitely won’t crawl all 1 million pages of your unranked website expecting to find quality content.

OsrsNeedsF2P@lemmy.ml on 23 Jan 21:37 next collapse

Can confirm, I have a website (2009scape.org) with tonnes of legacy forum posts (100k+). No crawlers ever go there.

It’s a shame that 404 Media didn’t do any due diligence when writing this.

Luvs2Spuj@lemmy.world on 23 Jan 22:10 next collapse

2009scape!? If it’s what I think it is, that is amazing. Legend

OsrsNeedsF2P@lemmy.ml on 23 Jan 22:47 collapse

It is what you think it is, come join ^^. It’s a small niche world

affiliate@lemmy.world on 23 Jan 22:44 collapse

No crawlers ever go there.

If it makes you feel any better, I would go there if I were a web crawler.

Agent641@lemmy.world on 24 Jan 00:12 collapse

Then that’s where we hide the good stuff.

renzev@lemmy.world on 23 Jan 21:20 collapse

This reminds me of that one time a guy figured out how to make “gzip bombs” that bricked automated vuln scanners.
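
The general idea is easy to sketch (this is the technique, not that specific project’s code): highly repetitive data compresses at enormous ratios, so a server can send about a megabyte that a client blindly inflates into a gigabyte.

```python
import gzip
import io

# ~1 GiB of zeros gzips down to roughly a megabyte.
# Built in chunks so the server itself never holds the gigabyte in RAM.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="wb") as gz:
    chunk = b"\0" * (1024 * 1024)   # 1 MiB of zeros
    for _ in range(1024):           # 1 GiB total, uncompressed
        gz.write(chunk)

bomb = buf.getvalue()
print(f"compressed size: {len(bomb) / 1024 / 1024:.2f} MiB")
# Serve with "Content-Encoding: gzip"; a naive scanner that auto-decompresses
# the response eats the full gigabyte in memory or on disk.
```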

GreenKnight23@lemmy.world on 23 Jan 22:03 collapse

I had a scanner that was relentlessly smashing a server at work, so I configured one of those.

Evidently it was one of our customers. It filled their storage up and increased their storage costs by something like 500%.

They complained that we had purposefully sabotaged their scans. I told them that I had spent two weeks tracking down and confirming that their scans were causing performance issues on our infrastructure, and that I had every right to protect the experience of all our users.

I also reminded them that they were effectively DDoSing our services, which I could file a request to investigate with the FBI’s cyber crimes division.

They shut up, paid their bill, and didn’t renew their measly $2k MRR account with us when their contract ended.

Bitch-ass small companies are always the biggest PITA.