Do you use anything to archive content for yourself or others? (research, videos, articles, and anything that could be lost to time or censorship)
from otter@lemmy.ca to selfhosted@lemmy.world on 08 Nov 01:47
https://lemmy.ca/post/32540424

I saw this post and I was curious what was out there.

neuromatch.social/@jonny/113444325077647843

I'd like to put my lab servers to work archiving US federal data that's likely to get pulled - climate and biomed data seem most likely. The most obvious strategy to me seems like setting up mirror torrents on academictorrents. Anyone compiling a list of at-risk data yet?
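
The mirroring itself could stay pretty simple. A rough sketch using the torf Python library; the dataset path, comment, and the Academic Torrents announce URL are placeholders to check against their upload instructions:

# Rough sketch: build a .torrent for a local mirror of an at-risk dataset.
# Path, comment, and tracker URL are placeholders, not verified values.
from torf import Torrent

torrent = Torrent(
    path="noaa-climate-data/",  # local copy of the dataset to mirror
    trackers=["https://academictorrents.com/announce.php"],
    comment="Mirror of NOAA climate data (placeholder description)",
)
torrent.generate()  # hash the pieces; can take a while for large datasets
torrent.write("noaa-climate-data.torrent")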

#selfhosted


otter@lemmy.ca on 08 Nov 01:48 next collapse

One option that I’ve heard of in the past

archivebox.io

ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view websites offline.

ptz@dubvee.org on 08 Nov 01:52 next collapse

Going to check that out because…yeah. Just gotta figure out what and where to archive.

M600@lemmy.world on 08 Nov 02:47 next collapse

This seems pretty cool. I might actually host this.

CrazyLikeGollum@lemmy.world on 08 Nov 03:48 next collapse

That looks useful, I might host that. Does anyone have an RSS feed of at-risk data?

Boomkop3@reddthat.com on 08 Nov 06:41 next collapse

Eyy, I want that!

tomtomtom@lemmy.world on 08 Nov 12:36 collapse

I am using ArchiveBox; it is pretty straightforward to self-host and use.

However, it is very difficult to archive most news sites with it, and many other sites as well. Cookie-consent and similar pop-ups will render the archived page unusable, and often archiving won't work at all because some bot protection (Cloudflare etc.) kicks in when ArchiveBox tries to access the site.

If anyone else has more success using it, please let me know if I am doing something wrong…

danielquinn@lemmy.ca on 08 Nov 19:54 collapse

Monolith has the same problem here. I think the best resolution might be some sort of browser-plugin-based solution where you could say “archive this” and have it push the result somewhere.

I wonder if I could combine a dumb plugin with Monolith to do that… A weekend project perhaps.
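
Something like this might be all the glue that's needed on the receiving end: a tiny local endpoint the plugin could POST a URL to, which then shells out to Monolith. The port, directory, and naming scheme are all made up for illustration, with no error handling:

# Sketch of the "dumb plugin + Monolith" idea: a minimal local endpoint
# a browser extension could POST a URL to. Port/paths are placeholders.
import subprocess
from pathlib import Path
from urllib.parse import urlparse, parse_qs
from http.server import BaseHTTPRequestHandler, HTTPServer

ARCHIVE_DIR = Path.home() / "archive"

class ArchiveHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # expects POST /archive?url=https://example.com/article
        query = parse_qs(urlparse(self.path).query)
        url = query.get("url", [None])[0]
        if not url or urlparse(url).scheme not in ("http", "https"):
            self.send_error(400, "missing or invalid url")
            return
        ARCHIVE_DIR.mkdir(exist_ok=True)
        # naive naming: pages from the same site overwrite each other
        out = ARCHIVE_DIR / (urlparse(url).netloc + ".html")
        # monolith bundles the page and its assets into one HTML file
        subprocess.run(["monolith", url, "-o", str(out)], check=True)
        self.send_response(200)
        self.end_headers()

HTTPServer(("127.0.0.1", 8123), ArchiveHandler).serve_forever()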

catloaf@lemm.ee on 08 Nov 01:55 next collapse

I don’t self-host it, I just use archive.org. That makes it available to others too.

just_another_person@lemmy.world on 08 Nov 02:05 next collapse

Yes. This isn’t something you want your own machines to be doing if something else is already doing it.

jcg@halubilo.social on 08 Nov 02:17 next collapse

But then who backs up the backups?

abff08f4813c@j4vcdedmiokf56h3ho4t62mlku.srv.us on 08 Nov 02:21 next collapse

I guess they back each other up. For example, archive.is can take archives from archive.org, and the saved page reflects the original URL and the original archiving time from the Wayback Machine (though it also notes the Wayback URL it was pulled from and the time it was archived from Wayback).

just_another_person@lemmy.world on 08 Nov 02:30 collapse

Realize how much they are supporting and storing.

Come back to the comments after.

Deebster@infosec.pub on 08 Nov 02:47 collapse

Your argument is that a single backup is sufficient? I disagree, and I think most in the selfhosted and datahoarder communities would too.

otter@lemmy.ca on 08 Nov 02:20 next collapse

There was the attack on the Internet Archive recently; are there any good options out there to help mirror some of the data or otherwise provide redundancy?

Zachariah@lemmy.world on 08 Nov 02:44 collapse

It’s a single point of failure though.

catloaf@lemm.ee on 08 Nov 11:47 collapse

In that they’re a single organization, yes, but I’m a single person with significantly fewer resources. Non-availability is a significantly higher risk for things I host personally.

fossilesque@mander.xyz on 08 Nov 02:23 next collapse

NOAA is at risk I think.

PunnyName@lemmy.world on 08 Nov 03:35 collapse

Everything is at risk.

mesamunefire@lemmy.world on 08 Nov 02:44 next collapse

Flash drives and periodic transfers.

chemicalwonka@discuss.tchncs.de on 08 Nov 02:51 next collapse

I use M-Discs for long-term archival.

Boomkop3@reddthat.com on 08 Nov 06:42 collapse

I heard news recently that some companies have started shipping non-M-Disc media labelled as M-Discs. You may want to have a look.

yasser_kaddoura@lemmy.world on 08 Nov 08:31 next collapse

I have a script that archives to:

I used to solely depend on archive.org, but after the recent attacks, I expanded my options.

Script: gist.github.com/…/9a02bc50e75e7239f6f0c8f04fe4cfb…

EDIT: Added script. Note that the script doesn’t include archiving to ArchiveBox, since its API isn’t available in a stable version yet. You can add a function depending on your setup. Personally, I am running Caddy and Docker, so I am using a Caddy module [1] to execute commands, with this in my Caddyfile:

route /add {
	# match requests that carry a ?url=... query parameter
	@params query url=*
	# run `archivebox add <url>` inside the archivebox container, with no timeout
	exec docker exec --user=archivebox archivebox archivebox add {http.request.uri.query.url} {
		timeout 0
	}
}

[1] github.com/abiosoft/caddy-exec

opulentocean@lemm.ee on 08 Nov 08:47 next collapse

Would you be willing to share it?

yasser_kaddoura@lemmy.world on 08 Nov 09:32 collapse

Sure.

Appoxo@lemmy.dbzer0.com on 08 Nov 18:15 next collapse

I hope you are also donating to those projects, since you’re uploading multiple copies to their different services.

WhyJiffie@sh.itjust.works on 09 Nov 01:57 collapse

isn’t this prone to a

 || rm -rf /

or something similar at the end of the URL?

if you can docker exec, you have a lot of privileges already, so make sure this is not a danger

yasser_kaddoura@lemmy.world on 09 Nov 07:50 collapse

Thank you for the warning. You are correct: it’s prone to command injection. I will validate the URL before executing it. That should suffice until ArchiveBox’s REST API is available in stable.
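
For what it’s worth, one way to do that is to move the call into a small wrapper that validates the URL and passes it as a single exec argument instead of interpolating it into a shell string. A minimal sketch; the container name and user mirror the Caddyfile above, everything else is assumed:

# Sketch: validate the URL, then hand it to `archivebox add` safely.
import subprocess
import sys
from urllib.parse import urlparse

def add_to_archivebox(url: str) -> None:
    parsed = urlparse(url)
    # only accept plain http(s) URLs with a hostname; reject anything else
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"refusing to archive suspicious URL: {url!r}")
    # passing the command as a list (no shell) keeps the URL a single
    # argument, so `|| rm -rf /`-style payloads are never interpreted
    subprocess.run(
        ["docker", "exec", "--user=archivebox", "archivebox",
         "archivebox", "add", url],
        check=True,
    )

if __name__ == "__main__":
    add_to_archivebox(sys.argv[1])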

Krafting@lemmy.world on 08 Nov 10:52 next collapse

I archive YouTube videos that I like with TubeArchivist. I have a playlist for random videos I’d like to keep, and I also subscribe to some of my favourite creators so I can keep their videos, even when I’m offline.

vividspecter@lemm.ee on 08 Nov 11:43 collapse

I’ll add pinchflat as an alternative with the same aim.

Krafting@lemmy.world on 08 Nov 12:22 collapse

Seems nice, but you need an external player to watch the content, which can be good for some people; I like the web UI of TubeArchivist (even though it can definitely be improved).

vividspecter@lemm.ee on 09 Nov 01:45 collapse

You can actually play from the UI too, but it’s not particularly nice to use (or intended to be used that way).

jaxiiruff@lemmy.zip on 08 Nov 15:59 next collapse

Linkding/Linkwarden

scientific_railroads@lemmy.world on 08 Nov 16:33 next collapse

For myself: Wayback. It saves links to multiple different web archives and gives me PDF and WARC files.

For others: Archive Team has a few active projects to save at-risk data, and there is an IRC channel in which people can suggest other websites for saving. They also have a wiki explaining how people can help.

vegetaaaaaaa@lemmy.world on 08 Nov 19:05 next collapse

shaarli bookmarks + hecat (shaarli_api importer + download_media/archive_webpages processors + html_table exporter for the HTML index)

danielquinn@lemmy.ca on 08 Nov 19:46 collapse

Monolith can be particularly handy for this. I used it in a recent project to archive the outgoing links from my own site. Incidentally, if anyone is interested in that project, it’s called django-cool-urls.
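
For anyone curious, here’s a rough, stripped-down sketch of the idea (not the actual django-cool-urls code): collect the outgoing links from one page and snapshot each with the monolith CLI. The URL and file naming are placeholders:

# Sketch: snapshot the outgoing links of one page with monolith.
# Stdlib only; naming and error handling are simplified for illustration.
import subprocess
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen

PAGE = "https://example.com/blog/some-post/"  # placeholder URL

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = set()
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.add(urljoin(PAGE, href))

parser = LinkCollector()
parser.feed(urlopen(PAGE).read().decode("utf-8", errors="replace"))

for link in sorted(parser.links):
    host = urlparse(link).netloc
    if not host or host == urlparse(PAGE).netloc:
        continue  # keep only outgoing links
    # monolith inlines assets so each page becomes one self-contained file
    subprocess.run(["monolith", link, "-o", f"{host}.html"], check=True)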