Building a slow web (goodinternetmagazine.com)
from Pro@programming.dev to technology@lemmy.world on 07 Jun 10:46
https://programming.dev/post/31778487

#technology


crank0271@lemmy.world on 07 Jun 11:14 next collapse

Interesting read. It captures a lot of how I feel and what I miss about the “old internet.”

Imgonnatrythis@sh.itjust.works on 07 Jun 13:01 next collapse

Good luck getting advertiser support for your “slow web”. Oh, wait…

dmajorduckie@lemmy.blahaj.zone on 07 Jun 13:24 collapse

I would question the assumption that advertising on the internet is a good thing.

rumimevlevi@lemmings.world on 07 Jun 13:19 next collapse

I don’t know about that. I don’t want to manage visiting dozens of websites.

Technically it is also possible to make interaction-free feeds with no like and share buttons.

rottingleaf@lemmy.world on 07 Jun 13:39 collapse

How’s visiting dozens of pages different from visiting dozens of websites?

And BTW, on sites where feeds are in fashion, maybe some kind of Usenet upgraded with HTML, Markdown, and post/author hyperlinks would be a better fit.

rumimevlevi@lemmings.world on 07 Jun 13:52 collapse

Visiting feeds is like using tools from one organized toolbox. Visiting many websites is like jumping between many separate toolboxes.

rottingleaf@lemmy.world on 07 Jun 14:05 collapse

No. You have a toolbox; it’s called a web browser. To tie the particular websites together you have a web ring, or your own bookmarks. There were also web catalogues.

rumimevlevi@lemmings.world on 07 Jun 14:15 collapse

Bookmarks are not intuitive enough for me, and RSS feeds are still feeds with no interaction features, which is what the writer of this article likes.

I am always for giving the most power to users. I like compromises like user settings, so people who want a feed with interactions can have one and those who don’t can disable it.

shiroininja@lemmy.world on 07 Jun 14:42 collapse

But why do we need interactive crap for everything? Comments and the like on articles are the worst. Not everybody needs to hear you; sometimes you’ve just got to take in information and process it.

Like, I literally maintain my own fleet of apps that give me just the article body and images, in a sorted feed. No ads. No links. Nothing. Even the links to other articles and so on in the middle of an article are too much. I hate that shit. Modern web page design is garbage and unreadable.

I don’t need to know Stacy from North Dakota’s thoughts on an article, because 99% of the time it’s toxic anyway. Or misinformed.

rottingleaf@lemmy.world on 07 Jun 15:21 next collapse

Modern web page design is garbage and unreadable.

Because it’s a “newspaper meets slot machine” design. Kills two birds with one stone: hijacking media (the censorship is invisible) and making money (also invisibly).

I don’t need to know stacy from North Dakota’s thoughts on an article because 99% of the time it’s toxic anyways. Or misinformed.

And also because not every place is supposed to be crawling with people.

catloaf@lemm.ee on 07 Jun 15:23 collapse

Interactivity seems to be a good thing. What brings you to participate here on Lemmy?

shiroininja@lemmy.world on 07 Jun 16:33 collapse

Reading content. I’m more of a lurker compared to most users.

otacon239@lemmy.world on 07 Jun 13:33 next collapse

I agree with everything here. The internet wasn’t always a constant amusement park.

I’m rather proud of my own static site

banana@thebrainbin.org on 07 Jun 17:54 next collapse

I like your pictures!

otacon239@lemmy.world on 07 Jun 18:01 collapse

Thank you!

MonkderVierte@lemmy.zip on 07 Jun 18:39 next collapse

Well…

[Screenshot: https://lemmy.zip/pictrs/image/62253105-a9a9-4b94-9167-dfcffcc674f0.webp]

otacon239@lemmy.world on 07 Jun 19:09 collapse

Maybe that’s a dark mode thing? I know Dark Reader breaks almost anything with an already dark theme.

MonkderVierte@lemmy.zip on 07 Jun 19:25 collapse

Lol, no. I made a usercss for this (currently not released) but explicitly disabled it here. But that one uses a base style that switches via @media prefers-color-scheme light/dark:

@media (prefers-color-scheme: dark) {
  :root {
    --text-color: #DBD9D9;
    --text-highlight: #232323;
    --bg-color: #1f1f1f;
    …
  }
}
@media (prefers-color-scheme: light) {
  :root {
    …
  }
}

Guess your site uses one of them too.

otacon239@lemmy.world on 07 Jun 20:08 collapse

I admit I used Publii for my builder. I can’t program CSS for crap. I’m far more geared towards backend dev.

AnarchistArtificer@lemmy.world on 07 Jun 19:24 next collapse

With respect to the presentation of your site, I like it! It’s quite stylish and displays well on my phone.

[deleted] on 07 Jun 22:04 collapse

.

ohshit604@sh.itjust.works on 07 Jun 22:04 next collapse

If you don’t mind me asking, how do you host your site?

otacon239@lemmy.world on 07 Jun 22:49 next collapse

I host it via docker+nginx on my own hardware.
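
A minimal sketch of that kind of docker+nginx setup for a static site (the service name, host path, and port below are placeholders, not the actual config) might look like:

```
services:
  web:
    image: nginx:alpine
    ports:
      - "80:80"                              # plus 443 if TLS is terminated here
    volumes:
      - ./public:/usr/share/nginx/html:ro    # the generated static site files
    restart: unless-stopped
```

nginx serves whatever is mounted at /usr/share/nginx/html by default, so the output folder of a static-site generator is all it needs.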

ohshit604@sh.itjust.works on 07 Jun 23:54 collapse

I’m in the same boat (sorta)!

Follow-up question: did you have trouble exposing ports 80 & 443 to the internet? Also, are you using Swarm or Kubernetes?

I have the Docker engine set up on a machine alongside Traefik (I’ve tried Nginx in the past), primarily using Docker Compose, and it works beautifully on the LAN. However, I can’t seem to figure out why I can’t connect over the internet; I’m forced to WireGuard/VPN into my home network to access my site.

No need to provide troubleshooting advice, just curious on your experience.

otacon239@lemmy.world on 08 Jun 14:03 collapse

I keep everything as flat as possible. Just the regular docker (+compose) package running on vanilla Debian. On the networking side, I’m lucky in that I have a government-run fiber provider that doesn’t care that much what I host, so it’s just using the normal ports.

I did previously use C*mcast, and I remember there was an extra step I had to do to get it to redirect port 80 to 443, but I couldn’t tell you what that step was anymore.
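
(For reference, when that redirect is handled by nginx itself rather than by the ISP gear, it’s usually just a small server block like the one below; the domain is a placeholder and this may not be the exact step in question.)

```
server {
    listen 80;
    server_name example.org;
    # Bounce every plain-HTTP request to the HTTPS site
    return 301 https://$host$request_uri;
}
```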

interdimensionalmeme@lemmy.ml on 08 Jun 01:11 collapse

Buy the cheapest laptop you can find; one with a broken screen is fine. Then:

1. Install Debian 12 on it and give it a memorable name, like “server”.
2. Go to a DNS registrar of your choice, maybe Porkbun, and buy your internet DNS name, for example “MyInternetWebsite.tv”. This will cost you $20/$30 for the rest of your life, or until we finally abolish the DNS system for something less extortionate.
3. Install Webmin and then Apache on it.
4. Go to your router and give the laptop a static address in the DHCP section. Some routers do not have the ability to apply a static DHCP lease to computers on your network; in that case it will be more complicated, or you will have to buy a new router, preferably one that supports OpenWrt.
5. Then go to port forwarding and forward ports 80 and 443 to the address of the static DHCP lease.
6. Now use PuTTYgen to create a private key, and copy the public key into the laptop’s /root/.ssh/authorized_keys file.
7. Go to the Webmin interface, which can be accessed at server.lan:10000 from any computer on your network, and set up dynamic DNS. This will make the DNS record for MyInternetWebsite.tv change when the IP of your internet connection changes, which can happen at any time but usually rarely does. You have to do it, though, or else when it changes your website and email will stop working.
8. Now go to your desktop computer, download winsshfs, put in your private key, and mount the folder /var/www/html/ to a drive letter like “T:”.

Now whatever you put in T: will be the content of your very own internet web server. Enjoy.
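
To make the server-side part of that concrete, a rough sketch of the commands involved (assuming Debian 12 and root access; the key string below is a placeholder) would be:

```
# Apache serves /var/www/html on port 80 out of the box.
apt update && apt install -y apache2
# Webmin is installed separately from its own repository; see webmin.com for the current steps.

# Let the PuTTYgen-generated key log in over SSH.
mkdir -p /root/.ssh && chmod 700 /root/.ssh
echo "ssh-ed25519 AAAA...placeholder... laptop-key" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
```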

ohshit604@sh.itjust.works on 08 Jun 04:06 collapse

While I appreciate the detailed response, I did make another comment letting OP know I’m in a similar situation to them: I use Docker Engine & Docker Compose for my self-hosting needs on a 13th-gen Asus NUC (i7 model) running Proxmox with a Debian 12 VM. My reverse proxy is Traefik and I am able to receive SSL certificates on ports 80/443 (I also have Fail2Ban set up); however, I can’t for the life of me figure out how to expose my containers to the internet.

On my iPhone over LTE/5G, trying my domain leads to an “NSURLErrorDomain” error, and my research into this error doesn’t give me much clarity. Edit: it appears to be a 503 error.

This is a snippet of my docker-compose.yml

```
services:
  homepage:
    image: ghcr.io/gethomepage/homepage
    hostname: homepage
    container_name: homepage
    networks:
      - main
    environment:
      PUID: 0 # optional, your user id
      PGID: 0 # optional, your group id
      HOMEPAGE_ALLOWED_HOSTS: my.domain,*
    ports:
      - '127.0.0.1:3000:3000'
    volumes:
      - ./config/homepage:/app/config # Make sure your local config directory exists
      - /var/run/docker.sock:/var/run/docker.sock #:ro # optional, for docker integrations
      - /home/user/Pictures:/app/public/icons
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homepage.rule=Host(`my.domain`)"
      - "traefik.http.routers.homepage.entrypoints=https"
      - "traefik.http.routers.homepage.tls=true"
      - "traefik.http.services.homepage.loadbalancer.server.port=3000"
      - "traefik.http.routers.homepage.middlewares=fail2ban@file"
      # - "traefik.http.routers.homepage.tls.certresolver=cloudflare"
      #- "traefik.http.services.homepage.loadbalancer.server.port=3000"
      #- "traefik.http.middlewares.homepage.ipwhitelist.sourcerange=127.0.0.1/32, 192.168.1.0/24, 172.18.0.0/16, 208.118.140.130"
      #- "traefik.http.middlewares.homepage.ipwhitelist.ipstrategy.depth=2"

  traefik:
    image: traefik:v3.2
    container_name: traefik
    hostname: traefik
    restart: unless-stopped
    security_opt:
      - no-new-privileges:true
    networks:
      - main
    ports:
      # Listen on port 80, default for HTTP, necessary to redirect to HTTPS
      - target: 80
        published: 55262
        mode: host
      # Listen on port 443, default for HTTPS
      - target: 443
        published: 57442
        mode: host
    environment:
      CF_DNS_API_TOKEN_FILE: /run/secrets/cf_api_token # note using _FILE for docker secrets
      # CF_DNS_API_TOKEN: ${CF_DNS_API_TOKEN} # if using .env
      TRAEFIK_DASHBOARD_CREDENTIALS: ${TRAEFIK_DASHBOARD_CREDENTIALS}
    secrets:
      - cf_api_token
    env_file: .env # use .env
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./config/traefik/traefik.yml:/traefik.yml:ro
      - ./config/traefik/acme.json:/acme.json
      #- ./config/traefik/config.yml:/config.yml:ro
      - ./config/traefik/custom-yml:/custom
      # - ./config/traefik/homebridge.yml:/homebridge.yml:ro
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.entrypoints=http"
      - "traefik.http.routers.traefik.rule=Host(`traefik.my.domain`)"
      #- "traefik.http.middlewares.traefik-ipallowlist.ipallowlist.sourcerange=127.0.0.1/32, 192.168.1.0/24, 208.118.140.130, 172.18.0.0/16"
      #- "traefik.http.middlewares.traefik-auth.basicauth.users=${TRAEFIK_DASHBOARD_CREDENTIALS}"
      - "traefik.http.middlewares.traefik-https-redirect.redirectscheme.scheme=https"
      - "traefik.http.middlewares.sslheader.headers.customrequestheaders.X-Forwarded-Proto=https"
      - "traefik.http.routers.traefik.middlewares=traefik-https-redirect"
      - "traefik.http.routers.traefik-secure.entrypoints=https"
      - "traefik.http.routers.traefik-secure.rule=Host(`my.domain`)"
      #- "traefik.http.routers.traefik-secure.middlewares=traefik-auth"
      - "traefik.http.routers.traefik-secure.tls=true"
      - "traefik.http.routers.traefik-secure.tls.certresolver=cloudflare"
      - "traefik.http.routers.traefik-secure.tls.domains[0].main=my.domain"
      - "traefik.http.routers.traefik-secure.tls.domains[0].sans=*.my.domain"
      - "traefik.http.routers.traefik-secure.service=api@internal"
      - "traefik.http.routers.traefik.middlewares=fail2ban@file"
```

Image of my port-forwarding rules (note: the 3000 internal/external port was me “testing”): https://sh.itjust.works/pictrs/image/fa56898b-d183-4fca-99ed-db4a2b3aaf2f.png


Edit: I should note the Asus Documentation for Port-forwarding mentions this:

  1. Port Forwarding only works within the internal network/intranet(LAN) but cannot be accessed from Internet(WAN).

(1) First, make sure that Port Forwarding function is set up properly. You can try not to fill in the [ Internal Port ] and [ Source IP ], please refer to the Step 3.

(2) Please check that the device you need to port forward to on the LAN has opened the port.

interdimensionalmeme@lemmy.ml on 09 Jun 01:02 collapse

Hi,

The internal port will also be the same as the external ports, 80 and 443. If the router is running in bridge mode, that would mean your DHCP, DNS and NAT are happening on the upstream router, which means you will have to go to the upstream router to set up the port forwarding.

It also depends on how it works internally with the VPN: it might try to forward the ports on the VPN’s IP address, and none of the VPNs I tried allowed forwarding ports 80 and 443.

With a Linux or OpenWrt router this could be as easy as the following:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.199:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 192.168.1.199:443

But the problem with store-bought routers is that every one of them has a different way of doing things, so it gets confusing really fast.

All of this confusion about port forwarding was engineered to discourage ordinary people from using their internet connection to host their own files and instead become cloud-dependent techno-serfs.

Another way would be to go on the LowEndTalk forum, obtain a VPS, and host your Apache server there. That would work, but you would be back to renting someone else’s computer (aka cloud bull), though it’s still better than paying Squarespace for it.

Keep at it, you’ll figure it out, it’s actually very easy once you know all the complicated bits, I do it all the time.

ohshit604@sh.itjust.works on 09 Jun 23:41 collapse

Once again, thank you for your insight! It truly does help a lot.

Today I learned that the VPN routing was the cause of my issues; I opted to expose my homelab to the WAN, tried to connect over LTE/5G, and was surprised to see it actually resolve!

I also learned Fail2Ban has failed me in this regard.

Unfortunately this now throws a wrench in my plans in regard to security, so I’m debating getting another piece of hardware and labelling one as “front end” and the other as “back end”, so that the “back end” doesn’t share the same public IP as the “front end”.

This has ignited a spark to rework my homelab!

interdimensionalmeme@lemmy.ml on 10 Jun 06:26 collapse

Realistically, you don’t need security; NAT alone is enough, since the packets have nowhere to go without port forwarding.

But IF you really want to build front-end security, here is my plan.

ISP bridge -> WAN port of an OpenWrt-capable router with a DSA-supported switch (that is almost all of them). Set all ports of the switch to VLAN mirroring mode, bridge the WAN and LAN sides, and put the Fail2Ban IP block list in the bridge.

LAN PORT 1 -> OpenWrt running inside a Proxmox LXC (NAT lives here) -> top-of-rack switch
LAN PORT 2 -> Snort IDS
LAN PORT 3 -> combined honeypot and traffic analyzer

Ports 2 & 3 detect malicious internet hosts and add them to the block list.

(And then multiple other OpenWrt LXCs running many, many VPN ports as alternative gateways; I switch a LAN host’s internet address by changing its default gateway.)
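
For illustration, switching a Linux host between those gateways can be a single iproute2 command; the address below is a placeholder standing in for one of the OpenWrt LXCs:

```
# Point this host's default route at a different gateway (e.g. one of the VPN LXCs).
ip route replace default via 192.168.1.2
```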

I run no internal VLANs; it’s all one LAN, because convenience is more important than security in my case.

PushButton@lemmy.world on 09 Jun 04:18 collapse

Beautiful, I bookmarked it.

Thank you for sharing.

shiroininja@lemmy.world on 07 Jun 14:37 next collapse

I think I wrote this. This is my philosophy for how the web should be. Social media shouldn’t be the main highway of the web, and the internet should be more of a place to visit, not an always-there presence.

MagicShel@lemmy.zip on 07 Jun 14:55 next collapse

One of the things I miss about web rings and recommended links is that it’s people who are passionate about a thing saying, “here are other folks worth reading on this.” Google is a piss-poor substitute for the recommendations of people you like to read.

The only problem with the slow web is that people write about what they are working on; they aren’t trying to exhaustively create “content”. By which I mean, they aren’t going to have every answer to every question. You read what’s there; you don’t go searching for what you want to read.

AnarchistArtificer@lemmy.world on 07 Jun 19:28 collapse

Something that I have enjoyed recently are blogs by academics, which often have a list of other blogs that they follow. Additionally, in their individual posts, there is often a sense of them being a part of a wider conversation, due to linking to other blogs that have recently discussed an idea.

I agree that the small/slow web stuff is more useful for serendipitous discovery than for searching for answers to particular queries (though I don’t consider that a problem with the small/slow web per se, rather with the poor ability to search for non-slop content on the modern web).

cupcakezealot@lemmy.blahaj.zone on 08 Jun 07:30 next collapse

the internet peaked in 2000

ExLisper@lemmy.curiana.net on 08 Jun 14:07 next collapse

I think this is the first time I’ve found a reasonable take on “how to fix the internet”. You can’t fix the corpo web. Most people just want constant updates and they don’t care about ads, bots, and AI slop. You can’t change their minds.

Saying “fuck it, I will just build my own thing and I don’t care if anyone sees it” is the right approach. A couple of times I thought about creating some guides (like a guide to public EV chargers in Spain) and I just gave up, because I realized I’m not going to win the SEO war and no one is going to view them. Why write guides if they aren’t helping anyone? I’m still not sure it makes sense to create guides, but it may be a good idea to create a simple site, post some photos, share a story. I will probably do it.

Carotte@sh.itjust.works on 09 Jun 19:18 collapse

Adding my voice to the chorus of “this is how I feel” because, well, it encapsulates exactly how I feel. The author’s personal website is now in my RSS reader under a new category: Slow Web.

If anyone has suggestions for more websites to add to that category, they’re more than welcome.