df showing a full (99%) SSD, but du only showing a fraction of that? UPDATED
from flork@lemy.lol to linux@lemmy.ml on 30 Sep 01:46
https://lemy.lol/post/53218724

I noticed the root drive of my home server (Debian) is at 99% capacity, which was odd to me because I don’t store anything on the root SSD. sudo df -h confirms that 99% of my 256 GB drive is full. But sudo du -sh *, all added up, only comes to about 30 GB.

This is a PC that only runs Docker containers and one virtual machine for Home Assistant. And yes, I have restarted. Any ideas on how to find the missing 200+ gigabytes?

EDIT: sudo ncdu allowed me to find a 72 GB [long string of characters]-json.log file in /var/lib/docker/containers and many 1 GB+ files in /var/lib/docker/overlay2. I’m not sure what to do with this information (or what’s safe to delete) but I’m getting somewhere.
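
From what I can tell, those [long string of characters]-json.log files are the containers’ stdout/stderr logs from Docker’s default json-file log driver, and they grow without bound unless rotation is configured. One common approach (a sketch, assuming the default log driver; the 50m/3 limits are just example values) is to cap them in /etc/docker/daemon.json and restart the daemon; the limits only apply to containers created afterwards:

  {
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "50m",
      "max-file": "3"
    }
  }

$ sudo systemctl restart docker
$ sudo truncate -s 0 /var/lib/docker/containers/*/*-json.log   # reclaim existing logs; generally considered safe for the json-file driver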

#linux

wildbus8979@sh.itjust.works on 30 Sep 02:01

If you have enough room to install the ncdu command, it’s super helpful!
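
On Debian that should just be (assuming apt and the standard repos):

$ sudo apt install ncdu
$ sudo ncdu /   # interactive, sortable view of what’s eating the disk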

flork@lemy.lol on 30 Sep 16:00

This is a cool tool, thanks. Unfortunately it is reporting the same (far, far below 99%) number.

flork@lemy.lol on 30 Sep 16:29

Actually, running sudo ncdu allowed me to find a 72 GB [long string of characters]-json.log file in /var/lib/docker/containers and many 1 GB+ files in /var/lib/docker/overlay2. I’m not sure what to do with this information (or what’s safe to delete) but I’m getting somewhere.

wildbus8979@sh.itjust.works on 30 Sep 18:00

Isn’t that tool freaking nifty? I love ncdu. BTW, the -x flag is useful to make sure you stay on the same filesystem (useful if you have network shares or extra disks, and to avoid digging through stuff like /proc and /sys).

frongt@lemmy.zip on 30 Sep 02:12

df reports on filesystems, not drives.
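
For example, to see which filesystem (and device) a given path actually lives on:

$ df -hT /var/lib/docker   # Filesystem = backing device, Type = ext4/btrfs/etc., plus size and Use%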

db2@lemmy.world on 30 Sep 02:13

$ man fstrim
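
In short, something like this (it only helps if the missing space is stale TRIM accounting on the SSD, so treat it as a maybe):

$ sudo fstrim -av   # trim all mounted filesystems that support discard, verbosely
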
cmnybo@discuss.tchncs.de on 30 Sep 02:16

Did du give any permission errors? It can’t count the size of directories that it doesn’t have permission to access.
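
Running it as root and catching the errors separately makes it easier to see what was skipped:

$ sudo du -xsh /* 2>/tmp/du-errors.txt   # sizes on stdout, complaints collected in a file
$ cat /tmp/du-errors.txt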

flork@lemy.lol on 30 Sep 16:17

It did, yes: a few “invalid argument”, a few “permission denied”, and a few “no such file or directory”.

flork@lemy.lol on 30 Sep 16:31

UPDATE: sudo ncdu allowed me to find a 72 GB [long string of characters]-json.log file in /var/lib/docker/containers and many 1 GB+ files in /var/lib/docker/overlay2. I’m not sure what to do with this information (or what’s safe to delete) but I’m getting somewhere.

loweffortname@lemmy.blahaj.zone on 30 Sep 02:52

Check for mounts hiding the underlying drive?

Sometimes du . -x will help, too. (-x doesn’t cross mount points).
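
If you suspect files are shadowed underneath a mount point, a bind mount of / lets you peek under everything mounted on top (a sketch; /mnt/rootpeek is just an arbitrary name):

$ sudo mkdir -p /mnt/rootpeek
$ sudo mount --bind / /mnt/rootpeek   # the root filesystem, minus everything mounted over it
$ sudo du -xsh /mnt/rootpeek/*
$ sudo umount /mnt/rootpeek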

drhoopoe@lemmy.sdf.org on 30 Sep 04:56

Docker containers can eat a lot of space over time. When’s the last time you did a docker system prune? Be sure to read up on what it does before you try it.
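
Something like this shows where Docker thinks the space went before you delete anything:

$ docker system df      # per-category totals: images, containers, volumes, build cache
$ docker system prune   # removes stopped containers, unused networks, dangling images and build cache
# adding -a and/or --volumes reclaims much more, but read what they delete first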

flork@lemy.lol on 30 Sep 16:30

Thanks, that allowed me to clear up about 20 GB! Also, sudo ncdu allowed me to find a 72 GB [long string of characters]-json.log file and many 1 GB+ files in /var/lib/docker/overlay2. I’m not sure what to do with this information or if it’s safe to delete, but at least I’m getting somewhere.

Eideen@lemmy.world on 30 Sep 05:50

To help you, we need to understand your setup:

  • Show us the output of ‘mount’
  • Show us the output of ‘lsblk’
  • Show us the output of ‘fdisk -l /dev/sda’
  • Do you run snapshots?

flork@lemy.lol on 30 Sep 16:07

EDIT: sudo ncdu allowed me to find a 72 GB [long string of characters]-json.log file in /var/lib/docker/containers and many 1 GB+ files in /var/lib/docker/overlay2. I’m not sure what to do with this information (or what’s safe to delete) but I’m getting somewhere.

mount output a lot of text. I suspect this may be getting somewhere, but I’m not too experienced. There is a lot of overlay on /var/lib/docker/overlay2/[long string of numbers and letters]. I also see a lot (~25) of nsfs on /run/docker/netns/[string of letters and numbers] type nsfs (rw).

lsblk gave me my drives as expected, and fdisk gave fdisk: command not found.

Eideen@lemmy.world on 01 Oct 09:41

The point of showing the output is to help us understand your system, not to point directly at the issue. For example, mount will show which partition is mounted where.

Your system doesn’t have fdisk installed.

droopy4096@lemmy.ca on 30 Sep 06:29

I’d be curious to see du -i to see what’s going on with inodes. Alternatively, I did have an issue a long time ago with Docker containers, sparse files and a dirty disk. Force-running fsck resolved my issues in the past.

flork@lemy.lol on 30 Sep 16:10

du -i gave “invalid option” and fsck gave “command not found”.

EDIT: sudo ncdu allowed me to find a 72 GB [long string of characters]-json.log file in /var/lib/docker/containers and many 1 GB+ files in /var/lib/docker/overlay2. I’m not sure what to do with this information (or what’s safe to delete) but I’m getting somewhere.

droopy4096@lemmy.ca on 01 Oct 01:45

That’ll teach me to type in a hurry. I meant df, not du. Look up the man page for options.
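
i.e. something along these lines:

$ df -i /   # IUsed/IFree/IUse% columns; running out of inodes also produces “no space” errors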

Strit@lemmy.linuxuserspace.show on 30 Sep 06:49

Wouldn’t du -hs * only check the space used inside the folder you are in?

I’d check with sudo du -hs /* myself. Or use ncdu to get a visual representation.

flork@lemy.lol on 30 Sep 15:50

sudo du -hs /* looked like it began listing every file on the entire server.

Cyber@feddit.uk on 30 Sep 07:11

du -hs * won’t find “hidden” (.) files and folders; you’ll need a slightly different glob (which I will leave as an exercise for you / I don’t have that info here).

Also, both du and df can show different results depending on the underlying filesystem, e.g. btrfs (and maybe ZFS?) won’t show how much deduplication is happening.

You might be looking at sparse files too, and from memory you’ll need another option for du or df to report those correctly.
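
From memory, something along these lines (the .[!.]* glob catches hidden top-level entries, and --apparent-size exposes sparse files; “somefile” is just a placeholder):

$ sudo du -xsh /* /.[!.]* 2>/dev/null                # include hidden entries at the top level
$ du -h --apparent-size somefile && du -h somefile   # the two differ noticeably for sparse files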

flork@lemy.lol on 30 Sep 16:32

sudo ncdu allowed me to find a 72 GB [long string of characters]-json.log file and many 1 GB+ files in /var/lib/docker/overlay2. I’m not sure what to do with this information or what’s safe to delete, but I’m getting somewhere.

Cyber@feddit.uk on 30 Sep 19:04

Nice. Glad you’re on to something.

I can’t help you with the Docker files as I don’t use it, but there’s usually a way to find out who/what owns a file, so I hope the Docker utilities can tell you if it’s safe to delete.

syklemil@discuss.tchncs.de on 30 Sep 10:37

One more puzzle piece here is that du won’t report on files that have been marked for deletion but are still held on to by some process. There’s an lsof incantation to list those, but I can’t recall it off the top of my head.

It used to be part of sysadmin work to detect the processes holding on to large deleted files when df reported you were running out of space, and restart them to make them let go. But I haven’t done that in ages. And if you restarted the host OS, that should have taken care of it.

I assume you also know how to prune container resources.
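
(If memory serves, the incantation is roughly this; +L1 selects open files with a link count below one, i.e. deleted but still held open:)

$ sudo lsof -nP +L1   # deleted-but-open files, with sizes and the PIDs holding them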

flork@lemy.lol on 30 Sep 15:56

Good call on the Docker prune, I didn’t think about that. It accounted for about 25 GB. Still not enough, but at least I’m not at 99% anymore.

custard_swollower@lemmy.world on 30 Sep 12:43

As @Strit wrote, use sudo, as Docker keeps its container, image and volume files under /var/lib/docker, and that folder is not readable without sudo (or root).
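
To see what’s under there without guessing:

$ sudo du -xh --max-depth=1 /var/lib/docker | sort -h   # biggest subdirectories sort last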

flork@lemy.lol on 30 Sep 14:38

I did use sudo.

custard_swollower@lemmy.world on 30 Sep 14:42

What folder did you run it in?

[deleted] on 30 Sep 15:35

.

flork@lemy.lol on 30 Sep 16:29

Sorry, I believe I wasn’t actually in the root of the drive! sudo ncdu allowed me to find a 72 GB [long string of characters]-json.log file in /var/lib/docker/containers and many 1 GB+ files in /var/lib/docker/overlay2. I’m not sure what to do with this information (or what’s safe to delete) but I’m getting somewhere.

Ferk@lemmy.ml on 30 Sep 13:10

What’s your filesystem?

If you are using something like btrfs, for example, the usage reports can be misleading, and getting the exact size can be complicated… I’d recommend using more fs-specific tools, such as btdu.
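
If it does turn out to be btrfs, the standard btrfs-progs tools also give a more honest picture than du/df:

$ sudo btrfs filesystem usage /   # real allocation, including metadata and unallocated space
$ sudo btrfs subvolume list /     # subvolumes and snapshots that plain du won’t attribute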

flork@lemy.lol on 30 Sep 15:34

Would different filesystems really report a difference of over 200 GB?

Ferk@lemmy.ml on 30 Sep 15:58

In theory, it can. One possible reason with Btrfs might be that you are only mounting a subvolume, even though there are other files in the same filesystem (such as snapshots/copies of the subvolume kept for backup) that are not mounted.

Also, some tools like GParted do not handle btrfs disk usage very well and will display it as if the whole partition is 100% full.

neclimdul@lemmy.world on 01 Oct 07:00

Generally no, but in reality it could contribute. Some filesystems have weird behaviours in how they allocate space, so knowing which one you have can be useful to rule things out or suggest gotchas to look for.

lucas@startrek.website on 30 Sep 06:25

Where are you running du -sh *? (I.e. what directory? Are you definitely scanning the whole filesystem?) I’m sure it’s obvious, but it can never hurt to check!

What does du -sh / show? (Generally, the * glob pattern in the shell will not match hidden dot-files, so is it possible they are being excluded?)

flork@lemy.lol on 30 Sep 16:16

EDIT: sudo ncdu allowed me to find a 72 GB [long string of characters]-json.log file in /var/lib/docker/containers and many 1 GB+ files in /var/lib/docker/overlay2. I’m not sure what to do with this information (or what’s safe to delete) but I’m getting somewhere.

~~sudo du -sh / shows three kinds of errors (du: cannot access ‘/run/user/1000/gvfs’: Permission denied, du: cannot access: No such file or directory, and a few cannot read directory: Invalid argument), and at the end it shows 5.4T /, which I assume is my root drive combined with what’s in /mnt.~~

antsu@discuss.tchncs.de on 01 Oct 07:09

If you’re using Btrfs, check whether you have any stray snapshots.
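
A quick way to check (assuming btrfs-progs is installed):

$ sudo btrfs subvolume list -s /   # -s lists only snapshot subvolumes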