Fastest disk-space usage analyzer (for files), faster than ncdu?
from SpiderUnderUrBed@lemmy.zip to linux@lemmy.ml on 03 Sep 11:31
https://lemmy.zip/post/47807337
Ncdu takes ages to run on my system. It's only 500GB+ of storage space, but it takes roughly an hour (probably a bit less) to finish scanning. Is there any alternative that either constantly monitors my files, so it always knows the sizes and I can just navigate them, or is significantly faster than ncdu?
I’d like to know as well, but it seems strange for ncdu to take that long. I scan through terabytes within a few seconds.
Taking an hour doesn’t sound right. Is it a disk or solid state? Do you have an unusual amount of directory hierarchy?
If you have a disk, does it have SMART errors reported?
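One quick way to check (assuming the drive shows up as /dev/sda - adjust for your device):
sudo smartctl -a /dev/sda
Reallocated or pending sectors in that output would explain a scan crawling like this.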
Which filesystem are you using?
Yeah I don’t think this is an ncdu issue but something is broken with the OPs system.
There is Filelight in Plasma, but it’s only fast because it has access to Baloo, Plasma’s file index. I use ncdu extensively though. Lots of small files and folders take a long time, but if it’s big files and few folders it’s near instant.
Are you using ncdu or ncdu_2? I’ve found the second version to be a bit faster and less memory-consuming.
I’m using baobab here, it scans my 500GB in a few seconds
apps.gnome.org/Baobab/
Gdu is faster
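If you want to try it (assuming gdu is installed; ~ here is just an example path):
gdu ~
It scans directories in parallel, which is where most of the speed difference shows up on SSDs.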
Advice from a long time sysadmin: You’re probably asking the wrong question. ncdu is an efficient tool, so the right question is why it’s taking so long to complete, which is probably an underlying issue with your setup. There are three likely answers:
sudo find $(grep '^/' /etc/fstab | awk '{print $2}') -xdev -type f -exec dirname {} \; | sort | uniq -c | sort -nr | head
explanation
This command doesn’t give an exact file count, but it’s good enough for our purposes.
sudo find # run find as root
$( … ) # Run this in a subshell - it’s the list of mount points we want to search
grep '^/' /etc/fstab # Get the list of non-special local filesystems that the system knows how to mount (ignores many edge-cases)
awk '{print $2}' # We only want the second column - where those filesystems are mounted
-xdev # tell find not to cross filesystem boundaries
-type f # We want to count files
-exec dirname {} \; # Ignore the file name, just list the directory once for each file in it
sort | uniq -c # Count how many times each directory is listed (how many files it has)
sort -nr # Order by count descending
head # Only list the top 10
If they are temp files or otherwise not needed, delete them. If they’re important, figure out how to break it into subdirectories based on first letter, hash, or whatever other method the software creating them supports.
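If the first-letter route fits, a minimal sketch of the idea (purely illustrative - ./flatdir and ./sharded are hypothetical paths, and whatever software reads these files still has to be pointed at the new layout):
mkdir -p ./sharded
for f in ./flatdir/*; do
  [ -f "$f" ] || continue # only move regular files
  b=$(basename "$f")
  d=./sharded/$(printf '%.1s' "$b") # bucket = first character of the name
  mkdir -p "$d" && mv -- "$f" "$d/"
done
Hash-based sharding works the same way, just derive the bucket from a hash of the name instead of its first character.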
Is there a reason to not just use du? Or use either and just look at certain trees? Or just get a bigger drive so it does not matter?
Dua works pretty great for me
dua i is the command I use for an interactive session. I use it on my 4TB drive and it analyses everything in a few seconds, biggest directories first.
An hour is crazy, something definitely isn’t right.
That said, ncdu is still pretty slow; large scans can take several minutes if there are lots of small files.
I wish there was a WizTree equivalent for Linux that just loaded the MFT nearly instantly instead of scanning everything.
Why not make one?
Not a clue how tbh, I’m not much of a programmer.
MFT is specific to NTFS
dua i
If your filesystem is btrfs then use btdu. It doesn’t get confused by snapshots and shows you the current best estimates while it’s still in the process of sampling.
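A minimal way to run it (assuming your top-level btrfs volume is mounted at /mnt/btrfs-root - substitute your own mount point):
sudo btdu /mnt/btrfs-root
Because it samples the used space instead of walking every file, numbers appear immediately and refine the longer it runs.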
du-dust is a Rust crate, a high-performance disk usage tool. Scans terabytes in seconds.
I can confirm that.
Also use dust. Great for visualizing where in the directory tree all the bigger files lie.
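A typical invocation, assuming you want to look at your home directory and cap the depth so the output stays readable:
dust -d 3 ~
It prints the tree with usage bars, so the big directories stand out without any navigating.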
I'll echo everyone else: there are several good tools, but ncdu isn't bad. Pathological cases, already described, will cause every tool issues, because no filesystem provides any sort of rolled-up, constantly updated, per-directory sum of node sizes in the FS tree - at least, none I'm aware of. And it'd have to be done at the FS level; any tool watching every directory node in your tree to constantly update subtree sizes will eventually cause other performance issues.
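For a rough sense of why, a watcher like inotify needs one watch per directory (it isn't recursive), so you can compare how many watches a tree would need against the per-user limit - /home below is just an example path:
find /home -xdev -type d | wc -l # directories that would each need a watch
cat /proc/sys/fs/inotify/max_user_watches # the per-user ceiling
On a tree with enough directories to make ncdu crawl, that bookkeeping is exactly the kind of overhead meant above.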
It does sound as if you're having
~/.local
somewhere, IIRC). It's almost certainly one of those, two of which you can thank ncdu for bringing to your attention, one which is easily bypassed with a flag, and the last maybe just needing cleanup or exclusion.
I learn something new every day. I’ve been running
du -a | sort -rn | head
like some kind of animal. ncdu runs very fast on my systems and shows me what I want to see. Thanks!