Disk full alerts at 3 AM are a rite of passage. The fastest way to recover is knowing exactly which commands to run, and in what order, to find the culprit. This guide covers the complete workflow, from the initial check to cleanup.

Start with df

Before drilling down, confirm which filesystem is actually full:

df -h

Note the filesystem path (e.g. /, /var, /home) — that's where you'll focus your search. If multiple filesystems are mounted, you can skip checking the ones with plenty of space.
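If you already suspect a path, you can point df at it directly to see which filesystem it lives on:

# Which filesystem holds this path, and how full is it?
df -h /var/log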

Disk full but df shows free space? Check inodes: df -i. A filesystem can run out of inodes (the fixed pool of per-file metadata slots) while still having free blocks; writes then fail with "No space left on device" even though df -h looks fine. This happens most often in directories with millions of tiny files, like mail spools or session directories.
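To track down where the inodes went, count files per directory. A minimal sketch, assuming the full filesystem is /var (adjust the path to match your df -i output):

# Inode usage per filesystem
df -i

# Count entries under each top-level directory of the suspect filesystem
for d in /var/*/; do
  printf '%s %s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -rn | head -10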

Find large directories with du

du (disk usage) measures how much space directories actually consume on disk. Start broad and drill down:

# Top-level directories on the full filesystem
du -sh /* 2>/dev/null | sort -rh | head -15

# Drill into /var if that's where the problem is
du -sh /var/* 2>/dev/null | sort -rh | head -15

# One more level
du -sh /var/log/* 2>/dev/null | sort -rh | head -15

The -s flag gives you a summary (one line per argument), -h makes it human-readable, and piping through sort -rh puts the largest directories first. The 2>/dev/null silences permission errors on directories you can't read.

For a recursive view with a depth limit:

# Show directories up to 2 levels deep, largest first
du -h --max-depth=2 /var | sort -rh | head -20
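One portability note: --max-depth is a GNU du option. On BSD and macOS, use -d instead (and note that older sort builds there may lack -h):

# BSD/macOS equivalent of the command above
du -h -d 2 /var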

Find large files with find

Once you've narrowed down the directory, find the specific files:

# Files larger than 500MB anywhere on the system
find / -type f -size +500M 2>/dev/null

# Files larger than 100MB in /var, sorted by size
find /var -type f -size +100M -exec du -h {} + 2>/dev/null | sort -rh

# Files modified in the last 24 hours larger than 50MB
find /var -type f -size +50M -mtime -1 2>/dev/null

The -mtime -1 flag is useful when a runaway process is writing constantly — it narrows you to files that have grown recently.
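If you suspect a process is writing right now, watching the candidate sizes for a minute makes the growth obvious (the glob below is illustrative; point it at whatever find turned up):

# Re-check sizes every 5 seconds; an actively growing file stands out
watch -n 5 'du -sh /var/log/*.log'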

ncdu — the interactive option

ncdu (NCurses Disk Usage) is a terminal UI that makes this whole process much faster. If it's not installed:

# Debian/Ubuntu
apt install ncdu

# RHEL/CentOS/Fedora (RHEL and CentOS may need the EPEL repository)
dnf install ncdu   # or yum on older releases

Then run it:

# Scan a specific directory
ncdu /var

# Scan the entire filesystem (exclude other mounts)
ncdu -x /

Use arrow keys to navigate, d to delete a file or directory directly (with confirmation), and q to quit. The -x flag is important on production systems — it keeps ncdu on the current filesystem and won't accidentally scan NFS mounts or other attached storage.
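ncdu can also save a scan to a file and browse it later, which is handy on busy production hosts where you only want to pay the scan cost once:

# Scan once, export the results (filename is your choice)
ncdu -x -o /tmp/scan.ncdu /

# Browse the export later, or copy it to another machine first
ncdu -f /tmp/scan.ncdu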

Common culprits: log files

Logs are one of the most frequent causes of unexpected disk growth. A few places to check:

# Large files in /var/log
find /var/log -type f -size +100M -exec ls -lh {} +

# Uncompressed logs over 50MB (rotated logs end in .gz when
# logrotate compression is configured)
find /var/log -type f -not -name "*.gz" -size +50M
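If that turns up large rotated-but-uncompressed logs, compressing them by hand buys space immediately (the filename here is hypothetical):

# Compress an old log that nothing is still writing to
gzip /var/log/myapp.log.1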

# Journal disk usage
journalctl --disk-usage

If the systemd journal is large, you can trim it:

# Keep only the last 2 weeks
journalctl --vacuum-time=2weeks

# Keep under 500MB total
journalctl --vacuum-size=500M
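To cap the journal permanently rather than trimming it by hand, set a size limit in journald's config (the value here is an example; pick what fits your disk):

# /etc/systemd/journald.conf
[Journal]
SystemMaxUse=500M

Apply it with systemctl restart systemd-journald.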

Files deleted but still held open

A classic gotcha: a file is deleted but a process still has it open, so the disk space isn't released. df shows the disk full, but du can't find anything large. Check with lsof:

# Find deleted files still held open
lsof | grep '(deleted)'

# Show size (in bytes) and path of deleted-but-open files
lsof | grep '(deleted)' | awk '{print $7, $9}' | sort -rn | head -20

The fix is either to restart the process holding the file open or, if restarting isn't an option (e.g. a database), to truncate the file in place:

# Truncate without closing (use the /proc fd path)
# Find the fd number from lsof output first
> /proc/PID/fd/FD_NUMBER
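For example, if the lsof output shows PID 1234 holding the deleted file with an FD column of 3w (the number is the fd, the letter is the open mode), the command would look like this. Both values are placeholders; substitute your own:

# PID 1234 and fd 3 are hypothetical values taken from the lsof output
> /proc/1234/fd/3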

Docker taking up space

On Docker hosts, images, stopped containers, and volumes often quietly consume tens of gigabytes:

# See Docker disk usage breakdown
docker system df

# Remove stopped containers, dangling images, unused networks, and build cache
docker system prune

# More aggressive: remove all unused images (not just dangling)
docker system prune -a
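Note that none of the prune commands above remove volumes by default. If docker system df shows the space is in volumes, add the flag explicitly:

# Also remove volumes not used by any container (destructive; check first)
docker system prune --volumes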
