Thread counts are one of those numbers that prompt a lot of "is this normal?" questions. You see a Java process with 300 threads and wonder if the application is misbehaving. Usually it isn't. This guide explains how Linux threads work, how to read them from standard tools, and how to tell the difference between expected behavior and a real leak.

How Linux handles threads

On Linux, threads are implemented as lightweight processes: at the scheduling level the kernel doesn't distinguish between them and regular processes. Each thread gets its own entry in the process table and its own thread ID (TID), and all threads of a process share its address space. TIDs come from the same number space as PIDs, which is why some tools display them as PIDs.

By default ps shows one line per process, so a plain ps aux actually understates how much the kernel is scheduling. Flags like -L and -T expand the output to individual threads, and that is when the entry count jumps well past what you expect.
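
You can see the two IDs side by side with ps; the main thread's LWP (thread ID) equals the process's PID:

# One line per thread: LWP is the TID, NLWP is the per-process thread count
ps -eLo pid,lwp,nlwp,comm | head -15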

Viewing thread counts

ps with thread info

# Show thread count per process (NLWP column)
ps -eo pid,nlwp,comm --sort=-nlwp | head -20

# Show individual threads for a process
ps -T -p PID

# All threads system-wide
ps -eLf | head -30

The NLWP column is "Number of Light-Weight Processes" — Linux's term for threads. The -T flag expands threads for a specific PID.
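
For a JVM, ps -T output looks roughly like this (illustrative; SPID is the thread ID, and thread names vary by application and JVM version):

  PID  SPID TTY          TIME CMD
12345 12345 ?        00:00:07 java
12345 12346 ?        00:02:31 GC Thread#0
12345 12352 ?        00:00:44 C2 CompilerThre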

top with threads

# Press H while top is running to toggle thread display
top -H

# Or start top in thread mode immediately
top -H -p PID

In thread mode, each line in top represents an individual thread. The VIRT, RES, and SHR columns are process-wide: every thread shares the same address space, so all threads of one process show identical numbers, not per-thread allocations. Don't add them up.

/proc/PID/status

cat /proc/PID/status | grep -E 'Threads|Pid|Name'

This gives you:

Name:   java
Pid:    12345
Threads: 287

The Threads field is the authoritative count for a process. It's read directly from the kernel and doesn't require any parsing of ps output.
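
If you just want the number, say for a monitoring script, awk can pull it out directly:

# Print only the thread count
awk '/^Threads:/ {print $2}' /proc/PID/status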

Counting via /proc directly

# Thread count for a specific PID
ls /proc/PID/task | wc -l

# Thread counts for all processes, sorted
for pid in /proc/[0-9]*/; do
  pid_num=$(basename "$pid")
  threads=$(ls "$pid/task" 2>/dev/null | wc -l)
  name=$(cat "$pid/comm" 2>/dev/null)
  echo "$threads $pid_num $name"
done | sort -rn | head -20

Each subdirectory under /proc/PID/task/ corresponds to one thread. Counting them is more reliable than parsing text output from ps.

System-wide thread limits

Linux has a maximum number of threads across the entire system:

# Current thread limit
cat /proc/sys/kernel/threads-max

# Current thread count system-wide
cat /proc/loadavg   # 4th field: runnable/total kernel scheduling entities (threads)

# More detail: one line per thread, plus a header line
ps -eLf | wc -l

The default threads-max is calculated from available RAM at boot: each thread needs a kernel stack (8–16 KB on x86_64, not to be confused with the 8 MB user-space stack discussed below), and the kernel caps the count so those structures can't consume more than a fraction of memory. On a system with 16 GB of RAM that works out to a default in the low hundreds of thousands, so hitting the limit is rare but not impossible for high-throughput services.
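
If a machine genuinely needs more, the limit is tunable at runtime. A sketch, with the value picked arbitrarily for illustration:

# Raise the system-wide cap (run as root; add to /etc/sysctl.conf to persist)
sysctl -w kernel.threads-max=200000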

Per-process limits are governed by ulimit:

# Check thread/process limit for current shell
ulimit -u

# Check limits for a running process
cat /proc/PID/limits | grep -i "max processes"
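
The soft limit can be raised up to the hard limit in the shell that launches the service. A minimal sketch, assuming the hard limit permits it:

# Raise the per-user process/thread soft limit for this shell and its children
ulimit -u 16384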

What's a normal thread count?

This depends heavily on the application:

Application Type     | Typical Thread Count | Notes
---------------------|----------------------|------------------------------------------
nginx worker         | 1–2                  | Event-driven, one thread per worker process
PostgreSQL backend   | 1                    | Process-per-connection model, not threaded
Java application     | 50–500+              | JVM creates threads for GC, JIT, thread pools
Go service           | 4–20                 | Uses goroutines on an M:N model; OS thread count stays low
Python (CPython)     | 2–10                 | GIL limits true parallelism; threads used for I/O
Node.js              | 4–8                  | libuv thread pool + main event loop
MySQL/InnoDB         | 30–100+              | Thread per connection + background threads

A JVM with 300 threads is not necessarily broken. The JVM spawns threads for garbage collection, JIT compilation, management beans, and whatever thread pools the application configures. That number can look alarming to someone unfamiliar with the JVM, but it's often expected.
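
To see where those threads come from, a thread dump names each one. With JDK tools on the box, jcmd takes a dump without restarting anything:

# List thread names from a full dump; GC, JIT, and pool threads are all labeled
jcmd PID Thread.print | grep '^"' | head -20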

When high thread count is actually a problem

Thread counts become concerning when:

  • The count is growing without bound. A stable application should have a roughly stable thread count. Use watch -n 5 'cat /proc/PID/status | grep Threads' and see if it climbs over time without leveling off.
  • Threads are not being cleaned up. Thread leaks happen when threads are created but never joined or terminated — common in naive thread pool implementations or when exception handling is wrong.
  • You're approaching ulimit. If a process is hitting its thread limit it will fail to create new ones, which often surfaces as "unable to create new native thread" errors in logs (especially in Java).
  • CPU is high but work isn't getting done. Many threads competing for a lock (lock contention) can cause high CPU usage through context switching while actual throughput drops; see the pidstat sketch after this list.
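
pidstat (from the sysstat package) makes that last symptom measurable: high nvcswch/s (involuntary context switches) across many threads of one process is a classic contention signature.

# Per-thread context-switch rates for PID, sampled every 5 seconds
pidstat -wt -p PID 5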

Investigating a thread leak

# Watch thread count over time for PID 12345
watch -n 5 'ls /proc/12345/task | wc -l'

# If it's Java, get a thread dump
kill -3 PID        # prints to stdout/log
# or with jstack
jstack PID | grep "java.lang.Thread.State" | sort | uniq -c

# For any process, check thread states via /proc
for tid in /proc/PID/task/*/; do
  cat "${tid}status" 2>/dev/null | grep -E 'Name|State'
done | paste - -
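
To spot pathologies at a glance, such as hundreds of threads stuck in D (uninterruptible sleep), aggregate the states:

# Count threads by state: R running, S sleeping, D uninterruptible wait
grep -h '^State:' /proc/PID/task/*/status | sort | uniq -c | sort -rn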

Thread stack size and memory

Each thread allocates a stack. The default stack size on Linux is 8 MB:

ulimit -s    # shows stack size in KB (8192 = 8MB)

A process with 500 threads has potentially 4 GB of virtual address space reserved for stacks alone — though actual physical RAM usage depends on how deeply the stack is used. If you have a JVM with hundreds of threads and limited RAM, reducing the stack size can help:

# Set JVM thread stack to 512KB instead of 1MB default
java -Xss512k -jar application.jar
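
For non-JVM processes, the same effect comes from lowering the soft stack limit before launch, since glibc sizes new thread stacks from it. A sketch, with a hypothetical binary name:

# Launch with 2 MB default thread stacks instead of 8 MB
( ulimit -s 2048; exec ./my-server )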

Virtual vs physical memory for threads: Don't panic when top shows a Java process using 8 GB of virtual memory (VIRT). Most of that is reserved address space, not actual RAM. Watch RSS (Resident Set Size) and the system-wide free -h output instead.
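
A quick way to compare the two for one process (VSZ and RSS are reported in KB):

# Virtual size vs resident set, plus thread count
ps -o pid,vsz,rss,nlwp,comm -p PID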
