cpudist(8) - System Manager's Manual
cpudist - On- and off-CPU task time as a histogram.
cpudist [-h] [-O] [-T] [-m] [-P] [-L] [-p PID] [interval] [count]
This measures the time a task spends on the CPU before being descheduled, and shows the times as a histogram. Tasks that spend a very short time on the CPU can indicate excessive context switches and poor workload distribution, and may point to a shared source of contention that keeps tasks switching in and out as it becomes available (such as a mutex).
Similarly, the tool can also measure the time a task spends off-CPU before it is scheduled again. This can be helpful in identifying long blocking and I/O operations, or alternatively very short descheduling times due to short-lived locks or timers.
This tool uses in-kernel eBPF maps for storing timestamps and the histogram, for efficiency. Despite this, the overhead of this tool may become significant for some workloads: see the OVERHEAD section.
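The measurement approach can be sketched in plain Python: record a timestamp when a task is switched onto the CPU, and at the next switch compute the elapsed time and add it to a power-of-two histogram slot, mirroring what the in-kernel eBPF program does with its timestamp map and histogram map. This is a simplified userspace model, not the actual BPF code; the event stream and function names below are illustrative.

```python
def log2_bucket(usecs):
    """Return the power-of-two histogram slot for a duration,
    as bcc's log2 histograms do (slot n covers [2**(n-1), 2**n - 1])."""
    return usecs.bit_length()

def oncpu_histogram(switch_events):
    """switch_events: list of (timestamp_us, prev_task, next_task) from a
    hypothetical sched_switch stream. Returns {slot: count} of on-CPU
    times, mirroring cpudist's start-timestamp map plus histogram map."""
    start = {}   # task -> timestamp when it was last scheduled on-CPU
    hist = {}    # log2 slot -> count
    for ts, prev_task, next_task in switch_events:
        if prev_task in start:                 # prev is leaving the CPU
            delta = ts - start.pop(prev_task)  # its on-CPU time, in us
            slot = log2_bucket(delta)
            hist[slot] = hist.get(slot, 0) + 1
        start[next_task] = ts                  # next is getting on-CPU
    return hist

# Illustrative event stream: task A runs 300 us, B runs 700 us, A runs 100 us.
events = [(0, "idle", "A"), (300, "A", "B"), (1000, "B", "A"), (1100, "A", "idle")]
print(oncpu_histogram(events))  # {9: 1, 10: 1, 7: 1}
```

Measuring off-CPU time (-O) only flips the bookkeeping: the timestamp is taken when a task leaves the CPU, and the delta is computed when it gets back on.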
Since this uses BPF, only the root user can use this tool.
CONFIG_BPF and bcc.
- -h
- Print usage message.
- -O
- Measure off-CPU time instead of on-CPU time.
- -T
- Include timestamps on output.
- -m
- Output histogram in milliseconds.
- -P
- Print a histogram for each PID (tgid from the kernel's perspective).
- -L
- Print a histogram for each TID (pid from the kernel's perspective).
- -p PID
- Only show this PID (filtered in kernel for efficiency).
- interval
- Output interval, in seconds.
- count
- Number of outputs.
- Summarize task on-CPU time as a histogram:
- # cpudist
- Summarize task off-CPU time as a histogram:
- # cpudist -O
- Print 1 second summaries, 10 times:
- # cpudist 1 10
- Print 1 second summaries, using milliseconds as units for the histogram, and include timestamps on output:
- # cpudist -mT 1
- Trace PID 185 only, 1 second summaries:
- # cpudist -p 185 1
- usecs
- Microsecond range
- msecs
- Millisecond range
- count
- How many times a task event fell into this range
- distribution
- An ASCII bar chart to visualize the distribution (count column)
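The distribution column is just the count scaled into a fixed-width ASCII bar. A minimal sketch of that rendering (the '*' fill matches the style of bcc's log2 histogram output, but this helper and its column widths are illustrative, not bcc's actual code):

```python
def render_row(low, high, count, max_count, width=40):
    """Format one histogram row: time range, count, and a bar whose
    length is the count scaled against the largest bucket."""
    bar = "*" * int(count * width / max_count) if max_count else ""
    return f"{low:>10} -> {high:<10} : {count:<8} |{bar:<{width}}|"

# Illustrative buckets: (range low, range high, count).
rows = [(0, 1, 0), (2, 3, 1), (4, 7, 4), (8, 15, 2)]
max_count = max(c for _, _, c in rows)
for low, high, count in rows:
    print(render_row(low, high, count, max_count))
```

The fullest bucket gets a full-width bar; everything else is scaled down proportionally, which is why the bars are comparable only within one histogram.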
This traces scheduler tracepoints, which can become very frequent. While eBPF has very low overhead, and this tool uses in-kernel maps for efficiency, the frequency of scheduler events for some workloads may be high enough that the overhead of this tool becomes significant. Measure in a lab environment to quantify the overhead before use.
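One cheap way to gauge that event frequency before deploying the tool is the system-wide context-switch counter in /proc/stat: its `ctxt` line counts all context switches since boot, so sampling it twice and dividing by the interval gives switches per second. The parsing helper below is illustrative:

```python
import time

def read_ctxt(stat_text):
    """Extract the total context-switch count from /proc/stat contents."""
    for line in stat_text.splitlines():
        if line.startswith("ctxt "):
            return int(line.split()[1])
    raise ValueError("no ctxt line found")

def ctxt_rate(interval=1.0):
    """Sample /proc/stat twice; return context switches per second (Linux only)."""
    with open("/proc/stat") as f:
        before = read_ctxt(f.read())
    time.sleep(interval)
    with open("/proc/stat") as f:
        after = read_ctxt(f.read())
    return (after - before) / interval

# Parsing an illustrative /proc/stat snippet:
sample = "cpu  1 2 3 4\nctxt 123456\nbtime 0\n"
print(read_ctxt(sample))  # 123456
```

A rate in the tens of thousands of switches per second or more suggests the tracepoint overhead is worth quantifying in a lab first, as the section above advises.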
This is from bcc.
Also look in the bcc distribution for a companion _example.txt file containing example usage, output, and commentary for this tool.
Unstable - in development.