tcptop(8)                     System Manager's Manual                    tcptop(8)
NAME
tcptop - Summarize TCP send/recv throughput by host. Top for TCP.
SYNOPSIS
tcptop [-h] [-C] [-S] [-p PID] [--cgroupmap MAPPATH]
       [--mntnsmap MAPPATH] [interval] [count] [-4 | -6]
DESCRIPTION
This is top for TCP sessions.

This summarizes TCP send/receive Kbytes by host, and prints a summary that refreshes, along with other system-wide metrics.
This uses dynamic tracing of kernel TCP send/receive functions, and will need to be updated to match kernel changes.
The traced TCP functions are usually called at a lower rate than per-packet functions, and therefore have lower overhead. The traced data is summarized in-kernel using a BPF map to further reduce overhead. At very high TCP event rates, the overhead may still be measurable. See the OVERHEAD section for more details.
Since this uses BPF, only the root user can use this tool.
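As a rough, minimal sketch of this technique (not the tool's actual source), a bcc Python program can attach kprobes to the kernel's TCP send/receive path and sum byte counts in-kernel in BPF hash maps. The function names tcp_sendmsg() and tcp_cleanup_rbuf() and the per-PID keying here are illustrative assumptions that track current kernels and may need updating:

    #!/usr/bin/env python
    # Sketch only: kprobes on the TCP send/receive path, bytes summed
    # in-kernel in BPF hash maps (the technique tcptop uses).
    from time import sleep
    from bcc import BPF

    b = BPF(text="""
    #include <uapi/linux/ptrace.h>

    BPF_HASH(sent_bytes, u32, u64);   // PID -> bytes sent
    BPF_HASH(recv_bytes, u32, u64);   // PID -> bytes received

    int kprobe__tcp_sendmsg(struct pt_regs *ctx, void *sk, void *msg,
                            size_t size)
    {
        u32 pid = bpf_get_current_pid_tgid() >> 32;
        sent_bytes.increment(pid, size);
        return 0;
    }

    // tcp_cleanup_rbuf() sees how many bytes were copied to user space
    int kprobe__tcp_cleanup_rbuf(struct pt_regs *ctx, void *sk, int copied)
    {
        u32 pid = bpf_get_current_pid_tgid() >> 32;
        if (copied > 0)
            recv_bytes.increment(pid, copied);
        return 0;
    }
    """)

    print("Tracing TCP send/recv... Ctrl-C to end.")
    while True:
        try:
            sleep(1)
        except KeyboardInterrupt:
            break
    # Read the in-kernel sums once, in user space
    for k, v in sorted(b["sent_bytes"].items(), key=lambda kv: -kv[1].value):
        print("PID %-7d sent %d KB" % (k.value, v.value // 1024))

Note that the expensive per-event work (counting) happens entirely in the kernel; user space only reads the aggregated map at each interval.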
REQUIREMENTS
CONFIG_BPF and bcc.
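One way to verify the kernel config (the config file location varies by distribution; /proc/config.gz is another common place):

    # grep -E 'CONFIG_BPF=|CONFIG_BPF_SYSCALL=' /boot/config-$(uname -r)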
OPTIONS
- -h
- Print USAGE message.
- -C
- Don't clear the screen.
- -S
- Don't print the system summary line (load averages).
- -p PID
- Trace this PID only.
- --cgroupmap MAPPATH
- Trace cgroups in this BPF map only (filtered in-kernel). See the sketch after this list for one way to create such a map.
- --mntnsmap MAPPATH
- Trace mount namespaces in this BPF map only (filtered in-kernel).
- interval
- Interval between updates, seconds (default 1).
- count
- Number of interval summaries (default is many).
- -4
- Trace IPv4 family only.
- -6
- Trace IPv6 family only.
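For --cgroupmap, the pinned map is a BPF hash keyed by 64-bit cgroup id. One illustrative way to create it with bpftool (the path, sizes, and name here are assumptions; see special_filtering.md in the bcc sources for the authoritative steps):

    # bpftool map create /sys/fs/bpf/test01 type hash key 8 value 8 entries 128 name cgroupset flags 0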
EXAMPLES
- Summarize TCP throughput by active sessions, 1 second refresh:
- # tcptop
- Don't clear the screen (rolling output), and 5 second summaries:
- # tcptop -C 5
- Trace PID 181 only, and don't clear the screen:
- # tcptop -Cp 181
- Trace a set of cgroups only (see special_filtering.md from bcc sources for more details):
- # tcptop --cgroupmap /sys/fs/bpf/test01
- Trace IPv4 family only:
- # tcptop -4
- Trace IPv6 family only:
- # tcptop -6
OVERHEAD
This traces all TCP send/receives, high in the TCP/IP stack (close to the application), where functions are usually called at a lower rate than per-packet functions, lowering overhead. It also summarizes data in-kernel to further reduce overhead. These techniques help, but there may still be measurable overhead at high send/receive rates, eg, ~13% of one CPU at 100k events/sec. Use funccount to count the kprobes in the tool to find out this rate, as the overhead is relative to the rate. Some sample production servers tested found total TCP event rates of 4k to 15k per second, and the CPU overhead at these rates ranged from 0.5% to 2.0% of one CPU. If your send/receive rate is low (eg, <1000/sec), then the overhead is expected to be negligible. Test in a lab environment first.
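For example, assuming the tool's kprobes are on tcp_sendmsg() and tcp_cleanup_rbuf() (check your version of the tool for the actual probe points), the per-second event rate can be measured with funccount from bcc:

    # funccount -i 1 tcp_sendmsg
    # funccount -i 1 tcp_cleanup_rbuf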
SOURCE
This is from bcc.

https://github.com/iovisor/bcc

Also look in the bcc distribution for a companion _examples.txt file containing example usage, output, and commentary for this tool.
STABILITY
Unstable - in development.
INSPIRATION
top(1) by William LeFebvre