Linux DJ

pw-top Walkthrough: Diagnosing XRuns, CPU Spikes, and Bad Nodes in Live Sessions (2026 Edition)

pw-top tells you exactly what is happening inside PipeWire's processing graph. Here is how to read it, what the columns mean, and how to find the node that is causing your XRuns.

pw-top is the single most useful diagnostic tool for PipeWire audio. It shows you the processing graph in real time: which nodes are running, how much of the quantum budget each one consumes, and which one is causing your XRuns. Despite this, most PipeWire troubleshooting guides barely mention it, and when they do, they do not explain what the numbers mean.

This is a column-by-column walkthrough of pw-top output, how to read it under pressure (mid-session, mid-recording), and how to identify the exact node that is ruining your audio.

Launching pw-top

pw-top

That is it. No arguments needed for the default view. It refreshes once per second and shows all active nodes in the PipeWire graph.

For non-interactive output (useful in scripts and bug reports), use batch mode; recent builds also accept -n to limit the number of iterations:

pw-top -b       # batch mode, prints to stdout
pw-top -b -n 1  # batch mode, single snapshot

The display looks something like this:

S  ID QUANT  RATE   WAIT   BUSY   W/Q   B/Q  ERR  NAME
D  36   256 48000  1.3ms  1.1ms  0.25  0.21    0  ALSA Playback (hw:0)
R  52   256 48000    -      -      -     -     0  Ardour:Master/audio_out 1
R  54   256 48000    -      -      -     -     0  Ardour:Master/audio_out 2
R  41   256 48000    -    0.8ms    -   0.15    0  Brave (Playback)

The line marked with D in the status column is the driver. Everything else is a follower node.

Column-by-column breakdown

S (Status)

Value  Meaning
R      Running - actively processing audio
I      Idle - exists but not processing
D      Driver - this node drives the graph timing
-      Suspended

The driver node is critical. It sets the clock for the entire graph. Typically this is your ALSA playback device. If the driver shows errors, every other node is affected.

ID

The PipeWire node ID. Use this with pw-cli and pw-dump to get detailed information about a specific node:

pw-cli info 36

Or dump the full node properties:

pw-dump | jq '.[] | select(.id == 36)'

QUANT

The quantum (buffer size) this node is running at. In a healthy graph, all nodes should show the same quantum as the driver. If a node shows a different quantum, PipeWire is resampling to match, which adds latency and CPU overhead.

If you followed the quantum selection method and set a forced quantum, all nodes should match.
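
Quantum and rate together define the per-cycle time budget: period = quantum / rate. Here is a quick way to compute that budget for common quantum sizes at 48 kHz:

```shell
# Per-cycle budget in milliseconds: quantum / rate * 1000
for q in 64 128 256 512; do
  awk -v q="$q" 'BEGIN { printf "quantum %3d @ 48000 Hz = %5.2f ms\n", q, q / 48000 * 1000 }'
done
```

Everything a node does in one cycle - WAIT plus BUSY - has to fit inside that budget.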

RATE

The sample rate in Hz. Same principle as quantum - all nodes should match the driver. A mismatch means PipeWire is running a sample rate converter for that node. This is common with PulseAudio applications that request 44100 Hz when the device is running at 48000 Hz.

WAIT

The time the node spent waiting between being woken up and starting to process. This is scheduling latency - the gap between "PipeWire signals this node to run" and "the node actually starts running."

High WAIT values indicate scheduling problems:

  • WAIT consistently above 1 ms at quantum 128 (2.67 ms budget): your real-time scheduling is probably not working correctly
  • WAIT spikes: something is preempting the audio thread temporarily
  • WAIT equals or exceeds the quantum period: guaranteed XRun

If WAIT is high, the issue is not in PipeWire or your plugins. It is in the kernel scheduler. Check your real-time scheduling configuration.
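
A quick first check is whether PipeWire actually got real-time scheduling. This sketch uses chrt from util-linux and ps; the process name pipewire is the default and may differ on your system:

```shell
# Scheduling class of each PipeWire process (main thread only).
for pid in $(pgrep pipewire); do
  chrt -p "$pid"
done
# Per-thread view: the data threads should show FF or RR with a
# non-zero rtprio, not TS (SCHED_OTHER).
ps -eLo pid,tid,cls,rtprio,comm | grep -i pipewire || echo "no pipewire threads found"
```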

BUSY

The time the node spent actually doing work - running plugins, mixing, converting formats. This is your processing time.

BUSY directly tells you how much of the CPU budget this node consumes. If one node's BUSY time is close to the quantum period, that node is your bottleneck.

W/Q (Wait / Quantum)

WAIT time as a fraction of the quantum period. This normalizes the wait time so you can compare across different quantum settings.

W/Q value  Interpretation
0.00-0.10  Excellent scheduling response
0.10-0.30  Normal, healthy
0.30-0.50  Elevated, investigate if combined with XRuns
0.50-0.80  Problematic, scheduling issues likely
0.80+      Critical, XRuns imminent or occurring

B/Q (Busy / Quantum)

BUSY time as a fraction of the quantum period. This tells you how much of the processing budget this node uses.

B/Q value  Interpretation
0.00-0.30  Light load, plenty of headroom
0.30-0.60  Moderate load
0.60-0.80  Heavy load, limited headroom
0.80-1.00  Near saturation, XRuns likely under any additional load
1.00+      Overrun, this node is exceeding the quantum budget

The important number is W/Q + B/Q for each node. If the combined value approaches 1.0, the node is using the entire quantum period and there is no margin left.
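
A quick sanity check on any node, using the driver's W/Q and B/Q from the sample output above (0.25 and 0.21):

```shell
# Combined cycle load (W/Q + B/Q) and remaining headroom for one node.
awk -v wq=0.25 -v bq=0.21 'BEGIN {
  total = wq + bq
  printf "combined load %.2f, headroom %.0f%%\n", total, (1 - total) * 100
}'
```

A combined load of 0.46 leaves 54% of the period free - comfortable. Repeat the arithmetic for whichever node you suspect.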

ERR

The cumulative XRun count for this node since PipeWire started (or since the node was created). This is the number you watch when testing quantum values.

A non-zero ERR on the driver line means the ALSA device reported an XRun. A non-zero ERR on a follower node means that specific node failed to complete processing within the quantum period.

Finding the node that is eating your budget

Here is the practical procedure when you have XRuns during a session:

Step 1: Open pw-top in a terminal alongside your DAW.

Step 2: Look at the ERR column. Find the node with a non-zero or increasing ERR count.

Step 3: Look at that node's B/Q. If B/Q is above 0.8, the node's processing is too heavy for the current quantum. The fix is either to increase the quantum or reduce the processing load (disable a plugin, lower an oversampling setting, freeze a track).

Step 4: If B/Q is low but W/Q is high, the problem is not processing time but scheduling. The node had time to do the work but did not get to run soon enough. Fix your real-time scheduling.

Step 5: If both B/Q and W/Q are moderate but the driver line shows ERR, the issue might be USB or hardware-level. The ALSA device itself is reporting XRuns that are not caused by PipeWire graph processing.
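
Steps 1 and 2 are easy to script for post-session review. This sketch filters a batch snapshot down to nodes with a non-zero ERR count; the column positions are assumptions based on the layout shown above and can shift between PipeWire versions, so verify against your own header line:

```shell
# Print node IDs with non-zero ERR (9th column in the layout above).
# "$9 + 0" forces a numeric comparison so header lines are skipped.
sample='S  ID QUANT  RATE   WAIT   BUSY   W/Q   B/Q  ERR  NAME
D  36   128 48000  0.3ms  2.1ms  0.11  0.79    3  ALSA Playback
R  52   128 48000    -    2.0ms    -   0.75    3  Ardour:out
R  41   128 48000    -    0.1ms    -   0.04    0  Firefox'
printf '%s\n' "$sample" | awk '$9 + 0 > 0 { printf "node %s: %d xruns\n", $2, $9 }'
# Against a live session:  pw-top -b | awk '$9 + 0 > 0'
```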

Real examples

Example 1: Heavy plugin load

S  ID QUANT  RATE   WAIT   BUSY   W/Q   B/Q  ERR  NAME
D  36   128 48000  0.3ms  2.1ms  0.11  0.79    3  ALSA Playback
R  52   128 48000    -    2.0ms    -   0.75    3  Ardour:out
R  41   128 48000    -    0.1ms    -   0.04    0  Firefox

Ardour is consuming 75% of the quantum budget (B/Q 0.75) at quantum 128. The driver shows 3 XRuns. Solution: increase quantum to 256, or reduce Ardour's plugin load. At quantum 256, Ardour's B/Q would drop to approximately 0.37, well within budget.
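
The projected figure is plain arithmetic, since BUSY stays roughly constant while the period doubles:

```shell
# Ardour's BUSY (~2.0 ms) against the quantum-256 period at 48 kHz.
awk -v busy=2.0 -v q=256 -v rate=48000 'BEGIN {
  period = q / rate * 1000
  printf "period %.2f ms, projected B/Q = %.3f\n", period, busy / period
}'
```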

Example 2: Scheduling failure

S  ID QUANT  RATE   WAIT   BUSY   W/Q   B/Q  ERR  NAME
D  36   256 48000  4.8ms  0.6ms  0.90  0.11    7  ALSA Playback
R  52   256 48000    -    0.5ms    -   0.09    0  Ardour:out

W/Q is 0.90 on the driver. The audio thread waited 4.8 ms out of a 5.33 ms quantum period before it even started processing. BUSY is only 0.6 ms - the actual audio work is trivial. The problem is entirely scheduling. Check your real-time priority, check for competing high-priority processes, and consider whether something is holding a CPU lock.

Example 3: USB interface choking

S  ID QUANT  RATE   WAIT   BUSY   W/Q   B/Q  ERR  NAME
D  36    64 48000  0.2ms  0.3ms  0.15  0.23   42  ALSA Playback
R  52    64 48000    -    0.2ms    -   0.15    0  Ardour:out

W/Q and B/Q are both fine - plenty of headroom. But the driver has 42 XRuns. PipeWire's graph is processing in time, but the ALSA driver for the USB interface is reporting underruns. At quantum 64 (1.33 ms), USB 2.0 polling jitter is the likely cause. The fix is to increase quantum to 128, or investigate the USB host controller (try a different USB port, check for shared interrupt lines).

Using pw-top alongside other tools

pw-top tells you what is happening in the PipeWire graph. For deeper investigation:

pw-dump gives you the full state of every object in PipeWire, including properties, parameters, and link topology:

pw-dump > ~/pipewire-state.json

pw-cli lets you inspect specific nodes interactively:

pw-cli info all  # dump everything
pw-cli ls Node   # list all nodes
pw-cli enum-params 36 Props  # get properties of node 36

pw-profiler captures timing data over a period and writes it to a file for analysis:

pw-profiler -o profile.log &
# ... reproduce the problem ...
kill %1

The profiler output can be visualized with pw-viz or processed with standard text tools to find timing patterns that pw-top updates too quickly to catch.

Reading the driver line

The driver line (marked D) deserves special attention because it represents the hardware clock source. The driver's WAIT time includes the time the ALSA device took to signal that a period was complete. Its BUSY time includes the overhead of copying audio data to and from the hardware buffer.

If the driver's WAIT time is unstable - jumping between 0.5 ms and 4.0 ms at the same quantum - the hardware clock is jittery. This is common with:

  • USB interfaces on shared USB controllers
  • HDMI audio outputs that are synced to display refresh
  • Virtual sound devices (in VMs or containers)
  • Cheap PCI audio cards with poor clock crystals

A stable driver WAIT that does not vary by more than 10-15% between cycles is what you want.
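
To put a number on stability, collect a handful of driver WAIT values from successive pw-top -b snapshots and compute the spread; the samples below are illustrative:

```shell
# Spread of driver WAIT samples as a percentage of their mean.
# Under ~10-15% is stable; more points to a jittery hardware clock.
printf '%s\n' 1.20 1.25 1.18 1.31 1.22 | awk '
  NR == 1 { min = max = $1 }
  { sum += $1; if ($1 > max) max = $1; if ($1 < min) min = $1 }
  END { printf "mean %.2f ms, spread %.0f%%\n", sum / NR, (max - min) * 100 / (sum / NR) }'
```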

When pw-top is not enough

pw-top shows you the PipeWire graph but not what is happening underneath it:

  • ALSA-level issues require alsabat, arecord --dump-hw-params, or checking /proc/asound/card*/pcm*/sub0/status
  • Kernel scheduling issues need trace-cmd, ftrace, or cyclictest to diagnose
  • IRQ affinity problems show up in /proc/interrupts (check if your audio device's IRQ is sharing a core with high-interrupt-rate devices)
  • CPU frequency scaling can cause periodic spikes; monitor with turbostat or cpupower frequency-info
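
For the IRQ check, a filtered look at /proc/interrupts is usually enough; the patterns snd and xhci below are assumptions, so substitute your card's driver name and USB controller:

```shell
# Per-CPU interrupt counts for sound (snd) and USB (xhci) devices.
# An audio IRQ sharing a core with a busy NIC or GPU IRQ is a classic
# source of periodic XRuns.
grep -E 'snd|xhci' /proc/interrupts || echo "no matching IRQ lines"
```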

For systematic latency measurement and comparison, the latency graphing tool can help visualize patterns over longer time spans than pw-top's rolling display.

Practical tips

Keep pw-top open during all serious sessions. Put it on a second monitor or a tmux pane. Glancing at it once a minute can catch developing problems before they ruin a take.

Reset ERR counters by restarting PipeWire. There is no way to zero the counters without a restart. If you are doing comparative tests, restart PipeWire between test runs:

systemctl --user restart pipewire pipewire-pulse wireplumber

Watch for node count growth. Some applications create new PipeWire nodes for each audio stream and do not clean them up. If your pw-top output keeps growing, a leaky application is wasting graph processing overhead.

The dash (-) in WAIT and BUSY. Follower nodes can show dashes for these columns. A follower's BUSY time, when shown, is real processing time, but followers have no meaningful independent wait: they are triggered by the driver's cycle, and the driver aggregates all follower processing into its own timing.

FAQ

How often does pw-top update? Every second by default. There is no command-line flag to change the interval in most builds, though you can modify the source. Use pw-top -b for non-interactive batch output, which is useful in scripts.

Does running pw-top affect performance? Negligibly. pw-top reads PipeWire's profiling data through the protocol socket. It does not inject itself into the processing graph. You can safely run it during recording and performance.

Why does my DAW show as multiple nodes? Applications can create multiple PipeWire nodes - separate nodes for each track output, for MIDI, for sidechain inputs. This is normal JACK-style behavior. Each node processes independently and has its own timing.

Can I use pw-top over SSH? Yes, if you set the PipeWire socket correctly:

export PIPEWIRE_REMOTE=/run/user/1000/pipewire-0
pw-top

Replace 1000 with the target user's UID. This is useful for headless audio servers.

ERR count is zero but I hear clicks. The clicks may not be PipeWire XRuns. Check for: DC offset in your audio chain, plugin bugs producing discontinuities, sample rate mismatches causing periodic resampler artifacts (check the RATE column), or electrical noise in your analog path.

Conclusion

pw-top takes the guesswork out of PipeWire diagnostics. WAIT tells you about scheduling. BUSY tells you about processing. ERR tells you about failures. W/Q and B/Q normalize those into comparable ratios. Learn to read these five numbers and you can diagnose almost any PipeWire audio problem in under a minute. Keep it open. Refer to it often. It is the tool that separates "I think my audio setup is working" from "I know exactly what my audio setup is doing."

For the broader picture of what causes XRuns in the first place—USB bandwidth contention, CPU frequency scaling, resampling, and the full Linux audio stack—see the Linux Audio Quality guide.
