The root cause of many IP network issues is packet jitter. Excessive packet jitter leads to buffer overflows and underflows which, in turn, result in dropped packets and stalled data flows. In what follows, we will describe these issues and some of the ways that waveform monitors diagnose packet jitter problems.
What is Jitter?
Jitter in an IP network is a deviation from a signal’s expected periodicity. When carrying constant-bitrate data, jitter is simply the variation in the timing of packet arrivals at a receiving device.
What Causes Jitter?
Assuming your routers and switches are properly configured, the most common cause of jitter is network congestion at the router and switch interfaces. Some amount of jitter is inherent in any IP network because IP transport is asynchronous by nature.
Reducing Jitter
Media applications in a network element generally need to receive data continuously, not in bursts. To make this happen, receiving devices implement de-jitter buffers. The rate at which packets enter the buffer is its “fill rate”. Applications receive packets from the output of the buffer rather than directly from the network. Because packets flow out of the buffer at a regular rate, which we call the “drain rate”, timing variations are smoothed out.
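To make the fill-and-drain idea concrete, here’s a minimal Python sketch of a de-jitter buffer (the class name `DeJitterBuffer` and its structure are our own illustration, not how any particular receiver is built): packets enter as they arrive, and the application pulls them out on a steady clock.

```python
from collections import deque

class DeJitterBuffer:
    """Toy de-jitter buffer: packets enter as they arrive from the
    network (the fill side) and leave on a fixed tick (the drain side)."""

    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity  # maximum packets the buffer can hold
        self.dropped = 0          # packets lost to overflow
        self.stalls = 0           # drain ticks that found the buffer empty

    def fill(self, packet):
        """Called whenever a packet arrives from the network."""
        if len(self.queue) >= self.capacity:
            self.dropped += 1     # overflow: no room, packet is lost
        else:
            self.queue.append(packet)

    def drain(self):
        """Called on a regular tick by the media application."""
        if self.queue:
            return self.queue.popleft()
        self.stalls += 1          # underflow: the flow stalls
        return None
```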
Balancing Buffer Size
Buffer size is important. If the buffer is too small and the drain rate exceeds the fill rate, the buffer may underflow, stalling the packet flow. Conversely, if the fill rate exceeds the drain rate, the buffer will eventually overflow and packets will be dropped. You might think this could be resolved by simply making the buffer very large. Unfortunately, an oversized buffer introduces unacceptably long latency, and latency is as damaging to video and audio quality as stalled flows and packet loss.
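As a rough worked example (the numbers here are invented purely for illustration): a stream draining at 100,000 packets per second that must ride out 5 ms of worst-case jitter needs about 0.005 s × 100,000 pps = 500 packets of headroom, and that headroom costs up to 500 ÷ 100,000 = 5 ms of added latency. A quick sketch of the trade-off:

```python
import math

def size_dejitter_buffer(drain_rate_pps, jitter_tolerance_s):
    """Back-of-the-envelope sizing: enough packets to absorb the worst
    expected jitter, plus the latency that this headroom costs."""
    depth_packets = math.ceil(drain_rate_pps * jitter_tolerance_s)
    added_latency_s = depth_packets / drain_rate_pps
    return depth_packets, added_latency_s

depth, latency = size_dejitter_buffer(100_000, 0.005)
print(depth, latency)  # 500 packets of depth, 0.005 s of added latency
```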
Why Does Buffering Reduce Jitter?
Network jitter makes packet arrivals aperiodic, so the buffer’s fill rate is no longer constant. As jitter increases, so does this aperiodicity. Ultimately, the fill rate becomes so irregular relative to the steady drain rate that the buffer will underflow and stall, or overflow and drop packets. With high-bitrate video, both conditions impact quality.
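Feeding the toy `DeJitterBuffer` from the sketch above a bursty (jittered) arrival pattern while draining one packet per tick shows both failure modes at once; the burst pattern below is invented purely for illustration.

```python
buf = DeJitterBuffer(capacity=2)

# Ten packets that "should" arrive one per tick instead arrive
# in bursts separated by gaps (an invented jittered pattern).
arrivals_per_tick = [0, 3, 0, 0, 4, 0, 0, 0, 3, 0]

for burst in arrivals_per_tick:
    for _ in range(burst):
        buf.fill("packet")
    buf.drain()                   # application consumes one per tick

print(buf.dropped, buf.stalls)    # 4 dropped, 4 stalls for this pattern
```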
How Do PRISM Waveform Monitors Diagnose Packet Jitter?
Assuming an accurate clock is present in the receiver, you can measure jitter by examining the timestamps of arriving packets. PRISM allows you to do this and to plot inter-arrival intervals versus time. This is an easy way to explore jitter temporally, and in PRISM there’s an app for that: the PIT (Packet Interarrival Time) Graph.
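The underlying arithmetic is simple: given receive timestamps, the inter-arrival times are just first differences, which the PIT Graph then plots against time. Here’s a hypothetical sketch of that calculation (not PRISM’s actual implementation):

```python
def inter_arrival_times(timestamps):
    """Packet inter-arrival times: each packet's receive timestamp
    minus the previous packet's receive timestamp."""
    return [t1 - t0 for t0, t1 in zip(timestamps, timestamps[1:])]

# Receive timestamps in seconds (invented values for illustration).
rx = [0.000000, 0.000125, 0.000251, 0.000374, 0.000502]
print(inter_arrival_times(rx))
# A perfectly periodic 125 µs stream would yield 0.000125 every
# time; the spread around that value is the jitter.
```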
The PIT Histogram app lets us look at packet inter-arrival times in terms of occurrence frequency. Interestingly, devices often have identifiable histogram signatures, so you can recognize a particular device from its PIT histogram. This makes the app incredibly useful for spotting stream issues such as loops and problematic devices.
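In the same spirit, a PIT histogram just counts how often each inter-arrival time occurs, bucketed into small bins. A sketch, reusing the `inter_arrival_times` helper above (the bin width is an invented parameter):

```python
from collections import Counter

def pit_histogram(timestamps, bin_width_s=1e-6):
    """Count packet inter-arrival times in fixed-width bins; the
    shape of the result is the stream's histogram 'signature'."""
    deltas = inter_arrival_times(timestamps)
    return Counter(int(d // bin_width_s) for d in deltas)

# A single narrow spike suggests a well-behaved periodic sender;
# multiple spikes or a broad spread point toward congestion, a
# loop, or a misbehaving device upstream.
```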
There is one thing those PIT measurements can’t help you do: select a correct de-jitter buffer size. For this, we can turn to a measurement called the Time-Stamped Delay Factor (TS-DF, defined in EBU Tech 3337), which you can determine with PRISM’s IP Graphs app.
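TS-DF compares each packet’s receive time with its sender timestamp over a measurement window (one second in EBU Tech 3337), so it captures the total delay variation a de-jitter buffer must absorb. A minimal sketch of the measurement, assuming both clocks count in the same units and ignoring timestamp wrap-around:

```python
def ts_df(send_ts, recv_ts):
    """Time-Stamped Delay Factor over one measurement window:
    each packet's transit time relative to the window's first
    packet, reported as the spread (max minus min). A de-jitter
    buffer must be at least this deep to avoid under/overflow."""
    d0 = recv_ts[0] - send_ts[0]  # reference transit time
    deltas = [(r - s) - d0 for s, r in zip(send_ts, recv_ts)]
    return max(deltas) - min(deltas)

# Send and receive timestamps in seconds (invented values).
tx = [0.000, 0.001, 0.002, 0.003]
rx = [0.1000, 0.1013, 0.1019, 0.1030]
print(ts_df(tx, rx))  # ≈ 0.0004 s: this window needs ~0.4 ms of buffering
```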
We have intentionally provided only a high-level “taste” of PRISM’s jitter-analysis capabilities in this brief post. Clearly, there is far more to learn, and we strongly encourage you to dig deeper. To do so, take a look at our recent comprehensive whitepaper, “Diagnosing and Resolving Faults in an Operational IP Video Network”. Click here to access it. When you do, remember to tick the “Video Test and Synchronization” check box on the form so that we can keep you up to date on additional technical information and all the exciting things happening with Waveform Monitor and Synchronization solutions here at Telestream.