Tip & How-To about Computers & Internet

How do latency and packet loss determine network performance, and what can be done about them?

The triumvirate of network performance metrics is packet loss, latency, and jitter.

Almost all network applications use TCP (Transmission Control Protocol) to get their data from point A to point B; roughly 85% of the internet's overall traffic is TCP. A defining aspect of TCP is that it completely hides the packet-based nature of the network from applications. Whether an application hands TCP a single character or a multi-megabyte file, TCP puts the data in packets and sends them on their way over the network. The internet is a scary place for packets trying to find their way: it's not uncommon for packets to be lost and never make it across, or to arrive in a different order than they were transmitted. TCP retransmits lost packets and puts the data back in its original order before handing it over to the receiver, so applications don't have to worry about those eventualities.
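To make that abstraction concrete, here is a minimal Python sketch (Python 3.8+, loopback only, with an arbitrary 5 MB payload): the sender hands TCP one big buffer, the receiver simply reads the byte stream back complete and in order, and neither side ever deals with packets, retransmissions, or reordering.

    import socket
    import threading

    PAYLOAD = b"x" * 5_000_000      # a multi-megabyte blob handed to TCP in one call

    def receiver(listener):
        conn, _ = listener.accept()
        data = bytearray()
        while chunk := conn.recv(65536):        # just read the byte stream until EOF
            data.extend(chunk)
        conn.close()
        assert bytes(data) == PAYLOAD           # arrives complete and in order

    listener = socket.create_server(("127.0.0.1", 0))     # loopback, ephemeral port
    port = listener.getsockname()[1]
    t = threading.Thread(target=receiver, args=(listener,))
    t.start()

    with socket.create_connection(("127.0.0.1", port)) as sender:
        sender.sendall(PAYLOAD)                 # TCP splits this into packets by itself
    t.join()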

Network latency
TCP has a number of mechanisms to maintain good performance in the presence of high latency, and applications can help as well:
1) Make sure enough packets are kept "in flight". Simply sending one packet and then waiting for the other side to say "got it, send the next one" doesn't cut it; that would limit throughput to five packets per second on a path with a 200 ms RTT. So TCP tries to send enough packets to fill up the link, but not so many that it oversaturates the link or path (see the sketch after this list). This works well for big data transfers.
2) For smaller data transfers TCP uses a "slow start" mechanism. Because TCP has to wait for acknowledgments from the receiver, more latency means more time spent in slow start. Web browser performance used to be limited heavily by slow start, but browsers now reuse TCP sessions that are already out of slow start to download additional images and other page elements, rather than keep opening new TCP sessions.
3) Avoid simple open-transfer-close, open-transfer-close sequences: they work well on low-latency networks but slow down a lot over larger distances, or on bandwidth-limited networks, which also introduce additional latency.
4) Try to use a DNS server close by. Nearly every TCP connection is preceded by a DNS lookup, and if the latency toward the DNS server is substantial, it slows down the entire process.
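As a back-of-the-envelope illustration of points 1 and 2 (the numbers below are illustrative, not measurements), this Python sketch shows both the five-packets-per-second ceiling mentioned above and how much data has to be kept in flight to fill a faster path:

    rtt = 0.200                      # round-trip time in seconds
    packet_size = 1500 * 8           # bits in a full-size Ethernet packet

    # One packet outstanding per round trip: 5 packets/s at 200 ms RTT.
    stop_and_wait = packet_size / rtt
    print(f"stop-and-wait ceiling: {stop_and_wait / 1e6:.2f} Mbps")    # ~0.06 Mbps

    # To fill a 100 Mbps path, the window of unacknowledged data must
    # cover the bandwidth-delay product.
    link_rate = 100e6
    bdp_bytes = link_rate * rtt / 8
    print(f"window needed to fill the link: {bdp_bytes / 1024:.0f} KiB")   # ~2.4 MB in flight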

Packet loss
Packets are lost in networks for two reasons:
1) Every transmission medium flips a bit once in a while, and when that happens the whole packet is lost. Wireless links typically send extra error-correction bits, but those can only do so much. When such an error occurs, the lost packet needs to be retransmitted, which can hold up a transfer.
But if network latency or packet loss gets too high, TCP runs out of buffer space and the transfer has to stop until the retransmitted lost packet has been received. In other words: high latency or high loss alone isn't great but is still workable, while high latency and high loss together can slow TCP down to a crawl (the rough calculation after this list illustrates the combined effect).
2) Another reason packets get lost is congestion: too many packets arrive in a short time, TCP sends so fast that router and switch buffers fill up faster than packets can be transmitted, and packets are dropped. If TCP has determined that the network can only bear very conservative transfer speeds, and slow start really does its name justice, it can be faster to stop a download and restart it than to wait for TCP to recover.
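To get a rough sense of how latency and loss combine, the widely used Mathis approximation bounds steady-state TCP throughput at roughly MSS / (RTT × √loss); exact behaviour depends on the TCP variant, so treat the numbers in this Python sketch as ballpark figures only:

    from math import sqrt

    def tcp_ceiling_mbps(rtt_s, loss, mss_bytes=1460):
        # Mathis et al. approximation: throughput <= MSS / (RTT * sqrt(loss))
        return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6

    print(tcp_ceiling_mbps(0.020, 0.0001))   # low latency, low loss:   ~58 Mbps
    print(tcp_ceiling_mbps(0.200, 0.0001))   # high latency, low loss:  ~5.8 Mbps
    print(tcp_ceiling_mbps(0.200, 0.01))     # high latency AND loss:   ~0.6 Mbps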
Jitter
Jitter is the variation in latency from packet to packet.
Obviously, the speed of light isn't subject to change, and fibers tend to stay the same length, so jitter is typically caused by the buffering of packets in routers and switches that terminate highly utilized links (especially lower-bandwidth links, such as broadband or 3G/4G links). Sometimes a packet is lucky and gets through fast, and sometimes the queue is longer than usual. For TCP this isn't a huge problem, although it means TCP has to use a conservative value for its RTT estimate, so timeouts take longer. For (non-TCP) real-time audio and video traffic, however, jitter is very problematic, because the audio/video has to be played back at a steady rate. The application either has to buffer the "fast" packets and wait for the slow ones, which can add user-perceptible delay, or the slow packets have to be considered lost, causing dropouts.
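A minimal sketch of how jitter could be quantified from a series of RTT samples (for example, collected with ping); the smoothing follows the RFC 3550-style estimator, applied here to made-up RTT numbers:

    rtt_ms = [21.0, 22.5, 20.8, 45.3, 21.1, 22.0]   # hypothetical ping samples

    jitter = 0.0
    for prev, cur in zip(rtt_ms, rtt_ms[1:]):
        # Exponentially weighted mean deviation between consecutive samples.
        jitter += (abs(cur - prev) - jitter) / 16

    print(f"estimated jitter: {jitter:.1f} ms")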

In conclusion: in networks that use multiple connections to the internet, it can really pay off to avoid paths that are much longer, and thus incur higher latency, than alternative paths to the same destination, as well as congested paths with elevated packet loss. The path selection process can be performed automatically: learn how to automate the evaluation of packet loss and latency across multiple providers to choose the best-performing route.
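As a simplified sketch of the selection step only (the provider names, weights, and measurements below are invented; collecting the numbers is left to whatever probing is already in place, e.g. periodic pings per provider), the best-performing route could be chosen like this:

    providers = {
        "isp_a": {"latency_ms": 35.0, "loss_pct": 0.1},
        "isp_b": {"latency_ms": 80.0, "loss_pct": 0.0},
        "isp_c": {"latency_ms": 40.0, "loss_pct": 2.5},
    }

    def score(m):
        # Lower is better; loss is weighted heavily because it hurts TCP the most.
        return m["latency_ms"] + 50.0 * m["loss_pct"]

    best = min(providers, key=lambda name: score(providers[name]))
    print("best performing route via:", best)    # isp_a with these numbers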


Related Questions:

1 Answer

PK5001Z speed drops jitter and latency


Try dropping your MTU setting down to 1492, or maybe a little lower if you're trying to use QoS or something like that. That will cut down on packets being fragmented because they're a little too large.
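For what it's worth, the 1492 figure comes from simple arithmetic, assuming a PPPoE link (other encapsulations have different overheads); a quick Python sketch:

    ethernet_mtu = 1500
    pppoe_overhead = 8                  # 6-byte PPPoE header + 2-byte PPP protocol ID
    usable_mtu = ethernet_mtu - pppoe_overhead
    print(usable_mtu)                   # 1492: packets bigger than this get fragmented

    ip_header, tcp_header = 20, 20
    print(usable_mtu - ip_header - tcp_header)   # 1452: max TCP payload per packet (MSS)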

Nov 20, 2012 | Computers & Internet

1 Answer

how to monitor email latency


In a network, latency is an expression of how much time it takes for a packet of data to get from one designated point to another.

The most common way to measure latency is to ping from one site to the other.
The higher the latency in milliseconds, the worse applications like Microsoft Exchange will perform.
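One possible way to automate such a measurement (the hostname below is a placeholder; ICMP ping needs raw sockets or the system ping tool, so this Python sketch times a plain TCP connection to the SMTP port as a stand-in):

    import socket, time

    host, port, samples = "mail.example.com", 25, 5    # placeholder mail server

    for _ in range(samples):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=3):
                pass
            print(f"{(time.perf_counter() - start) * 1000:.1f} ms")
        except OSError:
            print("no response (counts toward packet/connection loss)")
        time.sleep(1)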

Hope it helps.

Mar 01, 2011 | F Key Solutions SMTP Relay Server

1 Answer

We are currently installing wireless CCTV using Alvarion access. We are encountering lost connections and sometimes get frozen pictures. What causes this problem? Thanks,


Is it a PTZ camera or a fixed one? How many cameras? You need to look at the capacity (Mbps and packets per second), minimum latency, and jitter your camera manufacturer recommends. Not all wireless devices offer the low latency, low jitter, and high Mbps/PPS needed for higher-end cameras.
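As a rough sizing sketch only (the bitrate and packet size below are assumptions, not Alvarion or camera specifications), the required throughput and packet rate could be estimated like this:

    cameras = 4
    bitrate_mbps_per_camera = 4.0       # e.g. a 720p H.264 stream, assumed
    packet_payload_bytes = 1400         # typical RTP packet payload, assumed

    total_mbps = cameras * bitrate_mbps_per_camera
    pps = total_mbps * 1e6 / (packet_payload_bytes * 8)
    print(f"~{total_mbps:.0f} Mbps and ~{pps:.0f} packets/s, before any headroom")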

Feb 22, 2011 | Alvarion Computers & Internet

1 Answer

multikey secure multimedia proxy using ARPS-java source code


Because of limited server and network capacities for streaming applications, multimedia proxies are commonly used to cache multimedia objects such that, by accessing nearby proxies, clients can enjoy a smaller start-up latency and receive a better quality-of-service (QoS) guarantee-for example, reduced packet loss and delay jitters for their requests. However, the use of multimedia proxies increases the risk that multimedia data are exposed to unauthorized access by intruders. In this paper, we present a framework for implementing a secure multimedia proxy system for audio and video streaming applications. The framework employs a notion of asymmetric reversible parametric sequence (ARPS) to provide the following security properties: i) data confidentiality during transmission, ii) end-to-end data confidentiality, iii) data confidentiality against proxy intruders, and iv) data confidentiality against member collusion. Our framework is grounded on a multikey RSA technique such that system resilience against attacks is provably strong given standard computability assumptions. One important feature of our proposed scheme is that clients only need to perform a single decryption operation to recover the original data even though the data packets may have been encrypted by multiple proxies along the delivery path. We also propose the use of a set of encryption configuration parameters (ECP) to trade off proxy encryption throughput against the presentation quality of audio/video obtained by unauthorized parties. Implementation results show that we can simultaneously achieve high encryption throughput and extremely low video quality (in terms of peak signal-to-noise ratio and visual quality of decoded video frames) for unauthorized access.
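The abstract above only describes the idea at a high level; no source code is included with it. As a loose, hypothetical illustration of just one property it mentions, namely that data encrypted in turn by several proxies can be recovered with a single decryption, here is a toy multikey-RSA example in Python with tiny numbers (not the paper's actual ARPS construction, and not secure):

    # Toy numbers only; a real system would use large primes and proper padding.
    p, q = 61, 53
    n = p * q
    phi = (p - 1) * (q - 1)

    e1, e2 = 17, 7          # per-proxy encryption exponents, coprime to phi
    m = 42                  # sample "plaintext"

    c1 = pow(m, e1, n)      # first proxy encrypts
    c2 = pow(c1, e2, n)     # second proxy re-encrypts the already-encrypted data

    # The client derives one combined private exponent and decrypts once.
    d = pow(e1 * e2, -1, phi)
    assert pow(c2, d, n) == m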

Oct 16, 2008 | Computers & Internet

1 Answer

Linksys adapter


Factors that Affect Voice Quality
  • Audio Compression Algorithm
    Speech signals are sampled, quantized and compressed before they are packetized and transmitted to the other end. For IP Telephony, speech signals are usually sampled at 8000 samples per second with 12-16 bits per sample. The compression algorithm plays a large role in determining the Voice Quality of the reconstructed speech signal at the other end (a worked bandwidth example follows this list). The SPA supports the most popular audio compression algorithms for IP Telephony: G.711 a-law and µ-law, G.726, G.729a and G.723.1. The encoder and decoder pair in a compression algorithm is known as a codec. The compression ratio of a codec is expressed in terms of the bit rate of the compressed speech. The lower the bit rate, the smaller the bandwidth required to transmit the audio packets. Voice Quality is usually lower at lower bit rates; however, at the same bit rate, Voice Quality is usually higher with a more complex codec.
  • Silence Suppression
    The SPA applies silence suppression so that silence packets are not sent to the other end in order to conserve more transmission bandwidth. Instead, a noise level measurement can be sent periodically during silence suppressed intervals so that the other end can generate artificial comfort noise that mimics the noise at the other end using a CNG or comfort noise generator.
  • Packet Loss
    Audio packets are transported by UDP which does not guarantee the delivery of the packets. Packets may be lost or contain errors which can lead to audio sample drop-outs and distortions and lowers the perceived Voice Quality. The SPA applies an error concealment algorithm to alleviate the effect of packet loss.
  • Network Jitter
    The IP network can induce varying delay of the received packets. The RTP receiver in the SPA keeps a reserve of samples in order to absorb the Network Jitter, instead of playing out all the samples as soon as they arrive. This reserve is known as a Jitter Buffer. The bigger the Jitter Buffer, the more jitter it can absorb, but also the more delay it can introduce, so the jitter buffer size should be kept relatively small whenever possible. If the jitter buffer is too small, many late packets may be considered lost, which lowers the Voice Quality. The SPA can dynamically adjust the size of the jitter buffer according to the network conditions that exist during a call.
  • Echo
    Impedance mismatch between the telephone and the IP Telephony gateway phone port can lead to near-end echo. The SPA has a near-end echo canceller with at least 8 ms tail length to compensate for impedance mismatch. The SPA also implements an echo suppressor with comfort noise generator (CNG) so that any residual echo will not be noticeable.
  • Hardware Noise
    Certain levels of noise can be coupled into the conversational audio signals due to the hardware design. The source can be ambient noise or 60Hz noise from the power adaptor. The SPA hardware design minimizes noise coupling.
  • End-to-End Delay
    End-to-end delay does not affect Voice Quality directly, but it is an important factor in determining whether subscribers can interact normally in a conversation taking place over an IP network. A reasonable delay figure is about 50-100 ms; end-to-end delay larger than 300 ms is unacceptable to most callers. The SPA supports end-to-end delays well within acceptable thresholds.
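As a worked example of the bit-rate versus bandwidth trade-off mentioned under Audio Compression Algorithm above (header sizes are the usual IPv4/UDP/RTP values; link-layer overhead is ignored), the on-the-wire bandwidth of a single G.711 call can be estimated like this:

    codec_kbps = 64                 # G.711 output: 8000 samples/s * 8 bits/sample
    packet_interval_s = 0.020       # one RTP packet every 20 ms -> 50 packets/s
    header_bytes = 20 + 8 + 12      # IPv4 + UDP + RTP headers

    packets_per_s = 1 / packet_interval_s
    payload_bytes = codec_kbps * 1000 / 8 * packet_interval_s     # 160 bytes per packet
    total_kbps = packets_per_s * (payload_bytes + header_bytes) * 8 / 1000
    print(f"{total_kbps:.0f} kbps per call on the wire")          # ~80 kbps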

Dec 01, 2007 | Linksys VOIP VONAGE PHONE ADPTR 2-PORT...
