The triumvirate of network performance metrics is packet loss, latency, and jitter.
Almost all network applications use TCP (Transmission Control Protocol) to get their data from point A to point B; about 85% of the overall internet's traffic is TCP. A specific aspect of TCP is that it completely hides the packet-based nature of the network from applications. Whether an application hands a single character or a multi-megabyte file to TCP, TCP puts the data in packets and sends it on its way over the network. The internet is a scary place for packets trying to find their way: it's not uncommon for packets to be lost and never make it across, or to arrive in a different order than they were transmitted. TCP retransmits lost packets and puts data back in the original order if needed before it hands the data over to the receiver. This way, applications don't have to worry about those eventualities.
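To make the stream abstraction concrete, here is a minimal Python sketch (the hostname and request are placeholders): the application writes and reads plain bytes, and never sees packets, retransmissions, or reordering.

```python
# Minimal sketch: the application hands TCP a byte stream; the OS's TCP
# implementation handles packetization, retransmission, and reordering.
import socket

# example.com is a stand-in server; any HTTP host would do.
with socket.create_connection(("example.com", 80)) as sock:
    # One sendall() call; how the bytes are split into packets on the
    # wire is invisible at this layer.
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")

    # recv() returns an in-order byte stream, regardless of how the
    # packets arrived or whether any had to be retransmitted.
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0])  # the status line, e.g. b'HTTP/1.1 200 OK'
```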
TCP, and the applications built on top of it, have a number of mechanisms to get good performance in the presence of high latencies:
1) Make sure enough packets are kept "in flight". Simply sending one packet and then waiting for the other side to say "got it, send the next one" doesn't cut it; that would limit throughput to five packets per second on a path with a 200 ms RTT. So TCP tries to make sure it sends enough packets to fill up the link, but not so many that it oversaturates the link or path. This works well for big data transfers (the first sketch after this list shows how much data that takes).
2) For smaller data transfers, TCP's "slow start" mechanism dominates. Because TCP has to wait for acknowledgments from the receiver before it ramps up its sending rate, more latency means more time spent in slow start. Web browser performance used to be limited by slow start a lot, but browsers now reuse TCP sessions that are already out of slow start to download additional images and other page elements, rather than opening new TCP sessions for each.
3) Avoid simple open-transfer-close-open-transfer-close sequences. They work well on low-latency networks but slow down a lot over larger distances or on bandwidth-limited networks, which introduce additional latency of their own.
4) Try to use a DNS server close by. Every TCP connection is preceded by a DNS lookup, and if the latency toward the DNS server is substantial, this slows down the entire process (the second sketch after this list times that step).
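Points 1 and 2 can be put into numbers with a quick back-of-the-envelope sketch. The link speed, RTT, and initial window of ten 1460-byte segments below are illustrative assumptions, not measurements:

```python
# Sketch: the bandwidth-delay product (how much data must be "in flight"
# to fill a link) and how many round trips classic slow start needs to
# get there, assuming the window roughly doubles every RTT.

def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bytes in flight needed to keep the pipe full."""
    return bandwidth_bps / 8 * rtt_s

def slow_start_rtts(target_bytes: float, init_window: float = 10 * 1460) -> int:
    """Round trips for a window that doubles each RTT to reach target_bytes."""
    window, rtts = init_window, 0
    while window < target_bytes:
        window *= 2
        rtts += 1
    return rtts

rtt = 0.200          # 200 ms round-trip time
link = 100e6         # 100 Mbit/s path
target = bdp_bytes(link, rtt)
print(f"in flight to fill the link: {target / 1e6:.1f} MB")
print(f"slow start needs ~{slow_start_rtts(target)} RTTs "
      f"(~{slow_start_rtts(target) * rtt:.1f} s) to ramp up that far")
```

On this assumed path, TCP needs 2.5 MB in flight and roughly eight round trips, about 1.6 seconds, before the link is full; on a 20 ms path the same ramp-up takes a tenth of that time.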
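Point 4 is easy to check directly. The sketch below (the hostnames are arbitrary examples) times the resolver step that precedes a new connection:

```python
# Sketch: measure how long the DNS lookup before a TCP connection takes.
# A distant or slow resolver adds this delay to every fresh connection.
import socket
import time

def time_lookup(hostname: str) -> float:
    start = time.perf_counter()
    socket.getaddrinfo(hostname, 443, type=socket.SOCK_STREAM)
    return (time.perf_counter() - start) * 1000  # milliseconds

for name in ("example.com", "example.org"):
    print(f"{name}: DNS lookup took {time_lookup(name):.1f} ms")
```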
Packets are lost in networks for two reasons:
1) Every transmission medium will flip a bit once in a while, and then the whole packet is lost. Wireless links typically send extra error-correction bits, but those can only do so much. If such an error occurs, the lost packet needs to be retransmitted, which can hold up a transfer.
But if network latency or packet loss gets too high, TCP runs out of buffer space and the transfer has to stop until the retransmitted lost packet has been received. In other words: high latency or high loss isn't great but still workable, while high latency and high loss together can slow TCP down to a crawl (the sketch after this list puts numbers on this).
2) Another reason packets get lost is too many packets arriving in a short time: TCP is sending so fast that router and switch buffers fill up faster than packets can be transmitted. If TCP has determined that the network can only bear very conservative transfer speeds, and slow start really does its name justice, it can be faster to stop a download and restart it than to wait for TCP to recover.
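How badly latency and loss combine can be estimated with the well-known approximation from Mathis et al. for steady-state TCP throughput, rate ≈ MSS / (RTT × √loss). This is a rough model that ignores constant factors and timeout behavior, but it captures the interaction:

```python
# Sketch of the Mathis et al. approximation for steady-state TCP
# throughput: rate ~ MSS / (RTT * sqrt(loss)). Constant factors and
# timeout effects are ignored; the point is the shape of the curve.
from math import sqrt

MSS = 1460  # bytes, a typical Ethernet-sized segment

def mathis_throughput_mbps(rtt_s: float, loss: float) -> float:
    return MSS / (rtt_s * sqrt(loss)) * 8 / 1e6

for rtt_s in (0.020, 0.200):
    for loss in (0.0001, 0.01):
        print(f"RTT {rtt_s * 1000:>3.0f} ms, loss {loss:>6.2%}: "
              f"~{mathis_throughput_mbps(rtt_s, loss):6.1f} Mbit/s")
```

With these numbers, 200 ms of latency or 1% loss each cost about a factor of ten in throughput on their own; together they cost a factor of a hundred.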
Jitter is the variation in latency from packet to packet.
Obviously, the speed of light isn't subject to change, and fibers tend to remain the same length. So variation in latency is typically caused by the buffering of packets in routers and switches that terminate highly utilized links (especially lower-bandwidth links, such as broadband or 3G/4G links). Sometimes a packet is lucky and gets through fast; sometimes the queue is longer than usual. For TCP this isn't a huge problem, although it means TCP has to use a conservative value for its RTT estimate, so timeouts take longer. However, for (non-TCP) real-time audio and video traffic, jitter is very problematic, because the audio/video has to be played back at a steady rate. The application either has to buffer the "fast" packets and wait for the slow ones, which can add user-perceptible delay, or the slow packets have to be considered lost, causing dropouts.
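Jitter for real-time traffic is usually tracked with a smoothed estimator. A minimal sketch of the interarrival-jitter formula from RFC 3550 (the RTP specification), fed with made-up per-packet transit times, looks like this:

```python
# Sketch: the smoothed interarrival-jitter estimator from RFC 3550.
# Each step moves the estimate 1/16th of the way toward the latest
# change in transit time, so spikes are damped but still register.
def rtp_jitter(transit_times_ms: list[float]) -> float:
    jitter = 0.0
    for prev, cur in zip(transit_times_ms, transit_times_ms[1:]):
        d = abs(cur - prev)          # change in one-way transit time
        jitter += (d - jitter) / 16  # exponential smoothing per RFC 3550
    return jitter

# Invented samples: a steady 40 ms path with occasional queuing spikes.
samples = [40, 41, 40, 55, 42, 40, 70, 41, 40, 39]
print(f"estimated jitter: {rtp_jitter(samples):.1f} ms")
```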
In conclusion, in networks that use multiple connections to the internet, it can really pay off to avoid paths that are much longer, and thus incur a higher latency, than alternative paths to the same destination, as well as congested paths with elevated packet loss. The path selection process can be performed automatically: learn how to automate the evaluation of packet loss and latency across multiple providers to choose the best-performing route.
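A hypothetical sketch of such a selection step: given per-provider loss and latency measurements (the provider names and numbers below are invented), score each path and pick the best one. A real system would feed this from continuous probing and steer traffic accordingly:

```python
# Hypothetical path scorer: lower score is better. The weight that
# converts loss into "equivalent milliseconds" is an arbitrary choice.
measurements = {
    "provider_a": {"loss": 0.001, "rtt_ms": 35.0},
    "provider_b": {"loss": 0.020, "rtt_ms": 28.0},
    "provider_c": {"loss": 0.000, "rtt_ms": 60.0},
}

def path_score(loss: float, rtt_ms: float, loss_weight: float = 2000.0) -> float:
    """Latency in ms plus a heavy penalty per unit of packet loss."""
    return rtt_ms + loss_weight * loss

for name, m in sorted(measurements.items()):
    print(f"{name}: score {path_score(**m):6.1f}")

best = min(measurements, key=lambda p: path_score(**measurements[p]))
print(f"best path: {best}")  # provider_a: modest latency, near-zero loss
```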