TASSTA Documentation Center

What impedes communications performance

This topic lists processes that occur during communications and contribute to performance degradation. These processes are quantifiable in that you can create tests to detect them and measure the time that they waste.

The common result in each case is latency of one kind or another, measured as time. Lower latency is better than higher latency.

Network latency

This very important, often overlooked term refers to the timing of data transfers over a communications channel or network. One important aspect of latency is how long it takes from the moment a request for data is made until the data starts to arrive. Another aspect is how much control a device has over the timing of the data that is sent, and whether the network can be arranged to allow for the consistent delivery of data over a period of time.

Network latency contributes the most to the overall latency.
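As a minimal sketch of how to quantify this, you can time a single operation with a monotonic clock. The callable below is a placeholder introduced for illustration; substitute a real network request, such as a call setup or an HTTP GET, to measure your own network.

```python
import time

def measure_latency_ms(request):
    """Time one request round trip in milliseconds.

    `request` is any callable performing the operation to measure;
    the lambda used below is only a stand-in for a real network call.
    """
    start = time.perf_counter()
    request()
    return (time.perf_counter() - start) * 1000.0

# Stand-in operation that takes about 20 ms:
latency = measure_latency_ms(lambda: time.sleep(0.02))
print(f"latency: {latency:.1f} ms")
```

Repeating the measurement and averaging gives a steadier figure, since individual runs vary.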

The misconception of high network speed

The term network speed is used so broadly in the context of performance evaluation that its definition is hazy and it can mean just about anything. Most commonly, however, it refers to the rated or nominal amount of data per unit of time for a particular networking technology.

This term is often bandied about to market networking hardware, and you see it discussed in earnest as if it were any indication of how fast a network really is. The problem with using nominal speed ratings is that they are only theoretical, and as such, tell an incomplete story. No networking technology can achieve its full rated speed, and many run substantially below it, due to real-world performance factors.


The more accurate terms for describing the capabilities of a network are:

  • Bandwidth
    This usually refers to the data-carrying capacity of a network or data transmission medium. It indicates the maximum amount of data that can pass from one point to another in a unit of time.
  • Throughput
    This is normally a practical measure of how much actual data can be sent per unit of time across a given network, channel or interface. Therefore, it can never exceed the bandwidth.
    Consider an Ethernet network rated at 100 megabits per second. That is the absolute upper limit on throughput, even though you will normally get quite a bit less. So if someone tells you they are using 100 Mbps Ethernet, chances are they are getting a throughput of about 70 to 75 Mbps on their network.
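To make the distinction concrete, throughput can be computed from an observed transfer and then compared with the nominal bandwidth. The figures below are illustrative only, chosen to match the 100 Mbps example above.

```python
def throughput_mbps(bytes_transferred, seconds):
    """Observed throughput in megabits per second."""
    return (bytes_transferred * 8) / (seconds * 1_000_000)

# 90 MB moved in 10 s over a link with a nominal 100 Mbps bandwidth:
observed = throughput_mbps(90_000_000, 10.0)   # 72.0 Mbps
efficiency = observed / 100.0                  # 0.72, i.e. 72 % of nominal
```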

In addition, to estimate the real speed of a network, it is not enough to know the capabilities of the links at the sending and receiving ends; you have to look at the entire path of a packet. Your network is no faster than its slowest link, and in practice it is even slower than that, because every intermediate node adds its own overhead. If your device shows a full set of network meter bars, do not fall into the trap of thinking you are getting maximum speed, especially if you don't know how complex the path of your data is.
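The slowest-link rule can be sketched as taking the minimum over every hop on the path. The link speeds below are hypothetical.

```python
def path_ceiling_mbps(link_speeds_mbps):
    """Upper bound on end-to-end throughput along a path.

    The slowest link sets the ceiling; real throughput is lower
    still, because each node adds its own processing overhead.
    """
    return min(link_speeds_mbps)

# Hypothetical path: 1 Gbps LAN -> 50 Mbps uplink -> 300 Mbps backbone
ceiling = path_ceiling_mbps([1000, 50, 300])   # 50 Mbps
```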

Effect of network nodes

What happens at the network nodes greatly affects the network speed. Needless network topology indirection, inadequate server hardware, busy routing appliances and aggressive firewalls are sure to ramp up latency noticeably. Whereas a well-constructed network has latency in the tens of milliseconds, a problematic network design can increase it to several seconds.
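Per-node delays accumulate along the path, which is why a problematic design can push latency from tens of milliseconds into seconds. A sketch with made-up per-hop delays:

```python
def path_latency_ms(hop_delays_ms):
    """Sum of per-hop processing and queuing delays along a path."""
    return sum(hop_delays_ms)

# Well-constructed network: a few fast hops
good = path_latency_ms([2, 3, 5, 4])               # 14 ms
# Problematic design: indirection, busy routers, aggressive firewalls
bad = path_latency_ms([2, 40, 250, 800, 1500])     # 2592 ms, about 2.6 s
```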


On multiple occasions, TASSTA Support has worked on tickets from customers with geographically distributed networks where latency issues are further complicated by the use of bridged connections to PMR networks.

For advice on minimizing avoidable network latency, see the Solutions topic.

Jitter

Jitter is any deviation in, or displacement of, the signal pulses in a high-frequency digital signal. The deviation can be in terms of amplitude, phase timing or the width of the signal pulse. Among the causes of jitter are electromagnetic interference (EMI) and crosstalk with other signals. In communications, jitter manifests itself on the receiving end as choppy or fading audio and video.
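The variation can be estimated from packet arrival timestamps. The sketch below uses the mean absolute deviation of interarrival intervals; RTP (RFC 3550) defines a smoothed variant of the same idea.

```python
def interarrival_jitter_ms(arrivals_ms):
    """Mean absolute deviation of packet interarrival intervals.

    Simplified estimate: how much the spacing between consecutive
    packets varies from its average spacing.
    """
    deltas = [b - a for a, b in zip(arrivals_ms, arrivals_ms[1:])]
    mean = sum(deltas) / len(deltas)
    return sum(abs(d - mean) for d in deltas) / len(deltas)

# Packets sent every 20 ms but arriving unevenly:
jitter = interarrival_jitter_ms([0, 20, 45, 60, 85])   # 3.75 ms
```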

For advice on minimizing the effects of jitter, see the Solutions topic.

Audio latency

One useful and well-understood audio performance metric is round-trip latency. Round-trip latency is defined as the time it takes for an audio signal to enter the input of a mobile device, be processed after an application submits it to the audio API, and exit the output. Round-trip audio latency is a strong indicator of how well any mobile device is optimized for professional audio. Lower latency confers significant benefits to users.
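One well-understood contributor to round-trip audio latency is buffering: every buffer of audio frames adds a fixed delay determined by its size and the sample rate. A minimal sketch, using typical values rather than measurements of any particular device:

```python
def buffer_latency_ms(frames_per_buffer, sample_rate_hz):
    """Delay contributed by one audio buffer of the given size."""
    return frames_per_buffer / sample_rate_hz * 1000.0

# A 256-frame buffer at 48 kHz adds roughly 5.3 ms per buffering stage;
# several such stages on the input and output paths add up quickly.
delay = buffer_latency_ms(256, 48_000)
```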

If you have free rein to choose devices, pick the ones with the lowest experimentally proven audio latency.

Algorithm latency

On its path to the receiving end, media data in communications undergoes algorithmic transformations: it is processed by codecs and may be encrypted and decrypted. The following transformations are applied to a typical PTT voice transmission:

  1. The voice signal is encoded.
  2. The encoded data is encrypted by the sending client device.
  3. After arriving at the receiving device, the data is decrypted.
  4. The decrypted data is decoded for playback.

The time it takes to perform each of these steps varies widely and depends greatly on the device and the choice of algorithms. In practice, this kind of latency normally contributes the least to the overall result. As you would expect, faster hardware reduces the effect even further.
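The four steps above can be sketched as a timed pipeline. The transformations below are placeholders for illustration only; real clients use an actual voice codec and cipher.

```python
import time

def timed(step, data):
    """Run one pipeline step and return (result, elapsed ms)."""
    start = time.perf_counter()
    out = step(data)
    return out, (time.perf_counter() - start) * 1000.0

# Placeholder transformations modelling only the structure of the pipeline:
encode  = lambda pcm: b"enc:" + pcm
encrypt = lambda data: bytes(b ^ 0x5A for b in data)
decrypt = lambda data: bytes(b ^ 0x5A for b in data)
decode  = lambda data: data[4:]

payload, total_ms = b"voice", 0.0
for step in (encode, encrypt, decrypt, decode):
    payload, ms = timed(step, payload)
    total_ms += ms
# payload is b"voice" again; total_ms approximates the algorithm latency
```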


Estimating total latency

As discussed above, communications performance depends on various factors such as network properties, hardware and software. If you take the accumulated latency as the main performance marker, the formula for estimating it looks roughly as follows:

total latency = 2 * (codec latency + encryption latency) + network latency + audio latency + jitter
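The formula above, expressed in code with purely illustrative figures (not measured values):

```python
def total_latency_ms(codec, encryption, network, audio, jitter):
    """Rough estimate of accumulated latency; all arguments in ms.

    Codec and encryption work happens once on each end of the call,
    hence the factor of 2.
    """
    return 2 * (codec + encryption) + network + audio + jitter

# Illustrative inputs: 2 * (5 + 2) + 80 + 25 + 10 = 129 ms
estimate = total_latency_ms(codec=5, encryption=2, network=80, audio=25, jitter=10)
```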