Martin Finlayson, Head of Visual Communications, avsnet
One of the most common misconceptions I come across, even amongst network engineers, is the notion that you can get around video-call quality issues simply by throwing more bandwidth at the problem.
It is true that when calls are restricted to the LAN, throughput is usually plentiful enough that quality issues don't arise. However, the whole point of UC is to collaborate with people who are unlikely to be in your immediate vicinity, and this is when the problems start.
Once a call leaves one LAN and routes to another, it has to share links with other data traffic. If the network cannot differentiate one sort of traffic from another, the best it can do is forward packets on a first-come, first-served basis. On a quiet network this would be perfectly fine, but of course networks are seldom quiet, even when you would expect them to be.
To explain the problem, think of it this way…
Consider your WAN connection to be a pipe, with your video call happily trickling along the bottom of it, minding its own business. Suddenly, a server somewhere decides to do a file transfer over the same connection; now your pipe is immediately filled to capacity and your video-call packets are in amongst all that data.
Firstly, they start arriving sporadically rather than at the regular intervals at which they were transmitted; this is called jitter. Then, as the pipe reaches capacity, packets start to arrive too late, and then not at all; this is called packet loss. Remember, with real-time media a packet that arrives late may as well not have arrived at all. This is different to normal data like email, which can simply be reconstructed from its packets regardless of the time and order in which they were received.
To make matters worse, the normal network metrics used to assess performance won't expose the problem, because of the sample periods used. A UC call lasts many minutes, whereas normal data transfers happen in a matter of seconds, and it is this 'bursty' nature of data traffic that causes the issue.
The average sample time for a network switch is in the region of five minutes, which is fine for most data requirements. However, a lot can happen in five minutes: the scenario outlined above, which had a significant if not terminal effect on our video call, wouldn't even register once averaged out over a five-minute period.
Even trying to gather more granular statistics can exacerbate the problem, because every query to your switches and routers for measurements generates yet more network traffic.
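A toy calculation, with hypothetical numbers, shows why: a ten-second burst that completely saturates a 100 Mbps link barely moves the five-minute average that a monitoring tool would report.

```python
SAMPLES = 300   # one sample per second over five minutes
LINK_MBPS = 100

# Background traffic of ~5 Mbps, with a file transfer saturating the
# link for ten seconds somewhere in the middle of the window.
utilisation = [5] * SAMPLES
for s in range(140, 150):
    utilisation[s] = LINK_MBPS

average = sum(utilisation) / SAMPLES
peak = max(utilisation)

print(f"5-minute average: {average:.1f} Mbps")  # 8.2 Mbps – looks healthy
print(f"1-second peak:    {peak} Mbps")         # 100 Mbps – the call suffered here
```

The averaged figure suggests a link running at under ten per cent capacity, yet for those ten seconds the video call was fighting for every packet.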
So how does QoS help?
QoS, or Quality of Service, is the network's ability to provide better service to selected traffic: prioritising it, dedicating bandwidth to it, controlling jitter and reducing packet loss. Before it can apply any of these prioritisation techniques to the desired traffic, the network first needs to classify that traffic.
Classification can be based on various values in a data packet, such as the IP address, but packets can also be marked with pre-determined QoS values. This marking can be done by the end-points and applications generating the packets (phones, soft clients or VC codecs), or by the network itself. Most UC applications mark their packets anyway, but more often than not the network is not configured to read or act on the various classifications. In the worst cases, networks actively re-mark all packets as zero priority in an effort to simplify their operation.
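As an illustration, marking packets from an application is a one-line job on most operating systems. This sketch assumes the common convention of DSCP Expedited Forwarding (value 46) for real-time media; the DSCP value occupies the top six bits of the IP header's ToS byte.

```python
import socket

DSCP_EF = 46            # Expedited Forwarding – typical for voice/video media
TOS_EF = DSCP_EF << 2   # DSCP sits in the top six bits of the ToS byte: 0xB8

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

# Every datagram sent on this socket now carries the EF marking in its
# IP header – but only a network configured to honour DSCP will act on it.
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))  # 184
sock.close()
```

Note the caveat in the comment: the marking itself is trivial; the hard part is configuring every switch and router along the path to trust and act on it.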
If the network is configured to read and act on the classification marker, however, it can significantly improve the user experience: every time a real-time media packet arrives at a switch or router, it is pushed to the front of the queue for immediate forwarding.
Think of it like the priority queue when you go through security at the airport. The majority of the passengers take their chances in the normal queue, but when someone who’s paid for first-class arrives, they get spirited through with minimal delay. We all arrive at our destinations eventually but some spend less time in processing than others and you wouldn’t want to risk missing your flight!
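The queuing behaviour behind the analogy can be modelled in a few lines (a toy model with made-up packet names): equal-priority packets stay in arrival order, but EF-marked media jumps the queue.

```python
import heapq

EF, BEST_EFFORT = 0, 1  # lower number = higher forwarding priority

queue = []
arrival = 0
for name, prio in [("email-1", BEST_EFFORT), ("backup-1", BEST_EFFORT),
                   ("video-1", EF), ("email-2", BEST_EFFORT), ("video-2", EF)]:
    arrival += 1
    # Tie-break on arrival order so equal-priority packets stay FIFO.
    heapq.heappush(queue, (prio, arrival, name))

forwarded = [heapq.heappop(queue)[2] for _ in range(len(queue))]
print(forwarded)  # ['video-1', 'video-2', 'email-1', 'backup-1', 'email-2']
```

The video packets arrived third and fifth but leave first and second; the best-effort traffic still gets through, just a little later, exactly like the passengers in the normal queue.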
So, if this is such a great thing why doesn’t everyone do it?
Here are some reasons:
- Firstly, setting up QoS requires every element of a network to support it; if a single device ignores the rules, the whole thing is rendered ineffective.
- Secondly, if you get it wrong, it can have a severely detrimental effect on your network performance for all traffic.
- Thirdly, and perhaps most importantly, the internet does not support QoS! This means that no matter how well your own network is set up, once traffic leaves the public interface on your perimeter firewall, all bets are off.
Now think on that for a minute…
If you refer to the diagrams in my last blog regarding the paths that real-time media streams take when you implement cloud UC services, you will notice that the internet features pretty strongly in all of them. Currently, only private cloud or hybrid options can support QoS for your own organisation's internal traffic, but even they can't help when you want to enjoy that funky, full-featured UC experience with an external party.
In summary, for organisations with a handful of VC-enabled rooms, QoS may well be considered more hassle than it's worth by your IT team. Given that those few devices may already be on a separate video VLAN anyway, that is an entirely reasonable assertion. However, if you are on the verge of enabling all your users with a video client, then a correctly implemented QoS strategy may be the difference between success and failure.
Bear in mind, if your UC platform is delivered solely from the cloud then even a fully QoS enabled network can’t guarantee a pleasant user experience.
As ever, if this article has sparked a thought, consternation or simply a desire to understand more, then please feel free to get in touch.