This week I hit an issue at a customer where audio would drop periodically for external users. The call would stay up, but the audio would disappear. This happened in two situations:
- The user was in a quiet environment and was silently listening to the other caller
- The user was on mute
We could replicate this at will by calling a line with hold music and staying on mute; whether we were muted or simply silent made no difference. Initially we thought that silence detection of some sort was coming into play. However, Netmon traces indicated that the A/V Edge was sending RTP packets the entire time, while on the client side we would see large gaps in inbound RTP packets, each lasting about 15 seconds.
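The gap analysis above is easy to automate. A minimal sketch, assuming the inbound RTP arrival times have been exported from the Netmon capture as seconds since call start (the timestamps below are simulated, not from the actual trace):

```python
def find_gaps(arrival_times, threshold=1.0):
    """Return (gap_start, gap_length) pairs where consecutive inbound
    packets are more than `threshold` seconds apart."""
    gaps = []
    for prev, curr in zip(arrival_times, arrival_times[1:]):
        if curr - prev > threshold:
            gaps.append((prev, curr - prev))
    return gaps

# RTP normally arrives every ~20 ms, so a ~15 s silence stands out.
times = [i * 0.02 for i in range(2001)]           # steady audio, 0-40 s
times += [55.0 + i * 0.02 for i in range(100)]    # resumes after the drop

print(find_gaps(times))  # a single ~15 s gap starting around t=40 s
```

Against a real capture you would feed in the packet timestamps filtered to the audio stream's UDP port; any gap well above the packetization interval marks a drop.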
After doing some research on this customer’s firewall, a Checkpoint, I noticed that Checkpoint ships with a default UDP “Virtual Session Timeout” of 40 seconds. Comparing this to the client-side Netmon capture, I noticed that the first audio drop occurred 40 seconds into the call. In fact, thinking back to the testing, the audio drops would occur when the call timer in Communicator read around 37-38 seconds (the timer doesn’t start until the audio stream is fully up and running). Following this we would get a break of ~15 seconds, and then the audio would drop again around 1:34 into the call, or about 40 seconds after the audio was reestablished. Now that I knew the default Checkpoint timeout was 40 seconds, it seemed unlikely that this was a coincidence.
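The arithmetic lines up almost exactly. A quick back-of-the-envelope model, using the 40-second timeout and the ~15-second recovery observed above (the model itself is my assumption about the firewall's behavior, not something Checkpoint documents):

```python
SESSION_TIMEOUT = 40  # Checkpoint UDP Virtual Session Timeout (seconds)
RECOVERY = 15         # observed time for the audio stream to recover (seconds)

def drop_times(n):
    """Predicted call-timer readings (seconds) for the first n audio drops,
    assuming the session expires 40 s after each time audio (re)starts."""
    times, t = [], 0
    for _ in range(n):
        t += SESSION_TIMEOUT  # firewall ages out the idle UDP session
        times.append(t)
        t += RECOVERY         # stream re-established ~15 s later
    return times

print(drop_times(2))  # → [40, 95]; 95 s is 1:35, matching the ~1:34 observed
```

The predicted second drop at 1:35 versus the observed ~1:34 is within the slack of reading a call timer by eye, which is what convinced me this was the timeout and not a coincidence.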
The maximum Virtual Session Timeout on this customer’s version of Checkpoint is 3600 seconds, but you don’t want to needlessly crank the setting to the max: the more open sessions the firewall has to track, the bigger the impact on its performance. In our case we set it to 300 seconds.
The timeout can be set globally or on a per-service basis. We changed it only on the specific service that was created for OCS, so as not to impact other firewall services.