00:09
So let’s say we have a system with proper signal flow and AEC reference.
00:14
We can use a simple process to get the best performance from the AEC processing.
00:19
Step one is all about gain structure! Calibrate each microphone input to -20dBFS nominal.
00:28
This applies to ALL microphones regardless of whether they come in on analog inputs
00:33
or via other transports such as Dante or AES67.
00:38
With analog mic inputs, simply use the analog preamp gain control to achieve this.
00:43
In external microphone systems, you’ll most likely want to log into that system
00:47
and calibrate the microphones to send the appropriate nominal level.
00:51
Remember that ALL system inputs should be calibrated to the same -20dBFS nominal.
00:58
This includes program feeds and signals from the far end.
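As a minimal sketch of this gain-structure step, the Python below computes how much preamp or input gain to add so a measured level lands at the -20dBFS nominal. The measured value and channel are hypothetical examples, not readings from an actual system.

```python
# Minimal sketch: compute the gain change needed to hit -20 dBFS nominal.
# The measured level below is a hypothetical example.

NOMINAL_DBFS = -20.0  # target nominal level for every system input

def gain_adjustment_db(measured_dbfs: float, target_dbfs: float = NOMINAL_DBFS) -> float:
    """Return the gain (in dB) to add at the preamp or input block."""
    return target_dbfs - measured_dbfs

# Example: a talker at normal speech level meters at -27 dBFS on an analog input.
measured = -27.0
print(f"Apply {gain_adjustment_db(measured):+.1f} dB of preamp gain")  # +7.0 dB
```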
01:01
Now that the conferencing mics have proper gain structure,
01:04
we’ll next want to check the signal to noise ratio of each microphone.
01:09
It’s fairly easy, as we’ll simply want to be quiet for a few moments
01:13
and let the mic sense only the noise floor of the room.
01:17
The only other consideration is that we want to do this while any air conditioning,
01:22
projector fans or other noise sources in the room are running.
01:26
If the noise floor of the room as sensed by the microphone is below -35dBFS,
01:32
it meets the SNR requirements for a conferencing system.
01:35
If not, external methods of noise control are recommended
01:38
to get the required 15dB of signal to noise at each microphone.
01:43
We might think we simply need to turn on the noise reduction in the algorithm,
01:47
but it will most likely not function well in a poor environment.
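Here's a small sketch of that signal-to-noise check in Python, assuming the mics are already calibrated to -20dBFS nominal; the mic names and noise-floor readings are hypothetical.

```python
# Sketch of the signal-to-noise check, assuming mics are calibrated to -20 dBFS nominal.
# The mic names and noise-floor readings are hypothetical.

NOMINAL_DBFS = -20.0
REQUIRED_SNR_DB = 15.0  # conferencing requirement stated above

def meets_snr(noise_floor_dbfs: float) -> bool:
    """True if the room noise floor leaves at least 15 dB of SNR below nominal."""
    return (NOMINAL_DBFS - noise_floor_dbfs) >= REQUIRED_SNR_DB

for mic, noise_floor in {"Ceiling 1": -42.0, "Table 2": -31.5}.items():
    status = "OK" if meets_snr(noise_floor) else "needs external noise control"
    print(f"{mic}: noise floor {noise_floor} dBFS -> {status}")
```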
01:51
Once the microphones are set to nominal level and we know the signal to noise meets the standard,
01:56
we should be able to leave them at a unity gain throughout the signal chain.
02:00
The next step is to calibrate the near end side of the system.
02:04
For this step we’ll want to first turn the amplifiers all the way down.
02:09
If there are no amplifier controls,
02:11
reduce the Max RMS setting of the Q-SYS output block all the way down to -40dBu.
02:17
Now start playing a program source and calibrate it to nominal level of -20dBFS at the Q-SYS input block.
02:25
Leave this program source playing with unity gain through the entire signal path to the output.
02:32
Using an SPL meter,
02:33
bring the amplifier controls or Max RMS setting up until the SPL reaches a comfortable listening level.
02:41
The standard for conferencing systems is 70 to 75dBSPL.
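A quick sketch of this loudspeaker calibration step, with a hypothetical SPL meter reading, looks like this in Python.

```python
# Sketch of the loudspeaker level step: with the program source at -20 dBFS nominal
# and unity gain to the output, raise the amplifier or Max RMS control until the
# room measures 70-75 dB SPL. The SPL reading here is hypothetical.

TARGET_SPL_RANGE = (70.0, 75.0)

def level_change_needed(measured_spl: float, target_spl: float = 72.5) -> float:
    """dB of additional output gain to reach the middle of the target range."""
    return target_spl - measured_spl

measured = 64.0  # hypothetical SPL meter reading at the listening position
print(f"Raise the amplifier/Max RMS control by about {level_change_needed(measured):.1f} dB")
```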
02:45
Finally, we’re ready to make a test call.
02:48
You’ll want to have a trusted person on the other end
02:51
who can give you qualified feedback about the level and audio quality at the far end.
02:56
Adjust your caller’s far end audio source to reach the -20dBFS nominal level.
03:02
With unity gain through the signal chain, you should
03:05
measure the same 70-75dBSPL in the room as you did with the program source.
03:11
If the far end caller tells you there is noise at the far end, enable the noise reduction and set its level.
03:17
Keep in mind that the noise reduction algorithm is designed to remove constant,
03:21
steady state noise, not things such as the noise of passing traffic, talkers in the hallway, etc.
03:28
Engage the noise reduction with the NR enable button
03:31
and apply only the amount of noise reduction needed to eliminate the noise.
03:36
Applying too much noise reduction could adversely affect the quality of the mic signal at the far end.
03:41
If noise reduction is required, you’ll want to make sure it’s applied to all the microphones in the room.
03:48
Now we’d like to check the performance of the AEC algorithm when only the far end is talking.
03:54
Looking at each conferencing mic, we’ll want to confirm that the RMLR meter is showing green as in the diagram.
04:03
This means we want the reference signal level to match the level of the resulting signal at the microphone.
04:08
In most rooms this will require that some attenuation is applied at the reference signal,
04:13
which can be done in the AEC block itself.
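As a rough sketch of sizing that reference attenuation, the Python below compares hypothetical meter readings for the reference and for the far-end signal picked up at the mic; the values are examples, not measurements.

```python
# Sketch of sizing the AEC reference trim while only the far end talks:
# the goal is for the reference level to roughly match the far-end signal
# level arriving back at each mic. Meter readings below are hypothetical.

def reference_trim_db(reference_dbfs: float, mic_pickup_dbfs: float) -> float:
    """Attenuation (negative) or gain (positive) to apply to the AEC reference."""
    return mic_pickup_dbfs - reference_dbfs

# Example: reference meters at -20 dBFS, far-end speech at the mic meters at -32 dBFS.
print(f"Trim the reference by {reference_trim_db(-20.0, -32.0):+.1f} dB")  # -12.0 dB
```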
04:16
As previously stated,
04:18
it’s a good idea to mute any conferencing mic signals in the DSP after the AEC processing block.
04:24
That would be the first choice, but sometimes the AV designer doesn’t have that option.
04:30
The microphone audio could come from a Dante enabled microphone for example,
04:34
without a way to get a mute signal back to Q-SYS.
04:37
The mic mute button would then most likely completely mute the Dante audio feed to Q-SYS from that mic.
04:43
Q-SYS has at least one option to accommodate this scenario.
04:47
The ‘hold if mic level below’ setting holds the AEC convergence
04:52
when the mic audio level goes below the setting shown.
04:56
If this is set to be just a few dB below the noise floor of the mic when unmuted,
05:01
the algorithm will hold there through the mute state and have less work to do when it’s unmuted again.
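A minimal sketch of choosing that threshold might look like this; the noise-floor value and the few-dB margin are hypothetical, so confirm against the actual mic meter.

```python
# Sketch of choosing the 'hold if mic level below' threshold: set it a few dB
# under the mic's unmuted noise floor so convergence holds during a mute.
# The margin and noise-floor value are hypothetical examples.

def hold_threshold_dbfs(unmuted_noise_floor_dbfs: float, margin_db: float = 3.0) -> float:
    """Threshold a few dB below the noise floor measured with the mic unmuted."""
    return unmuted_noise_floor_dbfs - margin_db

print(f"Set 'hold if mic level below' to {hold_threshold_dbfs(-42.0):.0f} dBFS")  # -45 dBFS
```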
05:08
Starting in Q-SYS version 8.1 there’s also a setting
05:12
to hold the AEC convergence if the reference level goes below a certain level.
05:16
This is useful if the far end signal must be muted before it hits the AEC reference block.
05:22
If there’s still some residual echo to your far end test-caller,
05:26
then it’s time to engage residual echo suppression, or RES.
05:30
First use the button to enable the feature, then apply only enough suppression to remove the residual echo.
05:37
After that, it’s time to test the system in the double-talk condition.
05:42
Have far end and local talkers speaking at once and see if echo reaches the far end.
05:48
If so, increase the RES percentage to just enough to eliminate any echo.
05:53
You’ll want to be sparing with this,
05:55
as conference participants may start to feel like the system is half-duplex as it’s increased.
06:01
As an aside here, note that one mic in a room might be harder to calibrate than others due to placement, etc.
06:08
As a first pass, it might make sense to apply the same settings to each mic,
06:13
but if the test caller hears echo it may get more complicated.
06:17
You’ll want to isolate exactly which mics in the room
06:20
are returning the echo and mute those that aren’t problematic.
06:24
From there you can fine-tune each problem microphone independently
06:29
of the others until you have the performance perfectly dialed in.
06:32
Of course many rooms are required to use multiple methods of conferencing.
06:37
Sometimes they must be used simultaneously.
06:39
If this is your room, first do a test call with each conferencing mode independently
06:45
and then finally all methods at once, making all necessary adjustments.
06:50
So what if you’ve tried the steps as outlined and still have echo?
06:56
Let’s think about a few troubleshooting strategies.
06:59
One common problem is that the ‘echo’ heard by the far end isn’t echo at all.
07:06
A quick way to check this is to mute all the conferencing mics and see if the far end still hears themselves.
07:13
If so, the conferencing receive signals are accidentally looped or misrouted.
07:18
If there are any large matrix mixers in your design,
07:21
carefully examine the crosspoints to make sure that the far end receive signals
07:26
aren’t accidentally being sent directly back to the transmits.
07:29
This routing can be tricky when multiple conferencing modes are required, so keep that in mind.
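One way to picture that crosspoint audit is the small Python sketch below; the input and output names and crosspoint states are hypothetical placeholders, not actual Q-SYS component names.

```python
# Sketch of auditing matrix-mixer crosspoints for receive-to-transmit loops.
# The input/output names and routed states are hypothetical placeholders.

# crosspoints[(source, destination)] = True means that crosspoint is unmuted/routed.
crosspoints = {
    ("FarEnd Rx", "Room Speakers"): True,
    ("FarEnd Rx", "FarEnd Tx"): True,   # <-- loops the far end straight back to itself
    ("Mic 1 (post-AEC)", "FarEnd Tx"): True,
}

receive_inputs = {"FarEnd Rx"}
transmit_outputs = {"FarEnd Tx"}

for (src, dst), routed in crosspoints.items():
    if routed and src in receive_inputs and dst in transmit_outputs:
        print(f"Possible loop: '{src}' is routed directly to '{dst}'")
```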
07:35
If an external video codec is being used, the codec itself can be a source of such a loop.
07:41
Many codecs have a ‘reinforcement mode’ or reinforcement output
07:46
that includes the conferencing mic signals and the far end.
07:50
The intent of such an output is to plug directly into an amplifier for voice lift and conferencing in the room.
07:56
If this type of codec output is plugged into Q-SYS as a conferencing input,
08:01
you can hear some very interesting echo-like effects.
08:04
In addition, the microphones will end up in their own AEC reference signals.
08:08
This usually results in the mic signals sounding modulated at the far end.
08:12
The Q-SYS hover monitor is an easy way to check for this.
08:16
Hover over the input node and if you hear the local microphones, you know this is a potential problem.
08:23
Other sources can cause similar issues, so an easy way to find the culprit is to systematically mute each source
08:30
and microphone until you find the one that, when muted, makes the echo go away.
08:35
You then know exactly where to focus your problem-solving skills.
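A simple way to picture that elimination pass is the sketch below; the source list is a hypothetical placeholder, and the muting itself is still a manual step in the design.

```python
# Sketch of the mute-and-listen elimination pass: mute one source or mic at a
# time and ask whether the far end still hears the echo. The source list is
# hypothetical; the actual muting is done manually in the design.

sources = ["Program PC", "Codec Rx", "Mic 1", "Mic 2", "Mic 3"]

for source in sources:
    answer = input(f"Mute '{source}'. Does the far end still hear echo? [y/n] ")
    if answer.strip().lower().startswith("n"):
        print(f"'{source}' is where to focus your problem-solving.")
        break
    # Unmute before moving to the next candidate so each test stays independent.
    print(f"Unmute '{source}' and continue.")
```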
08:40
If you know all routings and gain structure are correct
08:44
and you’ve done everything you can to calibrate AEC with no success,
08:48
it’s possible you need to extend the tail length of the AEC algorithm.
08:52
The tail length required will depend on the size, shape and reflectivity of the room.
08:58
If you’re using the 100ms tail length, it’s very possible you’ll need to extend to 200ms.
09:05
The 200ms algorithm can handle about 85% of conference rooms,
09:09
but if the room is very large and very reflective it might require 300 or 400ms tail lengths.
09:16
Remember the longer the tail length, the fewer algorithms can fit in a given core.
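As a rough rule of thumb (not a Q-SYS specification), the choice might be sketched like this; only the 100 to 400ms options and the 85% figure come from the discussion above, and the room categories are hypothetical.

```python
# Sketch of the tail-length trade-off described above. The room categories and
# their mapping to tail lengths are rough rules of thumb, not Q-SYS specs;
# only the 100/200/300/400 ms options come from the discussion above.

TAIL_LENGTH_MS = {
    "small or well-treated room": 100,
    "typical conference room": 200,        # handles roughly 85% of rooms
    "large reflective room": 300,
    "very large, very reflective room": 400,
}

room = "large reflective room"  # hypothetical example
print(f"Start with a {TAIL_LENGTH_MS[room]} ms tail length, then re-test for echo")
```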
09:22
The eternal struggle here is that architects and
09:25
end users love elaborate room shapes made of very reflective surfaces.
09:31
Glass, hardwoods, and ceramic tile are not the greatest environment for conferencing,
09:36
but it’s what we get to work with as AV engineers and technicians.
09:40
In some cases it’s necessary to propose that acoustic treatment
09:43
be used to absorb some reflections and provide the best intelligibility and response.
09:49
Something like this should do the trick!
09:51
In all seriousness, there are many tasteful,
09:54
low profile acoustic treatments available that can help maximize performance.
09:59
That's it! Thanks for watching.