Monday, July 18, 2016

Why sending garbage is required before radio transmission (aka preamble for AGC radio receivers)

Sometimes there is a trade-off in doing things "low level", but how much I learn every day!

I eventually discovered, the hard way, why radio receivers with automatic gain control (AGC) need those compulsory, seemingly "useless" preambles: they let the receiver adjust its volume properly before the real data arrives.

First: send a calibrating preamble (alternating zeros/ones)

No signal? The gain is adjusted to get one. So we read radio noise!
By the way, this radio module took about 30ms to fully adjust its gain.
Once I re-think about it, it is plain obvious... a receiver does not know whether someone is actually transmitting or not. So when nothing is transmitted, it acts just like anybody on an audio system who wants to check whether the source is plugged in: the receiver increases the volume (gain) until it catches something. But when nobody is really transmitting, increasing the gain eventually ends up amplifying noise, i.e. seemingly random 0s and 1s. As humans, we know we are hearing static and that the input device is not transmitting. But the receiver has no way to know.

Without proper calibration, the start of the real data gets buried in the noise until the receiver manages to set its gain back to actual transmission levels (i.e. it reduces the amplification until the signal is just below saturation -- again, like a human who quickly turns the volume down after the source is plugged in).





So the deal is to "pre-heat" the transmission by sending quickly alternating zeros and ones. This phase must be long enough for the receiver to calibrate its levels to the actual physical transmission levels, so its duration directly depends on the hardware.
From left to right: amplified noise due to the automatic gain control,
followed by the preamble (alternating 0 and 1), and, eventually, actual content.
In reality, the content itself is prefixed with a clear and unique "sync symbol",
so the receiving software knows when the real data starts. It cannot rely
on the preamble, since it will most probably be corrupted.
We now have a clear signal, so are we done? Nope: a special, unique "sync symbol" must then be sent, which is used one layer above in the communication model to mark the beginning of the actual data.
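Putting the two ideas together, a transmitted frame looks like: preamble, then sync symbol, then payload. Here is a minimal Python sketch of that framing; the byte values and lengths are illustrative assumptions, not the exact ones used by any particular radio module.

```python
PREAMBLE_BYTE = 0xAA     # 10101010: quickly alternating zeros and ones
SYNC_WORD = b"\x2D\xD4"  # arbitrary example: a distinctive bit pattern

def build_frame(payload: bytes, preamble_len: int = 8) -> bytes:
    """Prefix the payload with a gain-calibration preamble and a sync word."""
    return bytes([PREAMBLE_BYTE] * preamble_len) + SYNC_WORD + payload

def find_payload(received: bytes) -> bytes:
    """On the receiving side, skip everything (amplified noise, possibly
    corrupted preamble) up to and including the sync word."""
    index = received.find(SYNC_WORD)
    if index < 0:
        raise ValueError("sync word not found: no frame in this buffer")
    return received[index + len(SYNC_WORD):]
```

Note that the receiver never tries to interpret the preamble itself: it only scans for the sync word, precisely because the first preamble bits are expected to be corrupted while the gain settles.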

So do we finally have a reliable radio stream from now on? Still nope: noise remains possible even within a well-tuned and synchronized radio signal. This is why error detection and other correcting codes have to be added to the stream of bits -- but this is another story.
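Just to give a taste of that other story, here is a Python sketch of one of the simplest error-detection schemes: a single XOR checksum byte appended to the payload. Real protocols use stronger codes (CRCs, forward error correction); this is only meant to illustrate the idea.

```python
def append_checksum(payload: bytes) -> bytes:
    """Append one byte that is the XOR of all payload bytes."""
    checksum = 0
    for byte in payload:
        checksum ^= byte
    return payload + bytes([checksum])

def verify_checksum(frame: bytes) -> bool:
    """XOR-ing the payload together with its checksum must give 0."""
    checksum = 0
    for byte in frame:
        checksum ^= byte
    return checksum == 0
```

A single flipped bit anywhere in the frame makes the check fail; two flips of the same bit position in different bytes cancel out, which is exactly why stronger codes exist.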

Second: use transitions instead of levels

A second issue is: never "hold" any value for long! When your data contains a long run of zeros, or a long series of ones, the receiver may think it has to auto-tune its levels again (both gain and offset are tuned!). There is no such thing as "being done with the calibration": the process is permanent, to compensate for electromagnetic perturbations or moving devices. So a constant signal would again ruin the data.
The sender held its "1" for too long, which results in a corrupted reception
(the radio receiver starts re-tuning its reception: offset and gain).
No level should be held for more than 10 ms with this radio module!

The solution, here again, is to ensure that no level is kept constant for too long. This is what all low-level radio protocols implement, and one of the easiest and historically earliest strategies is the Manchester code below.
Manchester encoding: instead of levels (data), the signal values are coded with transitions.
1 is a switch from high to low, 0 is a switch from low to high (at the clock downwards pulse).
This way, no constant radio signal is ever sent on purpose (see the three consecutive 1s?),
and the automatic gain control of the receiver keeps on doing its job properly.
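A minimal Python sketch of Manchester coding, using the convention from the figure: a 1 is a high-to-low transition, a 0 is a low-to-high transition. (Some standards, such as IEEE 802.3 Ethernet, use the opposite convention; either works as long as both sides agree.)

```python
def manchester_encode(bits):
    """Each data bit becomes two half-bit levels, so there is always a
    transition in the middle of every bit period."""
    levels = []
    for b in bits:
        levels.extend((1, 0) if b else (0, 1))
    return levels

def manchester_decode(levels):
    """Read the levels two by two: (1, 0) is a 1, (0, 1) is a 0.
    (0, 0) or (1, 1) means a missing transition, i.e. a corrupted signal."""
    bits = []
    for i in range(0, len(levels) - 1, 2):
        pair = (levels[i], levels[i + 1])
        if pair == (1, 0):
            bits.append(1)
        elif pair == (0, 1):
            bits.append(0)
        else:
            raise ValueError("no mid-bit transition: %r" % (pair,))
    return bits
```

Encoding three consecutive 1s gives the levels 1, 0, 1, 0, 1, 0: the line keeps toggling, so the receiver's AGC always sees activity.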



Links:
Automatic gain control is well documented on Wikipedia, but the article does not talk about the preamble.
There is some interesting theory and some nice graphs in the first chapters of this document.