Dealing with high-def, low-delay, SDR-based video transmissions

By Wei Zhou

Application Engineer

Analog Devices

November 21, 2017


Integrated RF agile transceivers are not only widely employed in software-defined radio architectures in cellular telephone base stations, but also for wireless HD video transmission.

Integrated RF agile transceivers are not only widely employed in software-defined radio (SDR) architectures in cellular telephone base stations, such as multiservice distributed access system (MDAS) and small cell, but also for wireless HD video transmission for industrial, commercial, and military applications, such as unmanned aerial vehicles (UAVs).

In a simplified wireless video transmission scheme, the camera captures the image and sends video data to a baseband processor via Ethernet, HDMI, USB, or another interface. Image encoding/decoding can be handled by dedicated hardware or an FPGA. The RF front end includes the switch, LNA, and PA, which connect to the programmable integrated transceiver.

[Figure 1 | Wireless video transmission diagram]

Table 1 shows the significant difference between the uncompressed and compressed data rates. By using High Efficiency Video Coding (HEVC), also known as H.265 and MPEG-H Part 2, we can decrease the data rate and save bandwidth. AVC (H.264 or MPEG-4 Part 10) is currently one of the most common formats for recording, compressing, and distributing video content; HEVC represents a major step forward in video compression technology and is one of several potential successors to it.

Compressed Data Rate = Uncompressed Data Rate / Compression Ratio

In the 1080p example, the data rate is 14.93 Mbits/s after compression, which then can be handled by the baseband processor and the wireless PHY layer.

[Table 1 | Compressed data rate. Assumptions include a 24-bit video bit depth and a 60-frame/s frame rate.]
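
As a quick check on Table 1, the short Python sketch below reproduces the 1080p row; the 200:1 compression ratio is an assumed value chosen to match the 14.93 Mbits/s figure, while the bit depth and frame rate follow the table's assumptions.

# Reproduce the 1080p row of Table 1 (200:1 compression ratio assumed).
def uncompressed_rate_bps(width, height, bit_depth=24, fps=60):
    """Raw video data rate in bits per second."""
    return width * height * bit_depth * fps

def compressed_rate_bps(uncompressed_bps, compression_ratio):
    """Compressed Data Rate = Uncompressed Data Rate / Compression Ratio."""
    return uncompressed_bps / compression_ratio

raw = uncompressed_rate_bps(1920, 1080)                                    # ~2.99 Gbits/s
print(f"Uncompressed: {raw / 1e9:.2f} Gbits/s")
print(f"Compressed:   {compressed_rate_bps(raw, 200) / 1e6:.2f} Mbits/s")  # ~14.93 Mbits/s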

Signal bandwidth

The AD9361/AD9364 integrated transceiver ICs support channel bandwidths from less than 200 kHz to 56 MHz by changing the sample rate, digital filters, and decimation. The parts are zero-IF transceivers with I and Q channels to transmit complex data. The complex data consists of real and imaginary parts, corresponding to I and Q respectively, which occupy the same frequency bandwidth and thus double the spectral efficiency compared with a real-only signal. The compressed video data can be mapped to the I and Q channels to create constellation points, known as symbols. Figure 2 shows a 16 QAM example in which each symbol represents four bits.

[Figure 2 | 16 QAM constellation.]
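
To make the bit-to-symbol mapping concrete, here is a minimal 16 QAM mapper sketch in Python/NumPy. The Gray-coded constellation and the unit-average-power scaling are illustrative choices, not requirements of any particular standard.

import numpy as np

# Gray-coded 16 QAM: two bits select the I level, two bits select the Q level.
_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def qam16_modulate(bits):
    """Map a bit sequence (length divisible by 4) to complex 16 QAM symbols."""
    groups = np.asarray(bits).reshape(-1, 4)
    i = np.array([_LEVELS[(b0, b1)] for b0, b1 in groups[:, :2]])
    q = np.array([_LEVELS[(b2, b3)] for b2, b3 in groups[:, 2:]])
    return (i + 1j * q) / np.sqrt(10)      # normalize to unit average symbol power

symbols = qam16_modulate(np.random.randint(0, 2, 4000))   # 1000 symbols from 4000 bits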

For a single-carrier system, the I and Q digital waveforms need to pass through a pulse-shaping filter before the DAC in order to confine the transmitted signal to a limited bandwidth. An FIR filter can be used for pulse shaping, and the filter response is illustrated in Figure 3.

[Figure 3 | Pulse shaping filter response.]
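
A raised-cosine response is a common choice for this pulse-shaping FIR. The NumPy sketch below generates raised-cosine taps for an assumed roll-off factor and oversampling ratio, purely to illustrate the filtering step shown in Figure 3.

import numpy as np

def raised_cosine_taps(alpha, sps, span=8):
    """Raised-cosine pulse-shaping FIR taps.

    alpha : roll-off factor (e.g., 0.25 as in Table 2)
    sps   : samples per symbol (oversampling ratio)
    span  : filter length in symbols
    """
    t = np.arange(-span * sps // 2, span * sps // 2 + 1) / sps   # time in symbol periods
    num = np.sinc(t) * np.cos(np.pi * alpha * t)
    den = 1.0 - (2.0 * alpha * t) ** 2
    singular = np.isclose(den, 0.0)                # t = +/- 1/(2*alpha): use the known limit
    h = np.where(singular,
                 (np.pi / 4.0) * np.sinc(1.0 / (2.0 * alpha)),
                 num / np.where(singular, 1.0, den))
    return h / np.sum(h)                           # unity gain at DC

taps = raised_cosine_taps(alpha=0.25, sps=4)
# shaped = np.convolve(upsampled_symbols, taps)    # applied before the DAC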

To maintain the fidelity of the information, there’s a minimum signal bandwidth corresponding to the symbol rate. The symbol rate is proportional to the compressed video data rate, as shown in the equation below. In an OFDM system, the complex data is modulated onto subcarriers using the IFFT, which likewise confines the transmitted signal to a limited bandwidth.

Symbol Rate = Bit Rate / The Number of Bits Transmitted with Each Symbol

The number of bits transmitted with each symbol depends on the modulation order. The occupied signal bandwidth is given by:

RF Occupied Signal BW = Symbol Rate x (1 + α)

in which α is the filter bandwidth parameter.

From the previous formulas we can deduce this equation:

RF Occupied Signal BW = (Compressed Data Rate / The Number of Bits Transmitted with Each Symbol) × (1 + α)

Hence, we can calculate the RF occupied signal bandwidth as summarized in Table 2.

[Table 2 | Occupied RF signal bandwidth for various modulation orders (α = 0.25).]
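
The Table 2 values follow directly from the formulas above. As an illustration, the sketch below evaluates the 1080p case (14.93 Mbits/s after compression) for a few assumed modulation orders with α = 0.25.

# RF Occupied Signal BW = (Compressed Data Rate / bits per symbol) x (1 + alpha)
def occupied_bw_hz(compressed_rate_bps, bits_per_symbol, alpha=0.25):
    symbol_rate = compressed_rate_bps / bits_per_symbol
    return symbol_rate * (1.0 + alpha)

rate = 14.93e6                                    # 1080p60 after compression (Table 1)
for name, bits in [("QPSK", 2), ("16 QAM", 4), ("64 QAM", 6)]:
    print(f"{name:7s}: {occupied_bw_hz(rate, bits) / 1e6:.2f} MHz")
# QPSK ~9.33 MHz, 16 QAM ~4.67 MHz, 64 QAM ~3.11 MHz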

The AD9361/AD9364, with up to 56 MHz of signal bandwidth, support all of the video format transmissions in Table 2, and even higher frame rates. Higher order modulation occupies less bandwidth because each symbol carries more bits, but it requires a higher SNR to demodulate.

Transmission distance and transmitter power

In applications such as UAVs, the maximum transmission distance is a critical parameter. However, it’s equally important that communication not be cut off even at a limited distance. In addition to free-space attenuation, oxygen, water, and other obstacles attenuate the signal. Figure 4 shows the wireless communication channel-loss model.

[Figure 4 | Wireless communication channel loss model.]

Receiver sensitivity is normally taken as the minimum input signal (Smin) required to demodulate or recover the information from the transmitter. Once the sensitivity is known, the maximum transmission distance can be calculated under a few assumptions. The sensitivity is given by:

Smin = 10log(kT0B) + NF + (S/N)min = -174 dBm + 10logB + NF + (S/N)min

where:

(S/N)min is the minimum signal-to-noise ratio needed to process a signal,

NF is the receiver’s noise figure,

k is Boltzmann’s constant = 1.38 × 10⁻²³ joule/K,

T0 is the absolute temperature of the receiver input (Kelvin) = 290 K,

and B is the receiver bandwidth (Hz).
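
A short numeric sketch of the sensitivity formula follows; the noise figure, bandwidth, and required SNR below are assumed example values, not AD9361 specifications.

import math

def receiver_sensitivity_dbm(bandwidth_hz, nf_db, snr_min_db):
    """Smin = -174 dBm/Hz + 10log(B) + NF + (S/N)min."""
    return -174.0 + 10.0 * math.log10(bandwidth_hz) + nf_db + snr_min_db

# Example: 9.33 MHz channel, 4 dB noise figure, 12 dB required SNR.
print(f"{receiver_sensitivity_dbm(9.33e6, 4.0, 12.0):.1f} dBm")    # about -88.3 dBm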

The parameter (S/N)min depends on the modulation/demodulation order. At the same SNR, lower order modulation yields a lower symbol error rate; at the same symbol error rate, higher order modulation needs a higher SNR to demodulate. So when the transmitter is far from the receiver, the signal is weaker and the SNR can no longer support high order demodulation.

To keep the transmitter online and maintain the same video format and data rate, the baseband should fall back to lower order modulation at the expense of increased bandwidth. This helps ensure that the received images aren’t blurred. Fortunately, SDR with digital modulation and demodulation offers the capability to change the modulation on the fly. The previous analysis assumes that the transmitter RF power is constant.
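
The fallback logic described above can be as simple as a threshold table. The sketch below picks the highest modulation order whose SNR requirement is met; the threshold values are placeholders for illustration only.

# Placeholder SNR thresholds (dB) for reliable demodulation of each order.
MODULATION_TABLE = [
    ("64 QAM", 6, 22.0),
    ("16 QAM", 4, 16.0),
    ("QPSK",   2, 10.0),
    ("BPSK",   1,  7.0),
]

def select_modulation(measured_snr_db):
    """Return the highest-order modulation whose SNR requirement is met."""
    for name, bits_per_symbol, required_snr_db in MODULATION_TABLE:
        if measured_snr_db >= required_snr_db:
            return name, bits_per_symbol
    return None, 0          # link margin too low; the video rate must drop instead

print(select_modulation(12.5))    # ('QPSK', 2): wider bandwidth, more robust link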

While greater RF transmit power with the same antenna gain will reach a more distant receiver of the same sensitivity, the maximum transmit power must comply with FCC/CE radiation standards. In addition, the carrier frequency influences the transmission distance: as a wave propagates, it loses power as the wavefront spreads. The free-space loss is given by:

Afs = 20log (4πR / λ) = 20log (4πRf / C)

Here, R is the distance, λ is the wavelength, f is the frequency, and C is the speed of light. Therefore, a higher carrier frequency suffers more loss over the same free-space distance. For example, a 5.8-GHz carrier is attenuated about 7.66 dB more than a 2.4-GHz carrier over the same transmission distance.
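
Plugging both carrier frequencies into the free-space loss formula confirms the 7.66 dB difference; the 1 km distance in the sketch below is an arbitrary example, since the difference between the two bands is independent of distance.

import math

C = 3.0e8                                     # speed of light, m/s

def free_space_loss_db(distance_m, freq_hz):
    """Afs = 20log(4 * pi * R * f / C)."""
    return 20.0 * math.log10(4.0 * math.pi * distance_m * freq_hz / C)

extra = free_space_loss_db(1000.0, 5.8e9) - free_space_loss_db(1000.0, 2.4e9)
print(f"Extra loss at 5.8 GHz vs. 2.4 GHz: {extra:.2f} dB")        # ~7.66 dB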

RF frequency and switching

The AD9361/AD9364 have a programmable frequency range from 70 MHz to 6 GHz. This covers most NLOS applications, including various licensed and unlicensed bands such as 1.4 GHz, 2.4 GHz, and 5.8 GHz. The 2.4-GHz band is widely used for Wi-Fi, Bluetooth, and IoT short-range communication, making it increasingly crowded. Using it for both wireless video transmission and control signals increases the chances of interference and instability, which creates undesirable and often dangerous situations for UAVs.

Using frequency switching to stay on a clean channel makes the data and control links more reliable. When the transmitter senses that a frequency is crowded, it automatically switches to another band. For example, two UAVs using the same frequency and operating in close proximity will interfere with each other’s communications. Automatically switching the LO frequency and reselecting the band helps maintain a stable wireless link. Adaptively selecting the carrier frequency or channel at power-up is one of the hallmark features of high-end UAVs.

Frequency hopping

Fast frequency hopping, which is widely used in electronic countermeasures (ECM), also helps avoid interference. Normally, to hop the frequency, the PLL needs to relock after each change. This includes writing the frequency registers, the VCO calibration time, and the PLL lock time, so the hopping interval is on the order of hundreds of microseconds. Figure 5 shows the transmitter LO frequency hopping from 816.69 to 802.03 MHz. The AD9361 is used in normal frequency-changing mode with a 10-MHz reference frequency, and the transmitter RF output frequency jumps from 814.69 to 800.03 MHz.

[Figure 5 | Hopping frequency from 804.5 to 802 MHz within 500 µs.]

The hopping time is measured with an E5052B signal source analyzer, as shown in Figure 5. The VCO calibration and PLL lock time is about 500 µs, according to Figure 5b. The E5052B can capture the PLL transient response: Figure 5a shows the wideband transient measurement, while Figures 5b and 5d provide much finer resolution for the frequency and phase transients during hopping. Figure 5c shows the output power response.

A 500 µs interval is very long for a hopping application. However, the transceivers include a fast lock mode that achieves faster-than-normal frequency changes by storing sets of synthesizer programming information (called profiles) in the device’s registers or in the baseband processor’s memory.
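
As a rough sketch of how fast lock profiles might be exercised from software, the Python fragment below uses pylibiio against the Linux ad9361-phy driver. The channel and attribute names (altvoltage1 as the TX LO, fastlock_store, fastlock_recall) are assumptions based on ADI’s driver convention; verify them against the driver documentation for your platform.

# Assumed IIO attribute names; confirm against the ad9361-phy driver docs.
import iio

ctx = iio.Context("ip:192.168.2.1")                # example IIO context URI
phy = ctx.find_device("ad9361-phy")
tx_lo = phy.find_channel("altvoltage1", True)      # TX LO (output channel)

# Store the two hop frequencies as fast lock profiles 0 and 1.
for profile, freq_hz in enumerate((882_000_000, 802_000_000)):
    tx_lo.attrs["frequency"].value = str(freq_hz)
    tx_lo.attrs["fastlock_store"].value = str(profile)

# Hopping then becomes a single profile recall instead of a full retune
# with SPI frequency writes and VCO calibration.
tx_lo.attrs["fastlock_recall"].value = "1"         # jump to 802 MHz
tx_lo.attrs["fastlock_recall"].value = "0"         # jump back to 882 MHz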

Figure 6 shows the test result of using fast lock mode to implement hopping from 882 to 802 MHz. The time drops to less than 20 µs, according to the phase response in Figure 6d. The phase curve is drawn with reference to the 802-MHz phase. The SPI write time and the VCO calibration time are both eliminated in this mode because the frequency information and calibration results are saved in profiles.

[Figure 6 | Hopping frequency from 882 to 802 MHz within 20 µs in fast lock mode.]

Implementing the PHY layer

Orthogonal frequency division multiplexing (OFDM) is a form of signal modulation that divides a high data rate stream across many slowly modulated, closely spaced narrowband subcarriers. This makes it less sensitive to frequency-selective fading. The disadvantages are a high peak-to-average power ratio and sensitivity to carrier offset and drift. OFDM is widely applied in wideband wireless communication PHY layers.

The critical OFDM technologies include the IFFT/FFT, frequency synchronization, sampling time synchronization, and symbol/frame synchronization. The IFFT/FFT should be implemented in the FPGA as efficiently as possible. The subcarrier spacing is also important: it should be large enough to tolerate the Doppler shift of a mobile link, yet small enough to pack more symbols into the limited bandwidth and increase spectral efficiency. COFDM refers to the combination of channel coding and OFDM modulation. With its strong resistance to signal fading and its forward error correction (FEC), COFDM can carry video signals from a moving object. The coding increases the signal bandwidth, but the trade-off is usually worthwhile.
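
As a bare-bones illustration of the IFFT step, the NumPy sketch below maps one block of symbols onto subcarriers and prepends a cyclic prefix. The FFT size, number of used subcarriers, and cyclic prefix length are simplified assumptions, not a description of any particular COFDM standard.

import numpy as np

def ofdm_modulate(symbols, n_fft=64, n_used=52, cp_len=16):
    """Map one block of complex symbols onto subcarriers, IFFT, add cyclic prefix."""
    assert len(symbols) == n_used
    half = n_used // 2
    spectrum = np.zeros(n_fft, dtype=complex)
    spectrum[1:half + 1] = symbols[:half]            # positive-frequency subcarriers
    spectrum[-half:] = symbols[half:]                # negative-frequency subcarriers
    time_domain = np.fft.ifft(spectrum) * np.sqrt(n_fft)
    return np.concatenate([time_domain[-cp_len:], time_domain])   # cyclic prefix

rng = np.random.default_rng(0)
qpsk = ((rng.integers(0, 2, 52) * 2 - 1) + 1j * (rng.integers(0, 2, 52) * 2 - 1)) / np.sqrt(2)
ofdm_symbol = ofdm_modulate(qpsk)                    # 80 complex baseband samples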

The advantages over Wi-Fi

Drones equipped with Wi-Fi are easy to connect to a cell phone, laptop, or other mobile device, which makes them convenient to use. But for wireless video transmission in UAV applications, the FPGA plus AD9361 solution offers some advantages over Wi-Fi. First, in the PHY layer, the transceiver’s agile frequency switching and fast hopping help avoid interference. Most integrated Wi-Fi chips operate on the crowded 2.4-GHz band and have no frequency band reselection mechanism to make the wireless connection more stable.

Second, with the FPGA plus AD9361 solution, there’s flexibility in how the transmission protocol is defined and developed. The Wi-Fi protocol is standardized and based on a two-way handshake for every data packet: the receiver must confirm that each packet arrived and that all 512 bytes in it arrived intact. If even one byte is lost, the whole 512-byte packet must be retransmitted.

While this protocol ensures data reliability, it’s complex and time consuming to re-establish the wireless data link. The TCP/IP protocol will cause high latency that results in non-real-time video and control, which can lead to a UAV crash. The SDR solution (FPGA plus AD9361) uses a one-way data stream, which means the drone in the sky transmits the video signal like a TV broadcast. There’s no time for resending packets when real-time video is the goal.
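
To contrast with the acknowledged Wi-Fi exchange, a one-way stream only needs framing on the transmit side. The sketch below shows a hypothetical sequence-numbered frame with a CRC; the layout and field sizes are invented for illustration, since the actual protocol is user-defined in the FPGA.

import struct
import zlib

def build_frame(seq, payload):
    """Hypothetical one-way frame: sequence number, length, payload, CRC32.
    A receiver simply drops corrupted frames; nothing is ever re-requested."""
    header = struct.pack(">IH", seq & 0xFFFFFFFF, len(payload))
    crc = struct.pack(">I", zlib.crc32(header + payload) & 0xFFFFFFFF)
    return header + payload + crc

frames = [build_frame(i, chunk) for i, chunk in enumerate([b"\x00" * 512] * 4)]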

In addition, Wi-Fi doesn’t offer the proper level of security for many applications. Using the encryption algorithm and user-defined protocol, the FPGA plus AD9361/AD9364 solution is less susceptible to security threats.

Furthermore, the one-way broadcast data stream delivers transmission distances two to three times those of Wi-Fi approaches. The flexibility of software-defined radio enables digital modulation/demodulation adjustments to satisfy distance requirements or to adapt to changing SNR in complex RF environments.

As shown, agile frequency band switching and fast frequency hopping make it possible to establish a more stable and reliable wireless link that withstands an increasingly complex RF environment and decreases the probability of a crash. In the protocol layer, the solution is more flexible, using a one-way transmission to reduce link establishment time and create a lower latency connection.

Wei Zhou is an applications engineer for Analog Devices, supporting the design and development of RF transceiver products and applications, especially in the wireless video transmission and wireless communication fields. He’s been working in ADI’s Central Applications Center located in Beijing, China, for five years supporting various products including DDS, PLL, high speed DAC/ADC, and clocks. He received his B.S. degree from Wuhan University, Wuhan, China, and his M.S. degree from Institute of Electronics, Chinese Academy of Science (CAS), Beijing, China.