Snapdragon chips are designed primarily for smartphones and thus incorporate interfaces native to the mobile processing environment. Qualcomm's engineers adopted MIPI interfaces to connect image sensors and displays, which makes it easy to attach the displays and cameras typically found in a smartphone. However, Snapdragon chips have found their way into other applications thanks to their performance, which sometimes means accepting data from peripherals that were never designed for the mobile environment. For example, many Sony and OmniVision image sensors use interfaces other than MIPI, such as sub-LVDS, to transfer data.
Let's look at the MIPI family of standards. The MIPI Alliance publishes a collection of standards that provide connectivity among processors and peripherals in embedded systems. They address the main requirements of the embedded world: low power consumption, high performance, small form factor, and low interference emission. The MIPI standards define methods for transmitting multimedia data, control data, and even raw data over chip-to-chip interconnects. Each application is served by a specific protocol, as shown in Figure 1.
MIPI's solution for transmitting camera signals is the Camera Serial Interface (CSI) protocol. MIPI offers two variants, CSI-2 and CSI-3. Snapdragon processors support CSI-2, which uses D-PHY as its physical layer. A MIPI D-PHY interface is composed of one clock lane and one to four data lanes, each of which can operate in low-power (LP) or high-speed (HS) mode. The standard provides for up to 2.5 Gbits/s per lane in HS mode, for a total of 10 Gbits/s on a four-lane D-PHY interface.
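The per-lane and aggregate figures above lend themselves to a quick feasibility check. The sketch below, a back-of-the-envelope calculation that ignores blanking intervals and protocol overhead, estimates the raw throughput a sensor produces and how many D-PHY lanes it would need; the 4K/60 RAW10 example is illustrative, not taken from the article.

```python
import math

D_PHY_MAX_LANE_GBPS = 2.5  # per-lane HS-mode ceiling in the D-PHY v1.2 standard

def required_throughput_gbps(width, height, fps, bits_per_pixel):
    """Raw pixel throughput, ignoring blanking and protocol overhead."""
    return width * height * fps * bits_per_pixel / 1e9

def lanes_needed(gbps, per_lane=D_PHY_MAX_LANE_GBPS):
    """Minimum number of D-PHY data lanes for the given throughput."""
    return math.ceil(gbps / per_lane)

# Example: a 3840x2160 sensor at 60 fps producing RAW10 pixels
tp = required_throughput_gbps(3840, 2160, 60, 10)
print(f"{tp:.2f} Gbit/s -> {lanes_needed(tp)} lane(s)")  # 4.98 Gbit/s -> 2 lane(s)
```

The same arithmetic shows why four lanes (10 Gbits/s) comfortably cover even demanding sensors.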
A typical D-PHY transmission is shown in Figure 2. In HS mode, each data lane is driven as a differential signal with a typical swing of 200 mV peak-to-peak. In LP mode, by contrast, the lanes carry single-ended 1.2-V signals. Special control signaling is used to enter and exit HS mode.
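That control signaling is a defined progression of line states: the D-PHY specification has the transmitter step from the LP-11 stop state through LP-01 and LP-00 before driving the lane differentially in HS mode. A minimal sketch of a checker for that progression, with the spec's timing parameters (T_LPX, T_HS-PREPARE, and so on) deliberately omitted:

```python
# HS-entry line-state order per the D-PHY spec; timing constraints omitted.
HS_ENTRY = ["LP-11", "LP-01", "LP-00", "HS-0"]

def is_valid_hs_entry(states):
    """True if the observed line states begin with the HS-entry sequence."""
    return states[:len(HS_ENTRY)] == HS_ENTRY

print(is_valid_hs_entry(["LP-11", "LP-01", "LP-00", "HS-0", "SYNC"]))  # True
print(is_valid_hs_entry(["LP-11", "LP-00", "LP-01", "HS-0"]))          # False
```

A real D-PHY receiver also validates how long each state is held, which is what the FPGA-side logic discussed below must reproduce.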
To connect a non-MIPI image sensor to a MIPI CSI interface, both the electrical signals of the physical layer and the interface protocol on the image sensor's side must be translated to MIPI CSI. The best approach is an FPGA, as shown in Figure 3. For example, a Sony sub-LVDS signal can be received by the FPGA, which extracts the pixel data and then replays it on the MIPI side using the CSI protocol.
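Replaying pixel data on the CSI side means packing it into CSI-2 packets. Each long packet starts with a four-byte header: a data identifier (virtual channel plus data type), a 16-bit little-endian word count, and an ECC byte. The sketch below assembles such a header; the layout and the RAW10 data-type code follow the CSI-2 specification, but the ECC is left as a placeholder rather than the spec's 6-bit Hamming code.

```python
RAW10 = 0x2B  # CSI-2 data-type code for RAW10 pixels

def csi2_long_packet_header(virtual_channel, data_type, word_count):
    """Build a CSI-2 long-packet header: DataID, WC (little-endian), ECC."""
    data_id = ((virtual_channel & 0x3) << 6) | (data_type & 0x3F)
    ecc = 0x00  # placeholder; a real bridge computes the spec's Hamming ECC here
    return bytes([data_id, word_count & 0xFF, (word_count >> 8) & 0xFF, ecc])

# One 3840-pixel RAW10 line packs to 3840 * 10 / 8 = 4800 payload bytes.
hdr = csi2_long_packet_header(virtual_channel=0, data_type=RAW10, word_count=4800)
print(hdr.hex())  # 2bc01200
```

In the FPGA this packing is done in logic, of course; the Python only illustrates the byte layout the CSI controller must produce.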
The designer can implement the necessary CSI controller for this protocol translation in the FPGA fabric. Lattice Semiconductor provides a reference design for such a controller; see, for example, its MIPI CSI-2 Transmit Bridge application note.
Implementing a MIPI D-PHY interface for FPGAs
Translating the electrical signals to establish connectivity at the physical layer is more challenging. Specifically, the D-PHY interface's two-mode operation, shown in Figure 2, makes it difficult to connect MIPI CSI interfaces to an FPGA. The designer can set an FPGA's IO pins to various IO standards, but a MIPI lane must operate with one IO standard in LP mode and another in HS mode. A possible remedy is to combine two FPGA pins to emulate a MIPI lane: one set to LVCMOS for LP mode and the other set to LVDS for HS mode.
The simplest way to combine the pins is via a resistor network, as proposed by Lattice. In this setup, two LVCMOS12 pins and a pair of LVDS25 pins are connected together through resistors to emulate a MIPI D-PHY lane (see Figure 4). The interface's maximum operating frequency is then determined by the internal architecture of the pins driving the signals; typically, the resistor network can achieve a maximum throughput of about 800 Mbits/s per lane.
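It is worth checking what that ~800 Mbits/s-per-lane ceiling buys in practice. The sketch below inverts the earlier bandwidth arithmetic to bound the achievable frame rate; the 1080p RAW10 example is illustrative, and blanking and protocol overhead are again ignored.

```python
RESISTOR_NET_LANE_MBPS = 800  # approximate per-lane limit of the resistor network

def max_fps(width, height, bits_per_pixel, lanes, lane_mbps=RESISTOR_NET_LANE_MBPS):
    """Upper-bound frame rate, ignoring blanking and protocol overhead."""
    link_bps = lanes * lane_mbps * 1e6
    return link_bps / (width * height * bits_per_pixel)

# 1080p RAW10 over four emulated lanes
print(f"{max_fps(1920, 1080, 10, 4):.0f} fps")  # 154 fps
```

So the resistor network is ample for 1080p-class sensors; it is the 4K, high-frame-rate parts that push past it and motivate the active-adapter approach below.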
Another approach to connecting MIPI D-PHY lanes to an FPGA is via an active adapter chip from Meticom (see Figure 5). This configuration supports high-speed communication of up to 2.5 Gbits/s per lane, or 10 Gbits/s over a four-lane interface, which is the maximum bandwidth defined in the MIPI D-PHY v1.2 standard. This solution is therefore suitable for connecting high-resolution, high-frame-rate image sensors.
Modern FPGA devices comprise reconfigurable logic fabric, block memories, and even ARM processors and high-speed interface blocks that can implement interfaces such as PCIe and USB 3.0. This opens the door to vast possibilities. For example, the system designer can use the massive processing power of a modern FPGA to process the legacy video signal before sending it to the receiver chip. Custom ISPs can be implemented inside the FPGA: a video stream can be scaled, denoised, or fed through a custom filter, and multiple video streams can be combined. These operations are typically compute-intensive when run on a processor, but the vast parallelism and different design methodology of FPGAs let system designers offload them into the programmable logic, freeing the processor for other high-level work.
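To make the kind of operation involved concrete, here is a 3x3 box blur, a simple denoising filter, over a grayscale frame. The plain Python below only illustrates the per-pixel arithmetic; in an FPGA the same computation would be pipelined with line buffers so that every clock cycle produces a pixel.

```python
def box_blur_3x3(frame):
    """Average each interior pixel with its eight neighbours."""
    h, w = len(frame), len(frame[0])
    out = [row[:] for row in frame]  # borders pass through unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(
                frame[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            ) // 9
    return out

frame = [[0, 0, 0],
         [0, 90, 0],
         [0, 0, 0]]
print(box_blur_3x3(frame))  # [[0, 0, 0], [0, 10, 0], [0, 0, 0]]
```

Every output pixel depends only on a small neighbourhood, which is exactly what makes such filters map so well onto the FPGA's parallel fabric.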
The possibilities are endless and each scenario needs to be carefully investigated to achieve the optimum design for that application. For more information and to see if FPGAs can enhance your product, please contact Intrinsyc.
Maysam Mirahmadi, Ph.D., is the FPGA Engineering Team Lead at Intrinsyc Technologies, where he works with customers to develop various architectures for video processing and accelerated computing. Prior to joining Intrinsyc, he was with IBM Canada Research and Development Center as a research scientist, where he was working on heterogeneous and cloud computing and their application in wireless communications.