Machine vision has been around in practical terms since the early 1980s, with experiments in the technology dating back even earlier. In the beginning, reading even a single letter took many long seconds, but things have progressed to the point where reading characters and bar codes, and even performing automated inspections, can be done in the blink of an eye. That's great if you're looking to improve manufacturing productivity with an off-the-shelf vision system, or perhaps experiment with a webcam and computer-based processing. But what if you want to develop your own entirely custom vision package?
You could start from scratch, but to save you from having to design your own board and interface components, Lattice Semiconductor has developed a Modular Video Interface Platform (VIP), which comes as a set of three boards. The first module is the CrossLink VIP Input Bridge Board, which takes input from a pair of Sony IMX 214 image sensors, then combines them into a single stereoscopic 1080p signal. One camera's picture fills the left half of the frame, while the other fills the right. This combined data is then transmitted to an ECP5-85 FPGA-based VIP Processor Board, which manipulates the combined image and includes a mini-USB connector for programming. The processed signal is then sent to the VIP Output Bridge Board, where it can be piped to a display via a standard HDMI cable.
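If you're capturing this side-by-side output on a computer for further processing, the first step is usually separating the two views again. A minimal sketch of that split, assuming the frame arrives as a NumPy array with the left camera occupying the left half of the width (the function name and dummy frame here are illustrative, not part of the Lattice tooling):

```python
import numpy as np

def split_side_by_side(frame):
    """Split a side-by-side stereo frame into left and right views.

    Assumes an H x W x 3 array where the left camera fills the left
    half of the width and the right camera fills the right half.
    """
    height, width, _ = frame.shape
    half = width // 2
    left = frame[:, :half]
    right = frame[:, half:]
    return left, right

# Example with a dummy 1080p frame standing in for a real capture:
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
left, right = split_side_by_side(frame)
print(left.shape, right.shape)  # each half is 1080 x 960
```

In a real pipeline the frame would come from an HDMI capture device rather than a zero-filled placeholder, but the slicing logic is the same.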
As it’s a modular system, the input, processor, and output boards can be swapped out as needed. One interesting application of this flexibility is that the video input board can be exchanged for Lattice's dual-HDMI input board, allowing you to use external cameras instead of those built into the standard input board. This could be important if you need to test your application with particular cameras, or if you'd like to experiment with different spacing between image sensors.
I was able to obtain one of these boards to examine, and upon power-up and connection to a computer monitor, it worked without any trouble, displaying the ceiling in the expected slightly different perspectives. You can see its view of a light fixture in the image below, which combines the left and right camera views into a single HDMI output (Figure 1).
[Figure 1 | Lattice Board’s view. Note difference in shades between left and right images (excluding shadow from light).]
While a module that simply powers up and works on the first try is always a pleasant experience, one thing that's noticeable is that the two cameras, in addition to their different perspectives, see things slightly differently, with one side darker than the other. On the surface this is surprising, since image processing is done after combination on the processor board. Given the different camera perspectives, and the resulting differences in light gathered, a slight variation might be expected. However, this discrepancy would need to be accommodated in any robotics, computer vision, driving assistance package, or other device that you're developing.
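One crude way to accommodate the discrepancy is a global gain correction: scale one view so its mean intensity matches the other. The sketch below is an assumption about how you might approach it, not anything supplied with the Lattice platform; a production pipeline would more likely calibrate per channel or use full histogram matching.

```python
import numpy as np

def match_brightness(left, right):
    """Scale the right view so its mean intensity matches the left.

    A simple global gain correction; real stereo pipelines would
    calibrate per-channel or histogram-match, but this shows the idea.
    """
    left_f = left.astype(np.float32)
    right_f = right.astype(np.float32)
    gain = left_f.mean() / max(right_f.mean(), 1e-6)
    right_corrected = np.clip(right_f * gain, 0, 255).astype(np.uint8)
    return left, right_corrected

# Synthetic example: a right view about 20% darker than the left
rng = np.random.default_rng(0)
left = rng.integers(50, 200, (1080, 960, 3), dtype=np.uint8)
right = (left * 0.8).astype(np.uint8)
left_out, right_out = match_brightness(left, right)
```

After correction, the two halves should have nearly identical mean brightness, which is usually enough for visual comparison even if it doesn't fix local exposure differences.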
Given the widespread use of image sensors in hardware designs, having a tool like this available should be extremely beneficial to those developing stereo vision applications. Perhaps your next augmented reality device or drone camera package will be developed with the help of this system!