Comparing linear versus nonlinear filters in image processing

May 1, 2012 OpenSystems Media

Historically, real-time or embedded image processing was limited in complexity by the cost and power constraints of the underlying silicon. With today’s sub-90 nm geometries, however, it is possible to consider complex filtering techniques that until now could only be applied in offline image data manipulation. Examining the differences between linear and nonlinear filters can help designers implement the most effective filtering technology for detecting and manipulating image information.

Filtering in image processing is a mainstay function that is used to accomplish many things, including interpolation, noise reduction, and resampling. The choice of filter is often determined by the nature of the task and the type and behavior of the data. Noise, dynamic range, color accuracy, optical artifacts, and many more details affect the outcome of filter functions in image processing.

The following discussion will explore the differences between two major categories of filtering – linear and nonlinear – as well as highlight image processing approaches that benefit from these filter types and identify situations where one filter might be preferred or required over the other.

Filter theory background

In image processing, 2D filtering techniques are usually considered an extension of 1D signal processing theory. Almost all contemporary image processing involves discrete or sampled signal processing. This contrasts with the analog, continuous-time processing that characterized television and video several generations ago. The two are related, and the foundation of discrete signal processing is derived from continuous-time signal processing theory.

Linear filters

To review and compare the two types of filtering, the first step is to briefly describe the attributes that comprise linear filtering.

Several principles define a linear system. The first is superposition, the basic definition of linearity: if the input to a linear system is x[n] = a·x1[n] + b·x2[n], then its response is y[n] = a·y1[n] + b·y2[n], where y1[n] and y2[n] are the responses to x1[n] and x2[n] individually. This property is fundamental to linear system design. The second property is shift invariance: if y[n] is the response of a linear, shift-invariant system to input x[n], then y[n-n0] is the response to input x[n-n0].
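The superposition property can be checked numerically with a small sketch. Here a 3-tap FIR smoothing kernel (the coefficient values are illustrative, not from the article) is applied by direct convolution, and filtering a weighted sum of two inputs is compared against the weighted sum of the two filtered outputs:

```python
def fir_filter(x, h):
    """Convolve input sequence x with kernel h (inputs outside x treated as zero)."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, hk in enumerate(h):
            if 0 <= n - k < len(x):
                acc += hk * x[n - k]
        y.append(acc)
    return y

h = [0.25, 0.5, 0.25]            # illustrative smoothing kernel
x1 = [1, 2, 3, 4, 3, 2, 1]
x2 = [0, 1, 0, -1, 0, 1, 0]
a, b = 2.0, -3.0

# Superposition: filtering a*x1 + b*x2 equals a*y1 + b*y2.
combined = fir_filter([a * u + b * v for u, v in zip(x1, x2)], h)
separate = [a * u + b * v
            for u, v in zip(fir_filter(x1, h), fir_filter(x2, h))]
assert all(abs(p - q) < 1e-9 for p, q in zip(combined, separate))
```

The two result sequences agree (up to floating-point rounding), as superposition requires; a nonlinear filter would fail this check.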

In addition, two further conditions are commonly imposed: causality and stability. The causal condition is needed when considering systems in which future values are not known (for example, in video streaming). Causality can be relaxed when working with captured images, where samples before and after the target location are available (for example, in a buffered image frame). Stability requires that a filter’s output remain within a finite limit whenever its input does. This is called the Bounded-Input, Bounded-Output (BIBO) condition.
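The causality distinction can be made concrete with a small sketch (the kernel is an illustrative 3-tap average, not from the article). A centered average needs the future sample x[n+1], so it requires buffered data; shifting the same kernel by one sample yields a causal version whose output is simply delayed:

```python
def centered_average(x, n):
    """Non-causal: uses the future sample x[n+1], so it needs buffered data."""
    return (x[n - 1] + x[n] + x[n + 1]) / 3.0

def causal_average(x, n):
    """Causal: the same kernel shifted by one sample (output delayed by 1)."""
    return (x[n - 2] + x[n - 1] + x[n]) / 3.0

x = [1.0, 2.0, 4.0, 8.0, 16.0]
# The causal filter produces the same values, one sample later.
assert causal_average(x, 3) == centered_average(x, 2)
```

This delay trick is why non-causal filtering poses no problem for stored frames but matters for streaming, where the added latency may be unacceptable.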

For most cases, a system is evaluated in the spatial frequency domain. To accomplish this, the convolution theorem is used, providing the necessary tools to evaluate frequency domain information.

If x[n] and h[n] are two sequences, their convolution is defined as shown in Equation 1.

y[n] = x[n] * h[n] = Σk x[k] h[n − k], summed over all integers k
Equation 1

The corresponding frequency response is shown in Equation 2.

Y(e^iω) = X(e^iω) H(e^iω)
Equation 2

In Equation 2, e^iω is the frequency-domain argument and ω is the frequency variable, ranging from −π to π. This fundamental relationship describes the response of a filter in terms of frequency – low pass, high pass, band pass, and so on. Depending on the nature of the filter kernel h[n], a wide variety of responses can be realized for any image data set.
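The frequency response of a kernel can be evaluated directly from its definition, H(e^iω) = Σk h[k] e^(−iωk). A sketch in pure Python (the 3-tap kernel is illustrative) shows the low-pass character of a simple smoothing kernel: unity gain at DC and a null at the highest discrete frequency ω = π:

```python
import cmath
import math

def freq_response(h, w):
    """Evaluate H(e^iw) = sum_k h[k] * e^(-i*w*k) at frequency w."""
    return sum(hk * cmath.exp(-1j * w * k) for k, hk in enumerate(h))

h = [0.25, 0.5, 0.25]                  # illustrative low-pass kernel
print(abs(freq_response(h, 0.0)))      # DC gain: 1.0
print(abs(freq_response(h, math.pi)))  # gain at w = pi: approximately 0
```

Sampling |H(e^iω)| over a grid of ω values in the same way traces out the full magnitude response, such as the 25-tap low-pass example discussed next.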

A typical low-pass filter with 25 taps (h[0..24]) is shown in Figure 1. The idea of a low-pass filter is to preserve low-frequency information and reduce or eliminate high-frequency information in an image. It blurs edges but keeps smooth areas of an image intact. In a similar manner, high-pass filters preserve edges and other high-frequency information but filter low-frequency regions of an image.

Figure 1: A typical low-pass filter maintains low-frequency elements and reduces or removes high-frequency elements in an image.

Nonlinear filters

Nonlinear filters behave quite differently from linear filters. Their output does not obey the superposition principle outlined earlier: filtering a weighted sum of inputs is not, in general, the same as the weighted sum of the filtered inputs. As a result, a nonlinear filter can produce results that vary in a non-intuitive manner.

The simplest nonlinear filter to consider is the median, or rank-order, filter. The output of a median filter depends on the ordering of the input values, usually ranked from smallest to largest. A support window with an odd number of samples is used so that a unique middle value exists, making it easy to select the output.

For example, suppose a filter operates on five values. In the region of interest, x0..x4, the values are ordered from smallest to largest and the middle value (position 2 in the sorted list) is selected as the output. Consider the low-frequency case, where all the values are the same or nearly so: the value selected will be the original value ± some small error. In the high-frequency case, such as an edge, the values on one side of the edge will be low and the values on the other side will be high. After ordering, the low values remain in the low positions and the high values in the high positions, so the selected middle value will be either a low value or a high value – not an intermediate value, as would be produced by a linear low-pass filter. The median filter is sometimes called an edge-preserving filter due to this property, and it is useful for removing outliers such as impulse noise.
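The contrast described above can be seen directly in a sketch (pure Python; the window size and pixel values are illustrative, and border samples are simply skipped for clarity). A 5-tap averaging filter smears an ideal edge across several samples, while a 5-point median filter jumps cleanly from the low level to the high level:

```python
import statistics

def sliding(x, size, fn):
    """Apply fn to each full-size window of x, skipping the borders."""
    half = size // 2
    return [fn(x[n - half:n + half + 1]) for n in range(half, len(x) - half)]

edge = [10, 10, 10, 10, 90, 90, 90, 90]   # ideal step edge

mean_out = sliding(edge, 5, lambda w: sum(w) / len(w))
median_out = sliding(edge, 5, statistics.median)

print(mean_out)    # [26.0, 42.0, 58.0, 74.0] – edge smeared over 4 samples
print(median_out)  # [10, 10, 90, 90] – edge preserved as a single step
```

Every median output is an actual input value from one side of the edge, which is exactly the edge-preserving behavior described above.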

Selecting the right filter

Both filter types have their place in image processing functions. In a typical pipeline for real-time image processing, it is not uncommon to have dozens of both types included to form, shape, detect, and manipulate image information. Moreover, each of these filter types can be parameterized to work one way under certain circumstances and another way under a different set of circumstances using adaptive filter rule generation.

Filtering image data is a standard process used in almost all image processing systems. The goals vary from noise removal to feature abstraction. Linear and nonlinear filters are the two most utilized forms of filter construction. Knowing which type of filter to select depends on the goals and nature of the image data. In cases where the input data contains a large amount of noise but the magnitude is low, a linear low-pass filter may suffice. Conversely, if an image contains a low amount of noise but with relatively high magnitude, then a median filter may be more appropriate. In either case, the filter process changes the overall frequency content of the image.
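The low-noise/high-magnitude case can be sketched with a single impulse ("salt" pixel) in an otherwise flat signal (pure Python; the window size and values are illustrative). A 3-tap average spreads the impulse into its neighbors, while a 3-point median removes it entirely:

```python
import statistics

def sliding(x, size, fn):
    """Apply fn to each full-size window of x, skipping the borders."""
    half = size // 2
    return [fn(x[n - half:n + half + 1]) for n in range(half, len(x) - half)]

signal = [50, 50, 50, 255, 50, 50, 50]   # flat region with one impulse

avg_out = sliding(signal, 3, lambda w: sum(w) / len(w))
med_out = sliding(signal, 3, statistics.median)

print(avg_out)  # impulse energy spread into three neighboring samples
print(med_out)  # [50, 50, 50, 50, 50] – impulse removed, flat region intact
```

This is why the median filter is the usual choice against impulse noise, whereas a linear low-pass filter is better suited to dense, low-magnitude noise.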

Allen Rush is CTO of Nethra Imaging.

Nethra Imaging info@nethra.us.com www.nethra.us.com
