
Image Segmentation


See how to execute a Segmentation in SPRING.

Statistical classification is the most widely used conventional procedure for digital image analysis. It analyzes each pixel in isolation.

This per-pixel analysis is limited because it relies solely on spectral attributes. To overcome this limitation, image segmentation can be applied before the classification step, so that the objects relevant to the desired application are extracted first.

In this process, the image is divided into regions that correspond to the areas of interest of the application. A region is understood as a set of connected pixels that spreads in two dimensions and presents uniformity.

The division into regions is based essentially on a region growing process, on boundary detection, or on bay (basin) detection. These approaches are described below.

Region Growing and Multi Region Growing

Region growing is a data clustering technique in which only spatially adjacent regions can be merged.

Initially, the segmentation process labels each pixel as a distinct region. A similarity criterion is then computed for each pair of spatially adjacent regions. The similarity criterion is based on a statistical hypothesis test that compares the means of the regions. Next, the image is divided into a set of sub-images, and a merging operation is performed according to a defined aggregation threshold.

For the merging of two neighboring regions A and B, the following criteria are adopted:

  • A and B are similar (mean test);
  • the similarity satisfies the defined threshold;
  • A and B are mutually closest (among the neighbors of A, B is the closest, and among the neighbors of B, A is the closest).

If A and B satisfy the criteria above, the regions are merged; otherwise, the system repeats the aggregation test for other candidate pairs.
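The sketch below illustrates one way such a merging step could be implemented. The region data structure, the simple difference-of-means similarity test, and the choice of closest neighbor by mean difference are illustrative assumptions for this example, not SPRING's actual implementation.

    def means_are_similar(region_a, region_b, threshold):
        # Stand-in for the statistical hypothesis test on the means: the
        # regions are considered similar when their means differ by less
        # than the aggregation threshold.
        return abs(region_a["mean"] - region_b["mean"]) < threshold

    def closest_neighbor(region_id, regions):
        # Among the neighbors of a region, pick the one whose mean is closest.
        region = regions[region_id]
        return min(region["neighbors"],
                   key=lambda rid: abs(region["mean"] - regions[rid]["mean"]))

    def try_merge(a_id, b_id, regions, threshold):
        # Merge B into A only if they are similar and mutually closest neighbors.
        a, b = regions[a_id], regions[b_id]
        if not means_are_similar(a, b, threshold):
            return False
        if closest_neighbor(a_id, regions) != b_id or closest_neighbor(b_id, regions) != a_id:
            return False
        total = a["mean"] * a["size"] + b["mean"] * b["size"]
        a["size"] += b["size"]
        a["mean"] = total / a["size"]
        a["neighbors"] = (a["neighbors"] | b["neighbors"]) - {a_id, b_id}
        for rid in b["neighbors"] - {a_id}:
            regions[rid]["neighbors"].discard(b_id)
            regions[rid]["neighbors"].add(a_id)
        del regions[b_id]
        return True

    # Toy example: regions 0 and 1 are similar and mutually closest, so they merge.
    regions = {
        0: {"mean": 10.0, "size": 4, "neighbors": {1, 2}},
        1: {"mean": 11.0, "size": 4, "neighbors": {0, 2}},
        2: {"mean": 50.0, "size": 4, "neighbors": {0, 1}},
    }
    print(try_merge(0, 1, regions, threshold=5.0))  # True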

If the Multi Region Growing method is chosen, the algorithm uses multiprogramming, multithreading and multicore techniques according to the capabilities of your computer. These operations can be managed by clicking the Advanced Operations button in the Segmentation interface.

Bay Detection

The bay (basin) detection classification is performed on the image that results from the boundary extraction step.

Boundary extraction is performed by an edge detection algorithm, namely the Sobel filter. This algorithm uses the gray-level gradients of the original image to generate a gradient image, also called a boundary intensity image.
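As a concrete illustration, the sketch below computes a gradient (boundary intensity) image with the Sobel operator. The use of SciPy here is an assumption for the example; it is not how SPRING itself implements the filter.

    import numpy as np
    from scipy import ndimage

    def sobel_gradient(image):
        # Combine horizontal and vertical Sobel responses into a single
        # gradient-magnitude (boundary intensity) image.
        img = image.astype(float)
        gx = ndimage.sobel(img, axis=1)  # horizontal gradient
        gy = ndimage.sobel(img, axis=0)  # vertical gradient
        return np.hypot(gx, gy)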

The algorithm computes a threshold for boundary following. When a pixel with a value above the established upper threshold is found, the following process starts: the neighbors are examined, the next pixel is chosen as the one with the largest digital value, and that direction is followed until another boundary is reached. This process generates a binary image in which pixels with value 1 correspond to boundaries and pixels with value 0 correspond to non-boundary regions.

The binary image is then labeled so that the regions with value 0, which are delimited by the value-1 boundaries, each receive a distinct label, producing a labeled image.
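The sketch below shows how a binary boundary image could be derived from the gradient image and how the enclosed 0-valued regions could then be labeled. The threshold value and the use of scipy.ndimage.label for connected-component labeling are assumptions made for this illustration.

    import numpy as np
    from scipy import ndimage

    def label_regions(gradient, upper_limit):
        # Pixels whose gradient exceeds the upper limit become boundary pixels (1);
        # the remaining 0-valued areas, delimited by boundaries, are then labeled.
        boundaries = (gradient > upper_limit).astype(np.uint8)
        labels, count = ndimage.label(boundaries == 0)
        return boundaries, labels, count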

The segmentation procedure by bay detection assumes a topographic representation of the image: for a given gradient image, the digital level of each pixel represents an elevation at that point. The image then corresponds to a topographic surface, a relief with basins (bays) of different depths.

Region growing corresponds to immersing this topographic surface in a lake. An initial height (digital level) is defined for filling the basins (the limit). The water progressively fills the different basins of the image up to the defined topographic limit (digital level value). When the limit is reached, a barrier between two regions is defined. The filling process then continues from another topographic point, defining a new barrier, and so on until all barriers have been defined.

The final result is a labeled image, in which each region has a label (digital level value), and which must then be classified using region classifiers.
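A minimal end-to-end sketch of this basin-filling idea is given below, using scikit-image's watershed transform as a stand-in for SPRING's own algorithm. The library choice, the marker thresholds, and the two-marker setup are assumptions of this example.

    import numpy as np
    from skimage.filters import sobel
    from skimage.segmentation import watershed

    def watershed_labels(image, low, high):
        # The gradient image plays the role of the topographic surface.
        gradient = sobel(image.astype(float))
        # Markers seed the basins to be flooded; here simple gray-level
        # thresholds (an assumption of this sketch) mark dark and bright areas.
        markers = np.zeros(image.shape, dtype=np.int32)
        markers[image < low] = 1
        markers[image > high] = 2
        # Flooding grows the markers over the gradient surface until the
        # barriers between regions are met, producing a labeled image.
        return watershed(gradient, markers)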

Region Growing Baatz

The Multiresolution Segmentation method is a region-growing algorithm proposed by Baatz and Schäpe (2000).

Segmentation begins with each pixel of the image as a seed representing an object. At each iteration, each object is merged with a neighboring object: the neighbor chosen is the one for which the merged object shows the smallest increase in heterogeneity relative to the sum of the internal heterogeneity measures of the two candidate objects.

The merge, however, only happens if the increase in heterogeneity is smaller than a given threshold. The heterogeneity measure has a spectral component and a morphological (shape) component.

The spectral component is computed from the values of the pixels that compose the object and is proportional to the standard deviation of those values, weighted by user-defined weights assigned to each spectral band of the image.

The morphological component is defined by the relative deviation of the object's shape from a compact shape and from a smooth shape, combined through user-defined weights. Compactness is defined as the ratio of the object's edge length to the square root of its area (its size in pixels), and smoothness as the ratio of the object's edge length to the edge length of its bounding rectangle.
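The sketch below illustrates how these shape measures could be computed for a candidate object, following the definitions above. The function names, the way the edge length and bounding-box edge length are obtained, and the single compactness weight are assumptions for the example.

    import math

    def compactness(edge_length, area_in_pixels):
        # Edge length over the square root of the object's area (in pixels).
        return edge_length / math.sqrt(area_in_pixels)

    def smoothness(edge_length, bbox_edge_length):
        # Edge length over the edge length of the object's bounding rectangle.
        return edge_length / bbox_edge_length

    def shape_heterogeneity(edge_length, area, bbox_edge_length, w_compact):
        # Weighted mix of the two shape measures (weight is user-defined).
        return (w_compact * compactness(edge_length, area)
                + (1.0 - w_compact) * smoothness(edge_length, bbox_edge_length))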

To allow distributed growth of objects, each object is selected for fusion only once at each iteration, and this selection is performed in a spatially distributed fashion.

The increase in heterogeneity resulting from the fusion of two objects, which is what guides the algorithm through the iterations, is called the fusion factor. When an object is selected, the fusion factor is computed for each of its neighbors, and the neighbor for which this factor is minimal is selected for the merge. The merge occurs, however, only if the fusion factor is smaller than a certain threshold, defined as the square of the scale parameter, one of the algorithm's parameters. Segmentation ends when no further merge can be performed. The scale parameter is an abstract value that determines the maximum allowed increase in heterogeneity resulting from the fusion of two objects, and it indirectly influences the average size of the final objects produced by the segmentation (Brodský, 2006).
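A compact sketch of this merge decision follows, assuming per-object heterogeneity values are already available. The function names and the plain (non-size-weighted) difference used for the fusion factor are simplifying assumptions, not the exact Baatz and Schäpe formulation.

    def fusion_factor(h_merged, h_a, h_b):
        # Increase in heterogeneity caused by merging objects a and b
        # (simplified: plain difference of heterogeneity measures).
        return h_merged - (h_a + h_b)

    def should_merge(factor, scale_parameter):
        # The merge is accepted only while the fusion factor stays below
        # the square of the scale parameter.
        return factor < scale_parameter ** 2

    def best_neighbor(obj, neighbors, h_of, h_if_merged):
        # Pick the neighbor whose merge yields the smallest fusion factor.
        return min(neighbors,
                   key=lambda n: fusion_factor(h_if_merged(obj, n), h_of(obj), h_of(n)))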

See also:
Other Image Processing techniques.
How to execute a Segmentation.